00:00:00.001 Started by upstream project "autotest-spdk-master-vs-dpdk-v22.11" build number 2454 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3719 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.116 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.117 The recommended git tool is: git 00:00:00.117 using credential 00000000-0000-0000-0000-000000000002 00:00:00.119 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.171 Fetching changes from the remote Git repository 00:00:00.173 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.221 Using shallow fetch with depth 1 00:00:00.221 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.221 > git --version # timeout=10 00:00:00.252 > git --version # 'git version 2.39.2' 00:00:00.252 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.272 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.272 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:07.318 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:07.333 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:07.345 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:07.345 > git config core.sparsecheckout # timeout=10 00:00:07.356 > git read-tree -mu HEAD # timeout=10 00:00:07.371 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:07.399 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:07.399 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:07.478 [Pipeline] Start of Pipeline 00:00:07.490 [Pipeline] library 00:00:07.491 Loading library shm_lib@master 00:00:07.491 Library shm_lib@master is cached. Copying from home. 00:00:07.506 [Pipeline] node 00:00:07.530 Running on WFP4 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:07.532 [Pipeline] { 00:00:07.543 [Pipeline] catchError 00:00:07.544 [Pipeline] { 00:00:07.560 [Pipeline] wrap 00:00:07.571 [Pipeline] { 00:00:07.581 [Pipeline] stage 00:00:07.583 [Pipeline] { (Prologue) 00:00:07.806 [Pipeline] sh 00:00:08.648 + logger -p user.info -t JENKINS-CI 00:00:08.683 [Pipeline] echo 00:00:08.686 Node: WFP4 00:00:08.695 [Pipeline] sh 00:00:09.033 [Pipeline] setCustomBuildProperty 00:00:09.045 [Pipeline] echo 00:00:09.046 Cleanup processes 00:00:09.052 [Pipeline] sh 00:00:09.343 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:09.343 6579 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:09.355 [Pipeline] sh 00:00:09.683 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:09.683 ++ grep -v 'sudo pgrep' 00:00:09.683 ++ awk '{print $1}' 00:00:09.683 + sudo kill -9 00:00:09.683 + true 00:00:09.697 [Pipeline] cleanWs 00:00:09.705 [WS-CLEANUP] Deleting project workspace... 00:00:09.705 [WS-CLEANUP] Deferred wipeout is used... 
00:00:09.717 [WS-CLEANUP] done 00:00:09.721 [Pipeline] setCustomBuildProperty 00:00:09.734 [Pipeline] sh 00:00:10.019 + sudo git config --global --replace-all safe.directory '*' 00:00:10.118 [Pipeline] httpRequest 00:00:12.310 [Pipeline] echo 00:00:12.312 Sorcerer 10.211.164.20 is alive 00:00:12.322 [Pipeline] retry 00:00:12.325 [Pipeline] { 00:00:12.339 [Pipeline] httpRequest 00:00:12.344 HttpMethod: GET 00:00:12.344 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:12.346 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:12.378 Response Code: HTTP/1.1 200 OK 00:00:12.379 Success: Status code 200 is in the accepted range: 200,404 00:00:12.379 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:23.893 [Pipeline] } 00:00:23.910 [Pipeline] // retry 00:00:23.918 [Pipeline] sh 00:00:24.211 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:24.229 [Pipeline] httpRequest 00:00:24.615 [Pipeline] echo 00:00:24.616 Sorcerer 10.211.164.20 is alive 00:00:24.626 [Pipeline] retry 00:00:24.628 [Pipeline] { 00:00:24.642 [Pipeline] httpRequest 00:00:24.647 HttpMethod: GET 00:00:24.647 URL: http://10.211.164.20/packages/spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz 00:00:24.648 Sending request to url: http://10.211.164.20/packages/spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz 00:00:24.674 Response Code: HTTP/1.1 200 OK 00:00:24.674 Success: Status code 200 is in the accepted range: 200,404 00:00:24.675 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz 00:01:20.610 [Pipeline] } 00:01:20.625 [Pipeline] // retry 00:01:20.637 [Pipeline] sh 00:01:20.948 + tar --no-same-owner -xf spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz 00:01:23.504 [Pipeline] sh 00:01:23.792 + git -C spdk log --oneline -n5 00:01:23.792 e01cb43b8 mk/spdk.common.mk sed the minor version 00:01:23.792 d58eef2a2 nvme/rdma: Fix reinserting qpair in connecting list after stale state 00:01:23.792 2104eacf0 test/check_so_deps: use VERSION to look for prior tags 00:01:23.792 66289a6db build: use VERSION file for storing version 00:01:23.792 626389917 nvme/rdma: Don't limit max_sge if UMR is used 00:01:23.812 [Pipeline] withCredentials 00:01:23.823 > git --version # timeout=10 00:01:23.836 > git --version # 'git version 2.39.2' 00:01:23.860 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:01:23.862 [Pipeline] { 00:01:23.872 [Pipeline] retry 00:01:23.874 [Pipeline] { 00:01:23.889 [Pipeline] sh 00:01:24.401 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4 00:01:24.675 [Pipeline] } 00:01:24.693 [Pipeline] // retry 00:01:24.698 [Pipeline] } 00:01:24.713 [Pipeline] // withCredentials 00:01:24.723 [Pipeline] httpRequest 00:01:25.137 [Pipeline] echo 00:01:25.138 Sorcerer 10.211.164.20 is alive 00:01:25.147 [Pipeline] retry 00:01:25.149 [Pipeline] { 00:01:25.169 [Pipeline] httpRequest 00:01:25.180 HttpMethod: GET 00:01:25.184 URL: http://10.211.164.20/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:25.186 Sending request to url: http://10.211.164.20/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:25.193 Response Code: HTTP/1.1 200 OK 00:01:25.194 Success: Status code 200 is in the accepted range: 200,404 00:01:25.198 Saving response body to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:31.744 [Pipeline] } 00:01:31.757 [Pipeline] // retry 00:01:31.762 [Pipeline] sh 00:01:32.048 + tar --no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:33.440 [Pipeline] sh 00:01:33.726 + git -C dpdk log --oneline -n5 00:01:33.726 caf0f5d395 version: 22.11.4 00:01:33.726 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:01:33.726 dc9c799c7d vhost: fix missing spinlock unlock 00:01:33.726 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:01:33.726 6ef77f2a5e net/gve: fix RX buffer size alignment 00:01:33.736 [Pipeline] } 00:01:33.749 [Pipeline] // stage 00:01:33.757 [Pipeline] stage 00:01:33.758 [Pipeline] { (Prepare) 00:01:33.776 [Pipeline] writeFile 00:01:33.791 [Pipeline] sh 00:01:34.077 + logger -p user.info -t JENKINS-CI 00:01:34.090 [Pipeline] sh 00:01:34.374 + logger -p user.info -t JENKINS-CI 00:01:34.385 [Pipeline] sh 00:01:34.669 + cat autorun-spdk.conf 00:01:34.669 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:34.669 SPDK_TEST_NVMF=1 00:01:34.669 SPDK_TEST_NVME_CLI=1 00:01:34.669 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:34.669 SPDK_TEST_NVMF_NICS=e810 00:01:34.669 SPDK_TEST_VFIOUSER=1 00:01:34.669 SPDK_RUN_UBSAN=1 00:01:34.669 NET_TYPE=phy 00:01:34.669 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:34.669 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:34.677 RUN_NIGHTLY=1 00:01:34.681 [Pipeline] readFile 00:01:34.712 [Pipeline] withEnv 00:01:34.714 [Pipeline] { 00:01:34.725 [Pipeline] sh 00:01:35.013 + set -ex 00:01:35.013 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:35.013 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:35.013 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:35.013 ++ SPDK_TEST_NVMF=1 00:01:35.013 ++ SPDK_TEST_NVME_CLI=1 00:01:35.013 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:35.013 ++ SPDK_TEST_NVMF_NICS=e810 00:01:35.013 ++ SPDK_TEST_VFIOUSER=1 00:01:35.013 ++ SPDK_RUN_UBSAN=1 00:01:35.013 ++ NET_TYPE=phy 00:01:35.013 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:35.013 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:35.013 ++ RUN_NIGHTLY=1 00:01:35.013 + case $SPDK_TEST_NVMF_NICS in 00:01:35.013 + DRIVERS=ice 00:01:35.013 + [[ tcp == \r\d\m\a ]] 00:01:35.013 + [[ -n ice ]] 00:01:35.013 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:35.013 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:35.013 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:01:35.013 rmmod: ERROR: Module i40iw is not currently loaded 00:01:35.013 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:35.013 + true 00:01:35.013 + for D in $DRIVERS 00:01:35.013 + sudo modprobe ice 00:01:35.013 + exit 0 00:01:35.022 [Pipeline] } 00:01:35.036 [Pipeline] // withEnv 00:01:35.040 [Pipeline] } 00:01:35.053 [Pipeline] // stage 00:01:35.062 [Pipeline] catchError 00:01:35.064 [Pipeline] { 00:01:35.077 [Pipeline] timeout 00:01:35.077 Timeout set to expire in 1 hr 0 min 00:01:35.078 [Pipeline] { 00:01:35.090 [Pipeline] stage 00:01:35.091 [Pipeline] { (Tests) 00:01:35.104 [Pipeline] sh 00:01:35.393 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:35.393 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:35.393 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:35.393 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:35.393 + 
DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:35.393 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:35.393 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:35.393 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:35.393 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:35.393 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:35.393 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:01:35.393 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:35.393 + source /etc/os-release 00:01:35.393 ++ NAME='Fedora Linux' 00:01:35.393 ++ VERSION='39 (Cloud Edition)' 00:01:35.393 ++ ID=fedora 00:01:35.393 ++ VERSION_ID=39 00:01:35.393 ++ VERSION_CODENAME= 00:01:35.393 ++ PLATFORM_ID=platform:f39 00:01:35.393 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:35.393 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:35.393 ++ LOGO=fedora-logo-icon 00:01:35.393 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:35.393 ++ HOME_URL=https://fedoraproject.org/ 00:01:35.393 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:35.393 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:35.394 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:35.394 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:35.394 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:35.394 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:35.394 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:35.394 ++ SUPPORT_END=2024-11-12 00:01:35.394 ++ VARIANT='Cloud Edition' 00:01:35.394 ++ VARIANT_ID=cloud 00:01:35.394 + uname -a 00:01:35.394 Linux spdk-wfp-04 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 05:41:37 UTC 2024 x86_64 GNU/Linux 00:01:35.394 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:37.937 Hugepages 00:01:37.937 node hugesize free / total 00:01:37.937 node0 1048576kB 0 / 0 00:01:37.937 node0 2048kB 0 / 0 00:01:37.937 node1 1048576kB 0 / 0 00:01:37.937 node1 2048kB 0 / 0 00:01:37.937 00:01:37.937 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:37.937 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:01:37.937 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:01:37.937 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:01:37.937 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:01:37.938 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:01:37.938 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:01:37.938 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:01:37.938 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:01:37.938 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:01:37.938 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:01:37.938 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:01:37.938 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:01:37.938 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:01:37.938 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:01:37.938 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:01:37.938 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:01:37.938 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:01:37.938 + rm -f /tmp/spdk-ld-path 00:01:37.938 + source autorun-spdk.conf 00:01:37.938 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:37.938 ++ SPDK_TEST_NVMF=1 00:01:37.938 ++ SPDK_TEST_NVME_CLI=1 00:01:37.938 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:37.938 ++ SPDK_TEST_NVMF_NICS=e810 00:01:37.938 ++ SPDK_TEST_VFIOUSER=1 00:01:37.938 ++ SPDK_RUN_UBSAN=1 00:01:37.938 ++ NET_TYPE=phy 00:01:37.938 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:37.938 ++ 
SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:37.938 ++ RUN_NIGHTLY=1 00:01:37.938 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:37.938 + [[ -n '' ]] 00:01:37.938 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:37.938 + for M in /var/spdk/build-*-manifest.txt 00:01:37.938 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:37.938 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:37.938 + for M in /var/spdk/build-*-manifest.txt 00:01:37.938 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:37.938 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:37.938 + for M in /var/spdk/build-*-manifest.txt 00:01:37.938 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:37.938 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:37.938 ++ uname 00:01:37.938 + [[ Linux == \L\i\n\u\x ]] 00:01:37.938 + sudo dmesg -T 00:01:37.938 + sudo dmesg --clear 00:01:37.938 + dmesg_pid=7539 00:01:37.938 + [[ Fedora Linux == FreeBSD ]] 00:01:37.938 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:37.938 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:37.938 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:37.938 + sudo dmesg -Tw 00:01:37.938 + [[ -x /usr/src/fio-static/fio ]] 00:01:37.938 + export FIO_BIN=/usr/src/fio-static/fio 00:01:37.938 + FIO_BIN=/usr/src/fio-static/fio 00:01:37.938 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:37.938 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:37.938 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:37.938 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:37.938 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:37.938 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:37.938 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:37.938 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:37.938 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:37.938 05:17:37 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:01:37.938 05:17:37 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:37.938 05:17:37 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:37.938 05:17:37 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:01:37.938 05:17:37 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1 00:01:37.938 05:17:37 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:37.938 05:17:37 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810 00:01:37.938 05:17:37 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1 00:01:37.938 05:17:37 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1 00:01:37.938 05:17:37 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy 00:01:37.938 05:17:37 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:37.938 05:17:37 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@10 -- $ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:37.938 05:17:37 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@11 -- $ RUN_NIGHTLY=1 00:01:37.938 05:17:37 
-- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:01:37.938 05:17:37 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:38.198 05:17:37 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:01:38.198 05:17:37 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:38.198 05:17:37 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:38.198 05:17:37 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:38.198 05:17:37 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:38.198 05:17:37 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:38.198 05:17:37 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:38.198 05:17:37 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:38.198 05:17:37 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:38.198 05:17:37 -- paths/export.sh@5 -- $ export PATH 00:01:38.198 05:17:37 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:38.198 05:17:37 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:38.199 05:17:38 -- common/autobuild_common.sh@493 -- $ date +%s 00:01:38.199 05:17:38 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1734063458.XXXXXX 00:01:38.199 05:17:38 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1734063458.VXd6uM 00:01:38.199 05:17:38 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:01:38.199 05:17:38 -- common/autobuild_common.sh@499 -- $ '[' -n v22.11.4 ']' 00:01:38.199 05:17:38 -- common/autobuild_common.sh@500 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:38.199 05:17:38 -- common/autobuild_common.sh@500 -- $ scanbuild_exclude=' 
--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:01:38.199 05:17:38 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:38.199 05:17:38 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:38.199 05:17:38 -- common/autobuild_common.sh@509 -- $ get_config_params 00:01:38.199 05:17:38 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:01:38.199 05:17:38 -- common/autotest_common.sh@10 -- $ set +x 00:01:38.199 05:17:38 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:01:38.199 05:17:38 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:01:38.199 05:17:38 -- pm/common@17 -- $ local monitor 00:01:38.199 05:17:38 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:38.199 05:17:38 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:38.199 05:17:38 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:38.199 05:17:38 -- pm/common@21 -- $ date +%s 00:01:38.199 05:17:38 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:38.199 05:17:38 -- pm/common@21 -- $ date +%s 00:01:38.199 05:17:38 -- pm/common@25 -- $ sleep 1 00:01:38.199 05:17:38 -- pm/common@21 -- $ date +%s 00:01:38.199 05:17:38 -- pm/common@21 -- $ date +%s 00:01:38.199 05:17:38 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1734063458 00:01:38.199 05:17:38 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1734063458 00:01:38.199 05:17:38 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1734063458 00:01:38.199 05:17:38 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1734063458 00:01:38.199 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1734063458_collect-cpu-temp.pm.log 00:01:38.199 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1734063458_collect-cpu-load.pm.log 00:01:38.199 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1734063458_collect-vmstat.pm.log 00:01:38.199 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1734063458_collect-bmc-pm.bmc.pm.log 00:01:39.138 05:17:39 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:01:39.138 05:17:39 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:39.138 05:17:39 
-- spdk/autobuild.sh@12 -- $ umask 022 00:01:39.138 05:17:39 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:39.138 05:17:39 -- spdk/autobuild.sh@16 -- $ date -u 00:01:39.138 Fri Dec 13 04:17:39 AM UTC 2024 00:01:39.138 05:17:39 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:39.138 v25.01-rc1-2-ge01cb43b8 00:01:39.138 05:17:39 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:39.138 05:17:39 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:39.138 05:17:39 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:39.138 05:17:39 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:39.138 05:17:39 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:39.138 05:17:39 -- common/autotest_common.sh@10 -- $ set +x 00:01:39.138 ************************************ 00:01:39.138 START TEST ubsan 00:01:39.138 ************************************ 00:01:39.138 05:17:39 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:01:39.138 using ubsan 00:01:39.138 00:01:39.138 real 0m0.000s 00:01:39.138 user 0m0.000s 00:01:39.138 sys 0m0.000s 00:01:39.138 05:17:39 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:01:39.138 05:17:39 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:39.138 ************************************ 00:01:39.138 END TEST ubsan 00:01:39.138 ************************************ 00:01:39.138 05:17:39 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']' 00:01:39.138 05:17:39 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:01:39.138 05:17:39 -- common/autobuild_common.sh@449 -- $ run_test build_native_dpdk _build_native_dpdk 00:01:39.138 05:17:39 -- common/autotest_common.sh@1105 -- $ '[' 2 -le 1 ']' 00:01:39.138 05:17:39 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:39.138 05:17:39 -- common/autotest_common.sh@10 -- $ set +x 00:01:39.399 ************************************ 00:01:39.399 START TEST build_native_dpdk 00:01:39.399 ************************************ 00:01:39.399 05:17:39 build_native_dpdk -- common/autotest_common.sh@1129 -- $ _build_native_dpdk 00:01:39.399 05:17:39 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:01:39.399 05:17:39 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:01:39.399 05:17:39 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:01:39.399 05:17:39 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:01:39.399 05:17:39 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:01:39.399 05:17:39 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:01:39.399 05:17:39 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:01:39.399 05:17:39 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:01:39.399 05:17:39 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:01:39.399 05:17:39 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:01:39.399 05:17:39 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:01:39.399 05:17:39 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:01:39.399 05:17:39 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:01:39.399 05:17:39 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:01:39.399 05:17:39 build_native_dpdk -- common/autobuild_common.sh@70 -- $ 
external_dpdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:39.399 05:17:39 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:39.399 05:17:39 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:39.399 05:17:39 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk ]] 00:01:39.399 05:17:39 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:39.399 05:17:39 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk log --oneline -n 5 00:01:39.399 caf0f5d395 version: 22.11.4 00:01:39.399 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:01:39.399 dc9c799c7d vhost: fix missing spinlock unlock 00:01:39.399 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:01:39.399 6ef77f2a5e net/gve: fix RX buffer size alignment 00:01:39.399 05:17:39 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:01:39.399 05:17:39 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:01:39.399 05:17:39 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4 00:01:39.399 05:17:39 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:01:39.399 05:17:39 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:01:39.399 05:17:39 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:01:39.399 05:17:39 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:01:39.399 05:17:39 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:01:39.399 05:17:39 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:01:39.399 05:17:39 build_native_dpdk -- common/autobuild_common.sh@102 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base" "power/acpi" "power/amd_pstate" "power/cppc" "power/intel_pstate" "power/intel_uncore" "power/kvm_vm") 00:01:39.399 05:17:39 build_native_dpdk -- common/autobuild_common.sh@103 -- $ local mlx5_libs_added=n 00:01:39.399 05:17:39 build_native_dpdk -- common/autobuild_common.sh@104 -- $ [[ 0 -eq 1 ]] 00:01:39.399 05:17:39 build_native_dpdk -- common/autobuild_common.sh@104 -- $ [[ 0 -eq 1 ]] 00:01:39.399 05:17:39 build_native_dpdk -- common/autobuild_common.sh@146 -- $ [[ 0 -eq 1 ]] 00:01:39.399 05:17:39 build_native_dpdk -- common/autobuild_common.sh@174 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:39.399 05:17:39 build_native_dpdk -- common/autobuild_common.sh@175 -- $ uname -s 00:01:39.399 05:17:39 build_native_dpdk -- common/autobuild_common.sh@175 -- $ '[' Linux = Linux ']' 00:01:39.399 05:17:39 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 22.11.4 21.11.0 00:01:39.399 05:17:39 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 22.11.4 '<' 21.11.0 00:01:39.399 05:17:39 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:01:39.399 05:17:39 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:01:39.399 05:17:39 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:01:39.399 05:17:39 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:01:39.399 05:17:39 build_native_dpdk -- 
scripts/common.sh@337 -- $ IFS=.-: 00:01:39.399 05:17:39 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:01:39.399 05:17:39 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:01:39.399 05:17:39 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:01:39.399 05:17:39 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:01:39.399 05:17:39 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:01:39.399 05:17:39 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:01:39.399 05:17:39 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:01:39.399 05:17:39 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:01:39.399 05:17:39 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:01:39.399 05:17:39 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22 00:01:39.399 05:17:39 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22 00:01:39.399 05:17:39 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:01:39.399 05:17:39 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22 00:01:39.399 05:17:39 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22 00:01:39.399 05:17:39 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21 00:01:39.399 05:17:39 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21 00:01:39.399 05:17:39 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:01:39.399 05:17:39 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21 00:01:39.399 05:17:39 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21 00:01:39.399 05:17:39 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:01:39.399 05:17:39 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:01:39.399 05:17:39 build_native_dpdk -- common/autobuild_common.sh@180 -- $ patch -p1 00:01:39.399 patching file config/rte_config.h 00:01:39.399 Hunk #1 succeeded at 60 (offset 1 line). 00:01:39.399 05:17:39 build_native_dpdk -- common/autobuild_common.sh@183 -- $ lt 22.11.4 24.07.0 00:01:39.399 05:17:39 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 22.11.4 '<' 24.07.0 00:01:39.399 05:17:39 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:01:39.399 05:17:39 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:01:39.399 05:17:39 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:01:39.399 05:17:39 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:01:39.399 05:17:39 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:01:39.399 05:17:39 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:01:39.399 05:17:39 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:01:39.399 05:17:39 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:01:39.399 05:17:39 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:01:39.399 05:17:39 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:01:39.399 05:17:39 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:01:39.399 05:17:39 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:01:39.399 05:17:39 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:01:39.399 05:17:39 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:01:39.399 05:17:39 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22 00:01:39.399 05:17:39 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22 00:01:39.400 05:17:39 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:01:39.400 05:17:39 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22 00:01:39.400 05:17:39 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22 00:01:39.400 05:17:39 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:01:39.400 05:17:39 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:01:39.400 05:17:39 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:01:39.400 05:17:39 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:01:39.400 05:17:39 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:01:39.400 05:17:39 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:01:39.400 05:17:39 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:01:39.400 05:17:39 build_native_dpdk -- scripts/common.sh@368 -- $ return 0 00:01:39.400 05:17:39 build_native_dpdk -- common/autobuild_common.sh@184 -- $ patch -p1 00:01:39.400 patching file lib/pcapng/rte_pcapng.c 00:01:39.400 Hunk #1 succeeded at 110 (offset -18 lines). 00:01:39.400 05:17:39 build_native_dpdk -- common/autobuild_common.sh@186 -- $ ge 22.11.4 24.07.0 00:01:39.400 05:17:39 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 22.11.4 '>=' 24.07.0 00:01:39.400 05:17:39 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:01:39.400 05:17:39 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:01:39.400 05:17:39 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:01:39.400 05:17:39 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:01:39.400 05:17:39 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:01:39.400 05:17:39 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:01:39.400 05:17:39 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=>=' 00:01:39.400 05:17:39 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:01:39.400 05:17:39 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:01:39.400 05:17:39 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:01:39.400 05:17:39 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:01:39.400 05:17:39 build_native_dpdk -- scripts/common.sh@348 -- $ : 1 00:01:39.400 05:17:39 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:01:39.400 05:17:39 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:01:39.400 05:17:39 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22 00:01:39.400 05:17:39 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22 00:01:39.400 05:17:39 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:01:39.400 05:17:39 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22 00:01:39.400 05:17:39 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22 00:01:39.400 05:17:39 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:01:39.400 05:17:39 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:01:39.400 05:17:39 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:01:39.400 05:17:39 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:01:39.400 05:17:39 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:01:39.400 05:17:39 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:01:39.400 05:17:39 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:01:39.400 05:17:39 build_native_dpdk -- scripts/common.sh@368 -- $ return 1 00:01:39.400 05:17:39 build_native_dpdk -- common/autobuild_common.sh@190 -- $ dpdk_kmods=false 00:01:39.400 05:17:39 build_native_dpdk -- common/autobuild_common.sh@191 -- $ uname -s 00:01:39.400 05:17:39 build_native_dpdk -- common/autobuild_common.sh@191 -- $ '[' Linux = FreeBSD ']' 00:01:39.400 05:17:39 build_native_dpdk -- common/autobuild_common.sh@195 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base power/acpi power/amd_pstate power/cppc power/intel_pstate power/intel_uncore power/kvm_vm 00:01:39.400 05:17:39 build_native_dpdk -- common/autobuild_common.sh@195 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm, 00:01:45.976 The Meson build system 00:01:45.976 Version: 1.5.0 00:01:45.976 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:45.976 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp 00:01:45.976 Build type: native build 00:01:45.976 Program cat found: YES (/usr/bin/cat) 00:01:45.976 Project name: DPDK 00:01:45.976 Project version: 22.11.4 00:01:45.976 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:01:45.976 C linker for the host machine: gcc ld.bfd 2.40-14 00:01:45.976 Host machine cpu family: x86_64 00:01:45.976 Host machine cpu: x86_64 00:01:45.976 Message: ## Building in Developer Mode ## 00:01:45.976 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:45.976 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/check-symbols.sh) 00:01:45.976 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh) 00:01:45.976 Program objdump found: YES (/usr/bin/objdump) 00:01:45.976 Program python3 found: YES (/usr/bin/python3) 00:01:45.976 Program cat found: YES (/usr/bin/cat) 00:01:45.976 config/meson.build:83: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
00:01:45.976 Checking for size of "void *" : 8 00:01:45.976 Checking for size of "void *" : 8 (cached) 00:01:45.976 Library m found: YES 00:01:45.976 Library numa found: YES 00:01:45.976 Has header "numaif.h" : YES 00:01:45.976 Library fdt found: NO 00:01:45.976 Library execinfo found: NO 00:01:45.976 Has header "execinfo.h" : YES 00:01:45.976 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:45.976 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:45.976 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:45.976 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:45.976 Run-time dependency openssl found: YES 3.1.1 00:01:45.976 Run-time dependency libpcap found: YES 1.10.4 00:01:45.976 Has header "pcap.h" with dependency libpcap: YES 00:01:45.976 Compiler for C supports arguments -Wcast-qual: YES 00:01:45.976 Compiler for C supports arguments -Wdeprecated: YES 00:01:45.976 Compiler for C supports arguments -Wformat: YES 00:01:45.976 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:45.976 Compiler for C supports arguments -Wformat-security: NO 00:01:45.976 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:45.976 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:45.976 Compiler for C supports arguments -Wnested-externs: YES 00:01:45.976 Compiler for C supports arguments -Wold-style-definition: YES 00:01:45.976 Compiler for C supports arguments -Wpointer-arith: YES 00:01:45.976 Compiler for C supports arguments -Wsign-compare: YES 00:01:45.976 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:45.976 Compiler for C supports arguments -Wundef: YES 00:01:45.976 Compiler for C supports arguments -Wwrite-strings: YES 00:01:45.976 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:45.976 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:45.976 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:45.976 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:45.976 Compiler for C supports arguments -mavx512f: YES 00:01:45.976 Checking if "AVX512 checking" compiles: YES 00:01:45.976 Fetching value of define "__SSE4_2__" : 1 00:01:45.976 Fetching value of define "__AES__" : 1 00:01:45.976 Fetching value of define "__AVX__" : 1 00:01:45.976 Fetching value of define "__AVX2__" : 1 00:01:45.976 Fetching value of define "__AVX512BW__" : 1 00:01:45.976 Fetching value of define "__AVX512CD__" : 1 00:01:45.976 Fetching value of define "__AVX512DQ__" : 1 00:01:45.976 Fetching value of define "__AVX512F__" : 1 00:01:45.976 Fetching value of define "__AVX512VL__" : 1 00:01:45.976 Fetching value of define "__PCLMUL__" : 1 00:01:45.976 Fetching value of define "__RDRND__" : 1 00:01:45.976 Fetching value of define "__RDSEED__" : 1 00:01:45.976 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:45.976 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:45.976 Message: lib/kvargs: Defining dependency "kvargs" 00:01:45.976 Message: lib/telemetry: Defining dependency "telemetry" 00:01:45.976 Checking for function "getentropy" : YES 00:01:45.976 Message: lib/eal: Defining dependency "eal" 00:01:45.976 Message: lib/ring: Defining dependency "ring" 00:01:45.976 Message: lib/rcu: Defining dependency "rcu" 00:01:45.976 Message: lib/mempool: Defining dependency "mempool" 00:01:45.976 Message: lib/mbuf: Defining dependency "mbuf" 00:01:45.976 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:45.976 Fetching 
value of define "__AVX512F__" : 1 (cached) 00:01:45.976 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:45.976 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:45.977 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:45.977 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:01:45.977 Compiler for C supports arguments -mpclmul: YES 00:01:45.977 Compiler for C supports arguments -maes: YES 00:01:45.977 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:45.977 Compiler for C supports arguments -mavx512bw: YES 00:01:45.977 Compiler for C supports arguments -mavx512dq: YES 00:01:45.977 Compiler for C supports arguments -mavx512vl: YES 00:01:45.977 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:45.977 Compiler for C supports arguments -mavx2: YES 00:01:45.977 Compiler for C supports arguments -mavx: YES 00:01:45.977 Message: lib/net: Defining dependency "net" 00:01:45.977 Message: lib/meter: Defining dependency "meter" 00:01:45.977 Message: lib/ethdev: Defining dependency "ethdev" 00:01:45.977 Message: lib/pci: Defining dependency "pci" 00:01:45.977 Message: lib/cmdline: Defining dependency "cmdline" 00:01:45.977 Message: lib/metrics: Defining dependency "metrics" 00:01:45.977 Message: lib/hash: Defining dependency "hash" 00:01:45.977 Message: lib/timer: Defining dependency "timer" 00:01:45.977 Fetching value of define "__AVX2__" : 1 (cached) 00:01:45.977 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:45.977 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:45.977 Fetching value of define "__AVX512CD__" : 1 (cached) 00:01:45.977 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:45.977 Message: lib/acl: Defining dependency "acl" 00:01:45.977 Message: lib/bbdev: Defining dependency "bbdev" 00:01:45.977 Message: lib/bitratestats: Defining dependency "bitratestats" 00:01:45.977 Run-time dependency libelf found: YES 0.191 00:01:45.977 Message: lib/bpf: Defining dependency "bpf" 00:01:45.977 Message: lib/cfgfile: Defining dependency "cfgfile" 00:01:45.977 Message: lib/compressdev: Defining dependency "compressdev" 00:01:45.977 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:45.977 Message: lib/distributor: Defining dependency "distributor" 00:01:45.977 Message: lib/efd: Defining dependency "efd" 00:01:45.977 Message: lib/eventdev: Defining dependency "eventdev" 00:01:45.977 Message: lib/gpudev: Defining dependency "gpudev" 00:01:45.977 Message: lib/gro: Defining dependency "gro" 00:01:45.977 Message: lib/gso: Defining dependency "gso" 00:01:45.977 Message: lib/ip_frag: Defining dependency "ip_frag" 00:01:45.977 Message: lib/jobstats: Defining dependency "jobstats" 00:01:45.977 Message: lib/latencystats: Defining dependency "latencystats" 00:01:45.977 Message: lib/lpm: Defining dependency "lpm" 00:01:45.977 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:45.977 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:45.977 Fetching value of define "__AVX512IFMA__" : (undefined) 00:01:45.977 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:01:45.977 Message: lib/member: Defining dependency "member" 00:01:45.977 Message: lib/pcapng: Defining dependency "pcapng" 00:01:45.977 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:45.977 Message: lib/power: Defining dependency "power" 00:01:45.977 Message: lib/rawdev: Defining dependency "rawdev" 00:01:45.977 Message: lib/regexdev: Defining dependency "regexdev" 00:01:45.977 Message: lib/dmadev: 
Defining dependency "dmadev" 00:01:45.977 Message: lib/rib: Defining dependency "rib" 00:01:45.977 Message: lib/reorder: Defining dependency "reorder" 00:01:45.977 Message: lib/sched: Defining dependency "sched" 00:01:45.977 Message: lib/security: Defining dependency "security" 00:01:45.977 Message: lib/stack: Defining dependency "stack" 00:01:45.977 Has header "linux/userfaultfd.h" : YES 00:01:45.977 Message: lib/vhost: Defining dependency "vhost" 00:01:45.977 Message: lib/ipsec: Defining dependency "ipsec" 00:01:45.977 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:45.977 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:45.977 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:45.977 Message: lib/fib: Defining dependency "fib" 00:01:45.977 Message: lib/port: Defining dependency "port" 00:01:45.977 Message: lib/pdump: Defining dependency "pdump" 00:01:45.977 Message: lib/table: Defining dependency "table" 00:01:45.977 Message: lib/pipeline: Defining dependency "pipeline" 00:01:45.977 Message: lib/graph: Defining dependency "graph" 00:01:45.977 Message: lib/node: Defining dependency "node" 00:01:45.977 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:45.977 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:45.977 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:45.977 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:45.977 Compiler for C supports arguments -Wno-sign-compare: YES 00:01:45.977 Compiler for C supports arguments -Wno-unused-value: YES 00:01:45.977 Compiler for C supports arguments -Wno-format: YES 00:01:45.977 Compiler for C supports arguments -Wno-format-security: YES 00:01:45.977 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:01:46.546 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:01:46.546 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:01:46.546 Compiler for C supports arguments -Wno-unused-parameter: YES 00:01:46.546 Fetching value of define "__AVX2__" : 1 (cached) 00:01:46.546 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:46.547 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:46.547 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:46.547 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:46.547 Compiler for C supports arguments -march=skylake-avx512: YES 00:01:46.547 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:01:46.547 Program doxygen found: YES (/usr/local/bin/doxygen) 00:01:46.547 Configuring doxy-api.conf using configuration 00:01:46.547 Program sphinx-build found: NO 00:01:46.547 Configuring rte_build_config.h using configuration 00:01:46.547 Message: 00:01:46.547 ================= 00:01:46.547 Applications Enabled 00:01:46.547 ================= 00:01:46.547 00:01:46.547 apps: 00:01:46.547 dumpcap, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf, 00:01:46.547 test-eventdev, test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad, 00:01:46.547 test-security-perf, 00:01:46.547 00:01:46.547 Message: 00:01:46.547 ================= 00:01:46.547 Libraries Enabled 00:01:46.547 ================= 00:01:46.547 00:01:46.547 libs: 00:01:46.547 kvargs, telemetry, eal, ring, rcu, mempool, mbuf, net, 00:01:46.547 meter, ethdev, pci, cmdline, metrics, hash, timer, acl, 00:01:46.547 bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, efd, 00:01:46.547 eventdev, 
gpudev, gro, gso, ip_frag, jobstats, latencystats, lpm, 00:01:46.547 member, pcapng, power, rawdev, regexdev, dmadev, rib, reorder, 00:01:46.547 sched, security, stack, vhost, ipsec, fib, port, pdump, 00:01:46.547 table, pipeline, graph, node, 00:01:46.547 00:01:46.547 Message: 00:01:46.547 =============== 00:01:46.547 Drivers Enabled 00:01:46.547 =============== 00:01:46.547 00:01:46.547 common: 00:01:46.547 00:01:46.547 bus: 00:01:46.547 pci, vdev, 00:01:46.547 mempool: 00:01:46.547 ring, 00:01:46.547 dma: 00:01:46.547 00:01:46.547 net: 00:01:46.547 i40e, 00:01:46.547 raw: 00:01:46.547 00:01:46.547 crypto: 00:01:46.547 00:01:46.547 compress: 00:01:46.547 00:01:46.547 regex: 00:01:46.547 00:01:46.547 vdpa: 00:01:46.547 00:01:46.547 event: 00:01:46.547 00:01:46.547 baseband: 00:01:46.547 00:01:46.547 gpu: 00:01:46.547 00:01:46.547 00:01:46.547 Message: 00:01:46.547 ================= 00:01:46.547 Content Skipped 00:01:46.547 ================= 00:01:46.547 00:01:46.547 apps: 00:01:46.547 00:01:46.547 libs: 00:01:46.547 kni: explicitly disabled via build config (deprecated lib) 00:01:46.547 flow_classify: explicitly disabled via build config (deprecated lib) 00:01:46.547 00:01:46.547 drivers: 00:01:46.547 common/cpt: not in enabled drivers build config 00:01:46.547 common/dpaax: not in enabled drivers build config 00:01:46.547 common/iavf: not in enabled drivers build config 00:01:46.547 common/idpf: not in enabled drivers build config 00:01:46.547 common/mvep: not in enabled drivers build config 00:01:46.547 common/octeontx: not in enabled drivers build config 00:01:46.547 bus/auxiliary: not in enabled drivers build config 00:01:46.547 bus/dpaa: not in enabled drivers build config 00:01:46.547 bus/fslmc: not in enabled drivers build config 00:01:46.547 bus/ifpga: not in enabled drivers build config 00:01:46.547 bus/vmbus: not in enabled drivers build config 00:01:46.547 common/cnxk: not in enabled drivers build config 00:01:46.547 common/mlx5: not in enabled drivers build config 00:01:46.547 common/qat: not in enabled drivers build config 00:01:46.547 common/sfc_efx: not in enabled drivers build config 00:01:46.547 mempool/bucket: not in enabled drivers build config 00:01:46.547 mempool/cnxk: not in enabled drivers build config 00:01:46.547 mempool/dpaa: not in enabled drivers build config 00:01:46.547 mempool/dpaa2: not in enabled drivers build config 00:01:46.547 mempool/octeontx: not in enabled drivers build config 00:01:46.547 mempool/stack: not in enabled drivers build config 00:01:46.547 dma/cnxk: not in enabled drivers build config 00:01:46.547 dma/dpaa: not in enabled drivers build config 00:01:46.547 dma/dpaa2: not in enabled drivers build config 00:01:46.547 dma/hisilicon: not in enabled drivers build config 00:01:46.547 dma/idxd: not in enabled drivers build config 00:01:46.547 dma/ioat: not in enabled drivers build config 00:01:46.547 dma/skeleton: not in enabled drivers build config 00:01:46.547 net/af_packet: not in enabled drivers build config 00:01:46.547 net/af_xdp: not in enabled drivers build config 00:01:46.547 net/ark: not in enabled drivers build config 00:01:46.547 net/atlantic: not in enabled drivers build config 00:01:46.547 net/avp: not in enabled drivers build config 00:01:46.547 net/axgbe: not in enabled drivers build config 00:01:46.547 net/bnx2x: not in enabled drivers build config 00:01:46.547 net/bnxt: not in enabled drivers build config 00:01:46.547 net/bonding: not in enabled drivers build config 00:01:46.547 net/cnxk: not in enabled drivers build config 
00:01:46.547 net/cxgbe: not in enabled drivers build config
00:01:46.547 net/dpaa: not in enabled drivers build config
00:01:46.547 net/dpaa2: not in enabled drivers build config
00:01:46.547 net/e1000: not in enabled drivers build config
00:01:46.547 net/ena: not in enabled drivers build config
00:01:46.547 net/enetc: not in enabled drivers build config
00:01:46.547 net/enetfec: not in enabled drivers build config
00:01:46.547 net/enic: not in enabled drivers build config
00:01:46.547 net/failsafe: not in enabled drivers build config
00:01:46.547 net/fm10k: not in enabled drivers build config
00:01:46.547 net/gve: not in enabled drivers build config
00:01:46.547 net/hinic: not in enabled drivers build config
00:01:46.547 net/hns3: not in enabled drivers build config
00:01:46.547 net/iavf: not in enabled drivers build config
00:01:46.547 net/ice: not in enabled drivers build config
00:01:46.547 net/idpf: not in enabled drivers build config
00:01:46.547 net/igc: not in enabled drivers build config
00:01:46.547 net/ionic: not in enabled drivers build config
00:01:46.547 net/ipn3ke: not in enabled drivers build config
00:01:46.547 net/ixgbe: not in enabled drivers build config
00:01:46.547 net/kni: not in enabled drivers build config
00:01:46.547 net/liquidio: not in enabled drivers build config
00:01:46.547 net/mana: not in enabled drivers build config
00:01:46.547 net/memif: not in enabled drivers build config
00:01:46.547 net/mlx4: not in enabled drivers build config
00:01:46.547 net/mlx5: not in enabled drivers build config
00:01:46.547 net/mvneta: not in enabled drivers build config
00:01:46.547 net/mvpp2: not in enabled drivers build config
00:01:46.547 net/netvsc: not in enabled drivers build config
00:01:46.547 net/nfb: not in enabled drivers build config
00:01:46.547 net/nfp: not in enabled drivers build config
00:01:46.547 net/ngbe: not in enabled drivers build config
00:01:46.547 net/null: not in enabled drivers build config
00:01:46.547 net/octeontx: not in enabled drivers build config
00:01:46.547 net/octeon_ep: not in enabled drivers build config
00:01:46.547 net/pcap: not in enabled drivers build config
00:01:46.547 net/pfe: not in enabled drivers build config
00:01:46.547 net/qede: not in enabled drivers build config
00:01:46.547 net/ring: not in enabled drivers build config
00:01:46.547 net/sfc: not in enabled drivers build config
00:01:46.547 net/softnic: not in enabled drivers build config
00:01:46.547 net/tap: not in enabled drivers build config
00:01:46.547 net/thunderx: not in enabled drivers build config
00:01:46.547 net/txgbe: not in enabled drivers build config
00:01:46.547 net/vdev_netvsc: not in enabled drivers build config
00:01:46.547 net/vhost: not in enabled drivers build config
00:01:46.547 net/virtio: not in enabled drivers build config
00:01:46.547 net/vmxnet3: not in enabled drivers build config
00:01:46.547 raw/cnxk_bphy: not in enabled drivers build config
00:01:46.547 raw/cnxk_gpio: not in enabled drivers build config
00:01:46.547 raw/dpaa2_cmdif: not in enabled drivers build config
00:01:46.547 raw/ifpga: not in enabled drivers build config
00:01:46.547 raw/ntb: not in enabled drivers build config
00:01:46.547 raw/skeleton: not in enabled drivers build config
00:01:46.547 crypto/armv8: not in enabled drivers build config
00:01:46.547 crypto/bcmfs: not in enabled drivers build config
00:01:46.547 crypto/caam_jr: not in enabled drivers build config
00:01:46.547 crypto/ccp: not in enabled drivers build config
00:01:46.547 crypto/cnxk: not in enabled drivers build config
00:01:46.547 crypto/dpaa_sec: not in enabled drivers build config
00:01:46.547 crypto/dpaa2_sec: not in enabled drivers build config
00:01:46.547 crypto/ipsec_mb: not in enabled drivers build config
00:01:46.547 crypto/mlx5: not in enabled drivers build config
00:01:46.547 crypto/mvsam: not in enabled drivers build config
00:01:46.547 crypto/nitrox: not in enabled drivers build config
00:01:46.547 crypto/null: not in enabled drivers build config
00:01:46.547 crypto/octeontx: not in enabled drivers build config
00:01:46.547 crypto/openssl: not in enabled drivers build config
00:01:46.547 crypto/scheduler: not in enabled drivers build config
00:01:46.547 crypto/uadk: not in enabled drivers build config
00:01:46.547 crypto/virtio: not in enabled drivers build config
00:01:46.547 compress/isal: not in enabled drivers build config
00:01:46.547 compress/mlx5: not in enabled drivers build config
00:01:46.547 compress/octeontx: not in enabled drivers build config
00:01:46.547 compress/zlib: not in enabled drivers build config
00:01:46.547 regex/mlx5: not in enabled drivers build config
00:01:46.547 regex/cn9k: not in enabled drivers build config
00:01:46.547 vdpa/ifc: not in enabled drivers build config
00:01:46.547 vdpa/mlx5: not in enabled drivers build config
00:01:46.547 vdpa/sfc: not in enabled drivers build config
00:01:46.547 event/cnxk: not in enabled drivers build config
00:01:46.547 event/dlb2: not in enabled drivers build config
00:01:46.547 event/dpaa: not in enabled drivers build config
00:01:46.547 event/dpaa2: not in enabled drivers build config
00:01:46.547 event/dsw: not in enabled drivers build config
00:01:46.547 event/opdl: not in enabled drivers build config
00:01:46.547 event/skeleton: not in enabled drivers build config
00:01:46.547 event/sw: not in enabled drivers build config
00:01:46.547 event/octeontx: not in enabled drivers build config
00:01:46.547 baseband/acc: not in enabled drivers build config
00:01:46.547 baseband/fpga_5gnr_fec: not in enabled drivers build config
00:01:46.547 baseband/fpga_lte_fec: not in enabled drivers build config
00:01:46.547 baseband/la12xx: not in enabled drivers build config
00:01:46.548 baseband/null: not in enabled drivers build config
00:01:46.548 baseband/turbo_sw: not in enabled drivers build config
00:01:46.548 gpu/cuda: not in enabled drivers build config
00:01:46.548
00:01:46.548
00:01:46.548 Build targets in project: 311
00:01:46.548
00:01:46.548 DPDK 22.11.4
00:01:46.548
00:01:46.548 User defined options
00:01:46.548 libdir : lib
00:01:46.548 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:01:46.548 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow
00:01:46.548 c_link_args :
00:01:46.548 enable_docs : false
00:01:46.548 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm,
00:01:46.548 enable_kmods : false
00:01:46.548 machine : native
00:01:46.548 tests : false
00:01:46.548
00:01:46.548 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:46.548 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated.
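[Editor's note: the WARNING above refers to the configure step that produced this summary. A minimal sketch of an equivalent invocation using the non-deprecated `meson setup` spelling, reconstructed from the logged "User defined options" — illustrative only, since the literal command the job ran is not shown in this log; the build directory name is taken from the ninja step that follows:

    # Sketch: re-create the logged DPDK 22.11.4 configuration with `meson setup`.
    # All values come from the "User defined options" summary above.
    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk
    meson setup build-tmp \
      --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build \
      --libdir=lib \
      -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
      -Denable_docs=false \
      -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm \
      -Denable_kmods=false \
      -Dmachine=native \
      -Dtests=false

Because enable_drivers is an allow-list, every driver absent from it is reported as "not in enabled drivers build config" above.]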
00:01:46.548 05:17:46 build_native_dpdk -- common/autobuild_common.sh@199 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j96
00:01:46.548 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp'
00:01:46.813 [1/740] Generating lib/rte_kvargs_def with a custom command
00:01:46.813 [2/740] Generating lib/rte_kvargs_mingw with a custom command
00:01:46.813 [3/740] Generating lib/rte_telemetry_mingw with a custom command
00:01:46.813 [4/740] Generating lib/rte_telemetry_def with a custom command
00:01:46.813 [5/740] Generating lib/rte_eal_def with a custom command
00:01:46.813 [6/740] Generating lib/rte_ring_def with a custom command
00:01:46.813 [7/740] Generating lib/rte_mbuf_def with a custom command
00:01:46.813 [8/740] Generating lib/rte_mempool_def with a custom command
00:01:46.813 [9/740] Generating lib/rte_rcu_def with a custom command
00:01:46.813 [10/740] Generating lib/rte_eal_mingw with a custom command
00:01:46.813 [11/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:01:46.813 [12/740] Generating lib/rte_mempool_mingw with a custom command
00:01:46.813 [13/740] Generating lib/rte_ring_mingw with a custom command
00:01:46.813 [14/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:01:46.813 [15/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:01:46.813 [16/740] Generating lib/rte_rcu_mingw with a custom command
00:01:46.813 [17/740] Generating lib/rte_mbuf_mingw with a custom command
00:01:46.813 [18/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:01:46.813 [19/740] Generating lib/rte_net_def with a custom command
00:01:46.813 [20/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:01:46.813 [21/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:01:46.813 [22/740] Generating lib/rte_net_mingw with a custom command
00:01:46.813 [23/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:01:46.813 [24/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:01:46.813 [25/740] Generating lib/rte_meter_def with a custom command
00:01:46.813 [26/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:01:46.813 [27/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o
00:01:46.813 [28/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:01:46.813 [29/740] Generating lib/rte_meter_mingw with a custom command
00:01:46.813 [30/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:01:46.813 [31/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:01:46.813 [32/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:01:46.813 [33/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:01:46.813 [34/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:01:46.813 [35/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:01:46.813 [36/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:01:47.087 [37/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:01:47.087 [38/740] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:01:47.087 [39/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:01:47.087 [40/740] Generating lib/rte_ethdev_def with a custom command
00:01:47.087 [41/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:01:47.087 [42/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:01:47.087 [43/740] Linking static target lib/librte_kvargs.a
00:01:47.087 [44/740] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:01:47.087 [45/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:01:47.087 [46/740] Generating lib/rte_pci_def with a custom command
00:01:47.087 [47/740] Generating lib/rte_ethdev_mingw with a custom command
00:01:47.087 [48/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:01:47.087 [49/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:01:47.087 [50/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:01:47.087 [51/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:01:47.087 [52/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:01:47.087 [53/740] Generating lib/rte_pci_mingw with a custom command
00:01:47.087 [54/740] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:01:47.087 [55/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:01:47.087 [56/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:01:47.087 [57/740] Generating lib/rte_cmdline_def with a custom command
00:01:47.087 [58/740] Generating lib/rte_metrics_def with a custom command
00:01:47.087 [59/740] Generating lib/rte_cmdline_mingw with a custom command
00:01:47.087 [60/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:01:47.087 [61/740] Generating lib/rte_metrics_mingw with a custom command
00:01:47.087 [62/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:01:47.087 [63/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:01:47.087 [64/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:01:47.087 [65/740] Generating lib/rte_hash_mingw with a custom command
00:01:47.087 [66/740] Generating lib/rte_hash_def with a custom command
00:01:47.087 [67/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:01:47.087 [68/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:01:47.087 [69/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:01:47.087 [70/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:01:47.087 [71/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:01:47.087 [72/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:01:47.087 [73/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:01:47.087 [74/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:01:47.087 [75/740] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:01:47.087 [76/740] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:01:47.087 [77/740] Linking static target lib/librte_meter.a
00:01:47.088 [78/740] Linking static target lib/librte_pci.a
00:01:47.088 [79/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:01:47.088 [80/740] Generating lib/rte_timer_def with a custom command
00:01:47.088 [81/740] Generating lib/rte_timer_mingw with a custom command
00:01:47.088 [82/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:01:47.088 [83/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:01:47.088 [84/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:01:47.088 [85/740] Generating lib/rte_acl_mingw with a custom command
00:01:47.088 [86/740] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:01:47.088 [87/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:01:47.088 [88/740] Generating lib/rte_bbdev_def with a custom command
00:01:47.088 [89/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:01:47.088 [90/740] Generating lib/rte_acl_def with a custom command
00:01:47.088 [91/740] Linking static target lib/librte_ring.a
00:01:47.088 [92/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:01:47.088 [93/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:01:47.088 [94/740] Generating lib/rte_bbdev_mingw with a custom command
00:01:47.088 [95/740] Generating lib/rte_bitratestats_def with a custom command
00:01:47.088 [96/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:01:47.088 [97/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:01:47.088 [98/740] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:01:47.088 [99/740] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:01:47.088 [100/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:01:47.088 [101/740] Generating lib/rte_bitratestats_mingw with a custom command
00:01:47.088 [102/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o
00:01:47.088 [103/740] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:01:47.088 [104/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:01:47.088 [105/740] Generating lib/rte_bpf_def with a custom command
00:01:47.088 [106/740] Generating lib/rte_compressdev_def with a custom command
00:01:47.088 [107/740] Generating lib/rte_cfgfile_mingw with a custom command
00:01:47.088 [108/740] Generating lib/rte_bpf_mingw with a custom command
00:01:47.088 [109/740] Generating lib/rte_cfgfile_def with a custom command
00:01:47.088 [110/740] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:01:47.088 [111/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:01:47.353 [112/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:01:47.353 [113/740] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:01:47.353 [114/740] Generating lib/rte_compressdev_mingw with a custom command
00:01:47.353 [115/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:01:47.353 [116/740] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o
00:01:47.353 [117/740] Generating lib/rte_cryptodev_def with a custom command
00:01:47.353 [118/740] Generating lib/rte_cryptodev_mingw with a custom command
00:01:47.353 [119/740] Generating lib/rte_distributor_def with a custom command
00:01:47.353 [120/740] Generating lib/rte_efd_def with a custom command
00:01:47.353 [121/740] Generating lib/rte_distributor_mingw with a custom command
00:01:47.353 [122/740] Generating lib/rte_efd_mingw with a custom command
00:01:47.353 [123/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:01:47.353 [124/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:01:47.353 [125/740] Generating lib/rte_eventdev_def with a custom command
00:01:47.353 [126/740] Generating lib/rte_eventdev_mingw with a custom command
00:01:47.353 [127/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:01:47.353 [128/740] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:01:47.353 [129/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:01:47.353 [130/740] Generating lib/rte_gpudev_mingw with a custom command
00:01:47.353 [131/740] Generating lib/rte_gpudev_def with a custom command
00:01:47.353 [132/740] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:01:47.353 [133/740] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:01:47.628 [134/740] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:01:47.628 [135/740] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:01:47.628 [136/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:01:47.628 [137/740] Linking target lib/librte_kvargs.so.23.0
00:01:47.628 [138/740] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:01:47.628 [139/740] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:01:47.628 [140/740] Generating lib/rte_gro_def with a custom command
00:01:47.628 [141/740] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:01:47.628 [142/740] Generating lib/rte_gro_mingw with a custom command
00:01:47.628 [143/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:01:47.628 [144/740] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:01:47.628 [145/740] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:01:47.628 [146/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:01:47.629 [147/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:01:47.629 [148/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:01:47.629 [149/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:01:47.629 [150/740] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o
00:01:47.629 [151/740] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:01:47.629 [152/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:01:47.629 [153/740] Generating lib/rte_gso_def with a custom command
00:01:47.629 [154/740] Generating lib/rte_gso_mingw with a custom command
00:01:47.629 [155/740] Linking static target lib/librte_cfgfile.a
00:01:47.629 [156/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:01:47.629 [157/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:01:47.629 [158/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:01:47.629 [159/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:01:47.629 [160/740] Generating lib/rte_ip_frag_def with a custom command
00:01:47.629 [161/740] Generating lib/rte_ip_frag_mingw with a custom command
00:01:47.629 [162/740] Generating lib/rte_jobstats_def with a custom command
00:01:47.629 [163/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:01:47.629 [164/740] Generating lib/rte_jobstats_mingw with a custom command
00:01:47.629 [165/740] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o
00:01:47.629 [166/740] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:01:47.629 [167/740] Linking static target lib/net/libnet_crc_avx512_lib.a
00:01:47.629 [168/740] Generating lib/rte_latencystats_def with a custom command
00:01:47.629 [169/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:01:47.629 [170/740] Generating lib/rte_lpm_mingw with a custom command
00:01:47.629 [171/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:01:47.629 [172/740] Generating lib/rte_latencystats_mingw with a custom command
00:01:47.629 [173/740] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:01:47.629 [174/740] Generating lib/rte_lpm_def with a custom command
00:01:47.629 [175/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o
00:01:47.629 [176/740] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:01:47.629 [177/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:01:47.900 [178/740] Linking static target lib/librte_net.a
00:01:47.900 [179/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:01:47.900 [180/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:01:47.900 [181/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:01:47.900 [182/740] Generating lib/rte_member_mingw with a custom command
00:01:47.900 [183/740] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:01:47.900 [184/740] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o
00:01:47.900 [185/740] Generating lib/rte_member_def with a custom command
00:01:47.900 [186/740] Linking static target lib/librte_cmdline.a
00:01:47.900 [187/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:01:47.900 [188/740] Linking static target lib/librte_timer.a
00:01:47.900 [189/740] Generating lib/rte_pcapng_def with a custom command
00:01:47.900 [190/740] Generating lib/rte_pcapng_mingw with a custom command
00:01:47.900 [191/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:01:47.900 [192/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o
00:01:47.900 [193/740] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o
00:01:47.900 [194/740] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:01:47.900 [195/740] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:01:47.900 [196/740] Linking static target lib/librte_telemetry.a
00:01:47.900 [197/740] Linking static target lib/librte_metrics.a
00:01:47.900 [198/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:01:47.900 [199/740] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o
00:01:47.900 [200/740] Linking static target lib/librte_jobstats.a
00:01:47.900 [201/740] Generating lib/rte_power_def with a custom command
00:01:47.900 [202/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o
00:01:47.900 [203/740] Generating lib/rte_power_mingw with a custom command
00:01:47.900 [204/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:01:47.900 [205/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:01:47.900 [206/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o
00:01:47.900 [207/740] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:01:47.900 [208/740] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:01:47.900 [209/740] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:01:47.900 [210/740] Generating lib/rte_rawdev_def with a custom command
00:01:47.900 [211/740] Generating lib/rte_rawdev_mingw with a custom command
00:01:47.900 [212/740] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:01:47.900 [213/740] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o
00:01:47.900 [214/740] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o
00:01:47.900 [215/740] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o
00:01:47.900 [216/740] Linking static target lib/librte_bitratestats.a
00:01:47.900 [217/740] Generating lib/rte_dmadev_def with a custom command
00:01:47.900 [218/740] Generating lib/rte_regexdev_mingw with a custom command
00:01:47.900 [219/740] Generating lib/rte_regexdev_def with a custom command
00:01:47.900 [220/740] Generating lib/rte_dmadev_mingw with a custom command
00:01:47.900 [221/740] Generating lib/rte_rib_mingw with a custom command
00:01:47.900 [222/740] Generating lib/rte_rib_def with a custom command
00:01:47.900 [223/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o
00:01:47.900 [224/740] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
00:01:47.900 [225/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:01:47.900 [226/740] Generating lib/rte_reorder_def with a custom command
00:01:47.900 [227/740] Generating lib/rte_reorder_mingw with a custom command
00:01:48.167 [228/740] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o
00:01:48.167 [229/740] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:01:48.167 [230/740] Generating lib/rte_sched_def with a custom command
00:01:48.167 [231/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:01:48.167 [232/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o
00:01:48.167 [233/740] Generating lib/rte_security_mingw with a custom command
00:01:48.167 [234/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:01:48.167 [235/740] Generating lib/rte_security_def with a custom command
00:01:48.167 [236/740] Generating lib/rte_sched_mingw with a custom command
00:01:48.167 [237/740] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o
00:01:48.167 [238/740] Generating lib/rte_stack_mingw with a custom command
00:01:48.167 [239/740] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:01:48.167 [240/740] Generating lib/rte_stack_def with a custom command
00:01:48.167 [241/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:01:48.167 [242/740] Linking static target lib/librte_compressdev.a
00:01:48.167 [243/740] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o
00:01:48.167 [244/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o
00:01:48.167 [245/740] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:01:48.167 [246/740] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output)
00:01:48.167 [247/740] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o
00:01:48.167 [248/740] Linking static target lib/librte_mempool.a
00:01:48.167 [249/740] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o
00:01:48.167 [250/740] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o
00:01:48.167 [251/740] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o
00:01:48.167 [252/740] Generating lib/rte_vhost_mingw with a custom command
00:01:48.167 [253/740] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o
00:01:48.167 [254/740] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o
00:01:48.167 [255/740] Generating lib/rte_vhost_def with a custom command
00:01:48.167 [256/740] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:01:48.167 [257/740] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o
00:01:48.167 [258/740] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o
00:01:48.167 [259/740] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:01:48.167 [260/740] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o
00:01:48.167 [261/740] Generating lib/rte_ipsec_mingw with a custom command
00:01:48.167 [262/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:01:48.167 [263/740] Generating lib/rte_ipsec_def with a custom command
00:01:48.167 [264/740] Linking static target lib/librte_stack.a
00:01:48.435 [265/740] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:01:48.435 [266/740] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o
00:01:48.435 [267/740] Generating lib/rte_fib_def with a custom command
00:01:48.435 [268/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o
00:01:48.435 [269/740] Generating lib/rte_fib_mingw with a custom command
00:01:48.435 [270/740] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o
00:01:48.435 [271/740] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output)
00:01:48.435 [272/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o
00:01:48.435 [273/740] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o
00:01:48.435 [274/740] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:01:48.435 [275/740] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o
00:01:48.435 [276/740] Linking static target lib/librte_rcu.a
00:01:48.435 [277/740] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o
00:01:48.435 [278/740] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o
00:01:48.435 [279/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o
00:01:48.435 [280/740] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o
00:01:48.435 [281/740] Linking static target lib/librte_bbdev.a
00:01:48.435 [282/740] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output)
00:01:48.435 [283/740] Linking static target lib/librte_rawdev.a
00:01:48.435 [284/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o
00:01:48.435 [285/740] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output)
00:01:48.435 [286/740] Compiling C object lib/librte_member.a.p/member_rte_member.c.o
00:01:48.435 [287/740] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:01:48.435 [288/740] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o
00:01:48.435 [289/740] Generating lib/rte_port_def with a custom command
00:01:48.435 [290/740] Generating lib/rte_port_mingw with a custom command
00:01:48.435 [291/740] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o
00:01:48.435 [292/740] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o
00:01:48.435 [293/740] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:01:48.435 [294/740] Generating lib/rte_pdump_def with a custom command
00:01:48.436 [295/740] Linking static target lib/librte_dmadev.a
00:01:48.436 [296/740] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output)
00:01:48.436 [297/740] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o
00:01:48.436 [298/740] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o
00:01:48.436 [299/740] Linking target lib/librte_telemetry.so.23.0
00:01:48.436 [300/740] Linking static target lib/librte_gro.a
00:01:48.436 [301/740] Generating lib/rte_pdump_mingw with a custom command
00:01:48.436 [302/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o
00:01:48.436 [303/740] Linking static target lib/librte_latencystats.a
00:01:48.436 [304/740] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:01:48.702 [305/740] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:01:48.702 [306/740] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o
00:01:48.702 [307/740] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o
00:01:48.702 [308/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o
00:01:48.702 [309/740] Linking static target lib/librte_gpudev.a
00:01:48.702 [310/740] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output)
00:01:48.702 [311/740] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o
00:01:48.702 [312/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o
00:01:48.702 [313/740] Linking static target lib/librte_gso.a
00:01:48.702 [314/740] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o
00:01:48.702 [315/740] Linking static target lib/member/libsketch_avx512_tmp.a
00:01:48.702 [316/740] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o
00:01:48.702 [317/740] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o
00:01:48.702 [318/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o
00:01:48.702 [319/740] Linking static target lib/librte_distributor.a
00:01:48.702 [320/740] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o
00:01:48.702 [321/740] Generating lib/rte_table_def with a custom command
00:01:48.702 [322/740] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o
00:01:48.702 [323/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:01:48.702 [324/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o
00:01:48.702 [325/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o
00:01:48.702 [326/740] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o
00:01:48.702 [327/740] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o
00:01:48.702 [328/740] Linking static target lib/librte_ip_frag.a
00:01:48.702 [329/740] Generating lib/rte_table_mingw with a custom command
00:01:48.702 [330/740] Linking static target lib/librte_regexdev.a
00:01:48.969 [331/740] Linking static target lib/librte_eal.a
00:01:48.969 [332/740] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
00:01:48.969 [333/740] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o
00:01:48.969 [334/740] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output)
00:01:48.969 [335/740] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o
00:01:48.969 [336/740] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output)
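[Editor's note: by this point the core static archives have been linked, e.g. lib/librte_eal.a at step [331/740]. If one of these outputs ever needs to be rebuilt or inspected outside the CI run, ninja accepts an output path as a target and can echo full command lines; a small illustrative sketch against the same build tree, not part of this job:

    # Sketch: rebuild a single output verbosely from the job's build directory.
    BUILD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp
    ninja -C "$BUILD" -v lib/librte_eal.a         # -v prints each compile/link command
    ninja -C "$BUILD" -d explain lib/librte_eal.a # -d explain reports why targets are dirty
]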
00:01:48.969 [337/740] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o
00:01:48.969 [338/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o
00:01:48.969 [339/740] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o
00:01:48.969 [340/740] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o
00:01:48.969 [341/740] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o
00:01:48.969 [342/740] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o
00:01:48.969 [343/740] Linking static target lib/librte_pcapng.a
00:01:48.969 [344/740] Generating lib/rte_pipeline_def with a custom command
00:01:48.969 [345/740] Generating lib/rte_pipeline_mingw with a custom command
00:01:48.969 [346/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:01:48.969 [347/740] Generating lib/rte_graph_def with a custom command
00:01:48.969 [348/740] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o
00:01:48.969 [349/740] Linking static target lib/librte_mbuf.a
00:01:48.969 [350/740] Linking static target lib/librte_power.a
00:01:48.969 [351/740] Compiling C object lib/librte_security.a.p/security_rte_security.c.o
00:01:48.969 [352/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o
00:01:48.969 [353/740] Linking static target lib/librte_security.a
00:01:48.969 [354/740] Generating lib/rte_graph_mingw with a custom command
00:01:48.969 [355/740] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:01:48.969 [356/740] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o
00:01:48.969 [357/740] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output)
00:01:48.969 [358/740] Linking static target lib/librte_reorder.a
00:01:49.236 [359/740] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o
00:01:49.236 [360/740] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o
00:01:49.236 [361/740] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output)
00:01:49.236 [362/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o
00:01:49.236 [363/740] Compiling C object lib/librte_node.a.p/node_null.c.o
00:01:49.236 [364/740] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o
00:01:49.236 [365/740] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o
00:01:49.236 [366/740] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:49.236 [367/740] Generating lib/rte_node_def with a custom command
00:01:49.236 [368/740] Generating lib/rte_node_mingw with a custom command
00:01:49.236 [369/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o
00:01:49.236 [370/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o
00:01:49.236 [371/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o
00:01:49.236 [372/740] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output)
00:01:49.236 [373/740] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o
00:01:49.236 [374/740] Linking static target lib/librte_bpf.a
00:01:49.236 [375/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:01:49.236 [376/740] Linking static target lib/librte_lpm.a
00:01:49.504 [377/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o
00:01:49.504 [378/740] Generating drivers/rte_bus_pci_def with a custom command
00:01:49.504 [379/740] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output)
00:01:49.504 [380/740] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o
00:01:49.504 [381/740] Generating drivers/rte_bus_pci_mingw with a custom command
00:01:49.504 [382/740] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o
00:01:49.504 [383/740] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o
00:01:49.504 [384/740] Linking static target lib/librte_rib.a
00:01:49.504 [385/740] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o
00:01:49.504 [386/740] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:49.504 [387/740] Generating drivers/rte_bus_vdev_def with a custom command
00:01:49.504 [388/740] Generating drivers/rte_bus_vdev_mingw with a custom command
00:01:49.504 [389/740] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o
00:01:49.504 [390/740] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:01:49.504 [391/740] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o
00:01:49.504 [392/740] Linking static target lib/librte_efd.a
00:01:49.504 [393/740] Generating drivers/rte_mempool_ring_def with a custom command
00:01:49.504 [394/740] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output)
00:01:49.504 [395/740] Generating drivers/rte_mempool_ring_mingw with a custom command
00:01:49.504 [396/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o
00:01:49.504 [397/740] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:49.504 [398/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o
00:01:49.504 [399/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o
00:01:49.504 [400/740] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o
00:01:49.504 [401/740] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output)
00:01:49.505 [402/740] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o
00:01:49.505 [403/740] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output)
00:01:49.505 [404/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o
00:01:49.505 [405/740] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o
00:01:49.505 [406/740] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o
00:01:49.505 [407/740] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o
00:01:49.505 [408/740] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o
00:01:49.770 [409/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o
00:01:49.770 [410/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o
00:01:49.770 [411/740] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o
00:01:49.770 [412/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o
00:01:49.770 [413/740] Generating drivers/rte_net_i40e_mingw with a custom command
00:01:49.770 [414/740] Generating drivers/rte_net_i40e_def with a custom command
00:01:49.770 [415/740] Compiling C object lib/librte_fib.a.p/fib_trie.c.o
00:01:49.770 [416/740] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:49.770 [417/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o
00:01:49.770 [418/740] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o
00:01:49.770 [419/740] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o
00:01:49.770 [420/740] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o
00:01:49.770 [421/740] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o
00:01:49.770 [422/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o
00:01:49.770 [423/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o
00:01:49.770 [424/740] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o
00:01:49.770 [425/740] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o
00:01:49.770 [426/740] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o
00:01:49.770 [427/740] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o
00:01:49.770 [428/740] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o
00:01:49.770 [429/740] Compiling C object lib/librte_graph.a.p/graph_graph.c.o
00:01:49.770 [430/740] Linking static target drivers/libtmp_rte_bus_vdev.a
00:01:49.770 [431/740] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o
00:01:49.770 [432/740] Compiling C object lib/librte_node.a.p/node_log.c.o
00:01:49.770 [433/740] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output)
00:01:49.770 [434/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o
00:01:50.040 [435/740] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output)
00:01:50.040 [436/740] Compiling C object lib/librte_graph.a.p/graph_node.c.o
00:01:50.040 [437/740] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output)
00:01:50.040 [438/740] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o
00:01:50.040 [439/740] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o
00:01:50.040 [440/740] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output)
00:01:50.040 [441/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o
00:01:50.040 [442/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o
00:01:50.040 [443/740] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o
00:01:50.040 [444/740] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output)
00:01:50.040 [445/740] Linking static target lib/librte_fib.a
00:01:50.040 [446/740] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o
00:01:50.040 [447/740] Linking static target lib/librte_graph.a
00:01:50.040 [448/740] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o
00:01:50.040 [449/740] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o
00:01:50.040 [450/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o
00:01:50.040 [451/740] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:50.040 [452/740] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:50.040 [453/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o
00:01:50.312 [454/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o
00:01:50.312 [455/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o
00:01:50.312 [456/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o
00:01:50.312 [457/740] Linking static target drivers/libtmp_rte_bus_pci.a
00:01:50.312 [458/740] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output)
00:01:50.312 [459/740] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o
00:01:50.312 [460/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o
00:01:50.312 [461/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o
00:01:50.312 [462/740] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o
00:01:50.312 [463/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o
00:01:50.312 [464/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o
00:01:50.312 [465/740] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o
00:01:50.584 [466/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o
00:01:50.584 [467/740] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o
00:01:50.584 [468/740] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o
00:01:50.584 [469/740] Linking static target lib/librte_pdump.a
00:01:50.584 [470/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o
00:01:50.584 [471/740] Generating drivers/rte_bus_vdev.pmd.c with a custom command
00:01:50.584 [472/740] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:01:50.584 [473/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o
00:01:50.584 [474/740] Linking static target drivers/librte_bus_vdev.a
00:01:50.584 [475/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o
00:01:50.584 [476/740] Compiling C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:01:50.584 [477/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o
00:01:50.584 [478/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o
00:01:50.584 [479/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o
00:01:50.584 [480/740] Generating drivers/rte_bus_pci.pmd.c with a custom command
00:01:50.584 [481/740] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output)
00:01:50.584 [482/740] Linking static target lib/librte_table.a
00:01:50.584 [483/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o
00:01:50.584 [484/740] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:01:50.852 [485/740] Compiling C object drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:01:50.852 [486/740] Linking static target drivers/librte_bus_pci.a
00:01:50.852 [487/740] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output)
00:01:50.852 [488/740] Generating symbol file lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols
00:01:50.852 [489/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o
00:01:50.852 [490/740] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols
00:01:50.852 [491/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o
00:01:50.852 [492/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o
00:01:50.852 [493/740] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o
00:01:50.852 [494/740] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o
00:01:50.852 [495/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o
00:01:50.852 [496/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o
00:01:50.852 [497/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o
00:01:50.852 [498/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o
00:01:50.852 [499/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o
00:01:51.123 [500/740] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o
00:01:51.123 [501/740] Linking static target lib/librte_ethdev.a
00:01:51.123 [502/740] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o
00:01:51.123 [503/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o
00:01:51.123 [504/740] Linking static target lib/librte_cryptodev.a
00:01:51.123 [505/740] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output)
00:01:51.123 [506/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o
00:01:51.123 [507/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o
00:01:51.123 [508/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o
00:01:51.123 [509/740] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o
00:01:51.123 [510/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o
00:01:51.123 [511/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o
00:01:51.123 [512/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o
00:01:51.123 [513/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o
00:01:51.123 [514/740] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:51.123 [515/740] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o
00:01:51.386 [516/740] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o
00:01:51.386 [517/740] Linking static target lib/librte_node.a
00:01:51.386 [518/740] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o
00:01:51.386 [519/740] Linking static target lib/librte_sched.a
00:01:51.386 [520/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o
00:01:51.386 [521/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o
00:01:51.386 [522/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o
00:01:51.386 [523/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o
00:01:51.386 [524/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o
00:01:51.386 [525/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o
00:01:51.386 [526/740] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o
00:01:51.386 [527/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o
00:01:51.386 [528/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o
00:01:51.386 [529/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o
00:01:51.386 [530/740] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output)
00:01:51.386 [531/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o
00:01:51.387 [532/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o
00:01:51.387 [533/740] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o
00:01:51.387 [534/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o
00:01:51.387 [535/740] Linking static target drivers/libtmp_rte_mempool_ring.a
00:01:51.387 [536/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o
00:01:51.646 [537/740] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o
00:01:51.646 [538/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o
00:01:51.646 [539/740] Linking static target lib/librte_ipsec.a
00:01:51.646 [540/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o
00:01:51.646 [541/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o
00:01:51.646 [542/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o
00:01:51.646 [543/740] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output)
00:01:51.646 [544/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o
00:01:51.646 [545/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o
00:01:51.646 [546/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o
00:01:51.646 [547/740] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output)
00:01:51.646 [548/740] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output)
00:01:51.646 [549/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o
00:01:51.646 [550/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o
00:01:51.646 [551/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o
00:01:51.646 [552/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o
00:01:51.646 [553/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o
00:01:51.903 [554/740] Compiling C object app/dpdk-pdump.p/pdump_main.c.o
00:01:51.903 [555/740] Generating drivers/rte_mempool_ring.pmd.c with a custom command
00:01:51.903 [556/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o
00:01:51.903 [557/740] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output)
00:01:51.903 [558/740] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:01:51.903 [559/740] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:01:51.903 [560/740] Linking static target drivers/librte_mempool_ring.a
00:01:51.903 [561/740] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o
00:01:51.903 [562/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o
00:01:51.903 [563/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o
00:01:51.903 [564/740] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o
00:01:51.903 [565/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o
00:01:51.903 [566/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o
00:01:51.903 [567/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o
00:01:51.903 [568/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o
00:01:51.903 [569/740] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o
00:01:51.903 [570/740] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o
00:01:51.903 [571/740] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o
00:01:51.903 [572/740] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o
00:01:51.903 [573/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o
00:01:51.903 [574/740] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o
00:01:51.903 [575/740] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output)
00:01:51.903 [576/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o
00:01:51.903 [577/740] Linking static target lib/librte_member.a
00:01:51.903 [578/740] Linking static target lib/librte_eventdev.a
00:01:51.903 [579/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o
00:01:52.161 [580/740] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o
00:01:52.161 [581/740] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o
00:01:52.161 [582/740] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o
00:01:52.161 [583/740] Linking static target lib/librte_port.a
00:01:52.161 [584/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o
00:01:52.161 [585/740] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o
00:01:52.161 [586/740] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o
00:01:52.161 [587/740] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o
00:01:52.161 [588/740] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o
00:01:52.161 [589/740] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o
00:01:52.161 [590/740] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o
00:01:52.161 [591/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o
00:01:52.161 [592/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o
00:01:52.161 [593/740] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o
00:01:52.161 [594/740] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o
00:01:52.419 [595/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_avx2.c.o
00:01:52.419 [596/740] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o
00:01:52.420 [597/740] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o
00:01:52.420 [598/740] Linking static target lib/librte_hash.a
00:01:52.420 [599/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o
00:01:52.420 [600/740] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output)
00:01:52.420 [601/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o
00:01:52.678 [602/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o
00:01:52.678 [603/740] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o
00:01:52.678 [604/740] Linking static target drivers/net/i40e/base/libi40e_base.a
00:01:52.678 [605/740] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o
00:01:52.678 [606/740] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o
00:01:52.678 [607/740] Linking static target drivers/net/i40e/libi40e_avx512_lib.a
00:01:52.678 [608/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx2.c.o
00:01:52.678 [609/740] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o
00:01:52.678 [610/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o
00:01:52.678 [611/740] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output)
00:01:52.937 [612/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o
00:01:52.937 [613/740] Linking static target lib/librte_acl.a
00:01:53.196 [614/740] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o
00:01:53.196 [615/740] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output)
00:01:53.196 [616/740] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output)
00:01:53.454 [617/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o
00:01:53.454 [618/740] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o
00:01:53.713 [619/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o
00:01:53.971 [620/740] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o
00:01:53.971 [621/740] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:54.230 [622/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o
00:01:54.489 [623/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o
00:01:54.489 [624/740] Linking static target drivers/libtmp_rte_net_i40e.a
00:01:54.748 [625/740] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:55.007 [626/740] Generating drivers/rte_net_i40e.pmd.c with a custom command
00:01:55.007 [627/740] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o
00:01:55.007 [628/740] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o
00:01:55.007 [629/740] Linking static target drivers/librte_net_i40e.a
00:01:55.265 [630/740] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o
00:01:55.523 [631/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o
00:01:55.782 [632/740] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output)
00:01:56.042 [633/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o
00:01:58.579 [634/740] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:00.485 [635/740] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output)
00:02:00.485 [636/740] Linking target lib/librte_eal.so.23.0
00:02:00.744 [637/740] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols
00:02:00.744 [638/740] Linking target lib/librte_jobstats.so.23.0
00:02:00.744 [639/740] Linking target lib/librte_meter.so.23.0
00:02:00.744 [640/740] Linking target lib/librte_ring.so.23.0
00:02:00.744 [641/740] Linking target lib/librte_pci.so.23.0
00:02:00.744 [642/740] Linking target lib/librte_timer.so.23.0
00:02:00.744 [643/740] Linking target lib/librte_cfgfile.so.23.0
00:02:00.744 [644/740] Linking target lib/librte_stack.so.23.0
00:02:00.744 [645/740] Linking target drivers/librte_bus_vdev.so.23.0
00:02:00.744 [646/740] Linking target lib/librte_rawdev.so.23.0
00:02:00.744 [647/740] Linking target lib/librte_dmadev.so.23.0
00:02:00.744 [648/740] Linking target lib/librte_graph.so.23.0
[649/740] Linking target lib/librte_acl.so.23.0
00:02:01.002 [650/740] Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols
00:02:01.002 [651/740] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols
00:02:01.002 [652/740] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols
00:02:01.002 [653/740] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols
00:02:01.002 [654/740] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols
00:02:01.003 [655/740] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols
00:02:01.003 [656/740] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols
00:02:01.003 [657/740] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols
00:02:01.003 [658/740] Linking target lib/librte_rcu.so.23.0
00:02:01.003 [659/740] Linking target drivers/librte_bus_pci.so.23.0
00:02:01.003 [660/740] Linking target lib/librte_mempool.so.23.0
00:02:01.003 [661/740] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols
00:02:01.003 [662/740] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols
00:02:01.003 [663/740] Generating symbol file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols
00:02:01.261 [664/740] Linking target drivers/librte_mempool_ring.so.23.0
00:02:01.261 [665/740] Linking target lib/librte_rib.so.23.0
00:02:01.261 [666/740] Linking target lib/librte_mbuf.so.23.0
00:02:01.261 [667/740] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols
00:02:01.261 [668/740] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols
00:02:01.261 [669/740] Linking target lib/librte_bbdev.so.23.0
00:02:01.261 [670/740] Linking target lib/librte_reorder.so.23.0
00:02:01.261 [671/740] Linking target lib/librte_fib.so.23.0
00:02:01.261 [672/740] Linking target lib/librte_compressdev.so.23.0
00:02:01.261 [673/740] Linking target lib/librte_net.so.23.0
00:02:01.261 [674/740] Linking target lib/librte_gpudev.so.23.0
00:02:01.261 [675/740] Linking target lib/librte_regexdev.so.23.0
00:02:01.261 [676/740] Linking target lib/librte_distributor.so.23.0
00:02:01.261 [677/740] Linking target lib/librte_sched.so.23.0
00:02:01.261 [678/740] Linking target lib/librte_cryptodev.so.23.0
00:02:01.521 [679/740] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols
00:02:01.521 [680/740] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols
00:02:01.521 [681/740] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols
00:02:01.521 [682/740] Linking target lib/librte_security.so.23.0
00:02:01.521 [683/740] Linking target lib/librte_cmdline.so.23.0
00:02:01.521 [684/740] Linking target lib/librte_hash.so.23.0
00:02:01.521 [685/740] Linking target lib/librte_ethdev.so.23.0
00:02:01.521 [686/740] Generating symbol file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols
00:02:01.521 [687/740] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols
00:02:01.780 [688/740] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols
00:02:01.780 [689/740] Linking target lib/librte_efd.so.23.0
00:02:01.780 [690/740] Linking target lib/librte_lpm.so.23.0
00:02:01.780 [691/740] Linking target lib/librte_member.so.23.0
00:02:01.780 [692/740] Linking target lib/librte_ipsec.so.23.0
00:02:01.780 [693/740] Linking target lib/librte_ip_frag.so.23.0
00:02:01.780 [694/740] Linking target lib/librte_metrics.so.23.0
00:02:01.780 [695/740] Linking target lib/librte_pcapng.so.23.0
00:02:01.780 [696/740] Linking target lib/librte_gro.so.23.0
00:02:01.780 [697/740] Linking target lib/librte_gso.so.23.0
00:02:01.780 [698/740] Linking target lib/librte_power.so.23.0
00:02:01.780 [699/740] Linking target lib/librte_eventdev.so.23.0
00:02:01.780 [700/740] Linking target lib/librte_bpf.so.23.0
00:02:01.780 [701/740] Linking target drivers/librte_net_i40e.so.23.0
00:02:01.780 [702/740] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols
00:02:01.780 [703/740] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols
00:02:01.780 [704/740] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols
00:02:01.780 [705/740] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols
00:02:01.780 [706/740] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols
00:02:01.780 [707/740] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols
00:02:01.780 [708/740] Linking target lib/librte_node.so.23.0
00:02:01.780 [709/740] Linking target lib/librte_latencystats.so.23.0
00:02:01.780 [710/740] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o
00:02:01.780 [711/740] Linking target lib/librte_bitratestats.so.23.0
00:02:02.039 [712/740] Linking target lib/librte_pdump.so.23.0
00:02:02.039 [713/740] Linking static target lib/librte_vhost.a
00:02:02.039 [714/740] Linking target lib/librte_port.so.23.0
00:02:02.039 [715/740] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols
00:02:02.039 [716/740] Linking target lib/librte_table.so.23.0
00:02:02.298 [717/740] Generating symbol file lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols
00:02:02.558 [718/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o
00:02:02.558 [719/740] Linking static target lib/librte_pipeline.a
00:02:02.816 [720/740] Linking target app/dpdk-dumpcap
00:02:02.816 [721/740] Linking target app/dpdk-pdump
00:02:02.816 [722/740] Linking target app/dpdk-test-regex
00:02:02.816 [723/740] Linking target app/dpdk-test-acl
00:02:02.816 [724/740] Linking target app/dpdk-proc-info
00:02:02.816 [725/740] Linking target app/dpdk-test-cmdline
00:02:02.816 [726/740] Linking target app/dpdk-test-crypto-perf
00:02:02.816 [727/740] Linking target app/dpdk-test-eventdev
00:02:03.076 [728/740] Linking target app/dpdk-test-gpudev
00:02:03.076 [729/740] Linking target app/dpdk-test-fib
00:02:03.076 [730/740] Linking target app/dpdk-test-sad
00:02:03.076 [731/740] Linking target app/dpdk-test-flow-perf
00:02:03.076 [732/740] Linking target app/dpdk-testpmd
00:02:03.076 [733/740] Linking target app/dpdk-test-pipeline
00:02:03.076 [734/740] Linking target app/dpdk-test-compress-perf
00:02:03.076 [735/740] Linking target app/dpdk-test-security-perf
00:02:03.076 [736/740] Linking target app/dpdk-test-bbdev
00:02:03.646 [737/740] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output)
00:02:03.906 [738/740] Linking target lib/librte_vhost.so.23.0
00:02:07.200 [739/740] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output)
00:02:07.200 [740/740] Linking target lib/librte_pipeline.so.23.0
00:02:07.200 05:18:06 build_native_dpdk --
common/autobuild_common.sh@201 -- $ uname -s 00:02:07.200 05:18:06 build_native_dpdk -- common/autobuild_common.sh@201 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:02:07.200 05:18:06 build_native_dpdk -- common/autobuild_common.sh@214 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j96 install 00:02:07.200 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:02:07.200 [0/1] Installing files. 00:02:07.464 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples 00:02:07.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:07.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:07.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:07.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:07.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:07.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:07.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:07.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:07.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:07.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:07.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:07.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:07.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:07.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:07.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:07.464 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:07.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:07.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:07.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:07.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:07.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:07.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:07.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:07.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:07.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:07.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:07.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:07.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:07.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:07.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:07.465 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:07.465 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:07.465 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:07.465 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:07.465 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:07.465 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:07.465 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:07.465 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:07.465 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:07.465 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:07.465 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:07.465 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:07.465 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:07.465 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:07.465 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:07.465 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:07.465 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:07.465 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:07.465 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:07.465 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:07.465 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:07.465 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:07.465 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:07.465 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:07.465 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:07.465 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:07.465 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:07.465 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:07.465 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:07.465 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:07.465 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:07.465 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:07.465 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:07.465 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:07.465 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:07.465 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:07.465 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:07.465 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:07.465 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:07.465 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:07.465 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:07.465 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:07.465 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/flow_classify.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:02:07.465 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/ipv4_rules_file.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:02:07.465 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:02:07.465 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:07.465 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:07.465 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:07.465 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:07.465 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:07.465 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:07.465 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:07.465 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 
00:02:07.465 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/pkt_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common 00:02:07.465 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/sse 00:02:07.465 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec 00:02:07.465 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/neon 00:02:07.465 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:07.465 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:07.465 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:07.465 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:07.465 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:07.465 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:07.465 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:07.465 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:07.465 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:07.465 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:07.465 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:07.465 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:07.465 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:07.466 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:07.466 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:07.466 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:07.466 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:07.466 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:07.466 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:07.466 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:07.466 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:07.466 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:07.466 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:07.466 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:07.466 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:07.466 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:07.466 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:07.466 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:07.466 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:07.466 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:07.466 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:07.466 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:07.466 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:07.466 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:07.466 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:07.466 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:07.466 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:07.466 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:07.466 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:07.466 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:07.466 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:07.466 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:07.466 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:07.466 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:07.466 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:07.466 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:07.466 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:07.466 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:07.466 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:07.466 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:07.466 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:07.466 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:07.466 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:07.466 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:07.466 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:07.466 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:07.466 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:07.466 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:07.466 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:07.466 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:07.466 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:07.466 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:07.466 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:07.466 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd 00:02:07.466 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/node/node.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/node 00:02:07.466 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/node/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/node 00:02:07.466 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:07.466 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:07.466 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:07.466 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:07.466 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:07.466 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:07.466 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:02:07.466 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:07.466 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:07.466 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:07.466 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:07.466 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:07.466 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:07.466 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:07.466 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:07.466 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:07.466 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:07.466 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:07.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:07.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:07.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:07.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:07.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:07.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:07.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:07.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:07.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:07.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:07.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:07.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:07.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:07.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:02:07.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:02:07.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:07.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:07.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 
00:02:07.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:07.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:07.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:07.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:07.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:07.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:07.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:07.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:07.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:07.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:07.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:07.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:07.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:07.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:07.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:07.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:07.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:07.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:07.467 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:07.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:07.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:07.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:07.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:07.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:07.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:07.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:07.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:07.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:07.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:07.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:07.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:07.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:07.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:07.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:07.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:07.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:07.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:07.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:07.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:07.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:07.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:07.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:07.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:07.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:07.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:07.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:07.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:07.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:07.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:07.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:07.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:07.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:07.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:07.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:07.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:07.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:07.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:07.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:07.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:07.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:07.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:07.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:07.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:07.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:07.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:07.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:07.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:07.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:07.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:07.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:07.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:07.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:07.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:07.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:07.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:07.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:07.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:07.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:07.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:07.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:07.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:07.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:07.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:07.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:07.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:07.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:07.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:07.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:07.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:07.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:07.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:07.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:07.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:07.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:07.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:07.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:07.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:07.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:07.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:07.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:07.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:07.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:07.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:07.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:07.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:07.468 
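The install step above stages the DPDK example sources (l3fwd, ipsec-secgw plus its test scripts, l2fwd-keepalive) under build/share/dpdk/examples together with their Makefiles, so any of them can be rebuilt out of tree against this build. A minimal sketch, assuming meson placed libdpdk.pc under build/lib/pkgconfig (that pkg-config path is an assumption, not shown in this log):

  # Sketch: rebuild one installed example out of tree; PKG_CONFIG_PATH is assumed.
  export PKG_CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig
  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive
  make

The example Makefiles resolve DPDK headers and libraries through pkg-config, which is why pointing PKG_CONFIG_PATH at this install is the only setup the rebuild should need.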
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:07.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:07.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:07.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:07.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:07.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:07.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:07.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:07.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:07.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:07.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:07.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:07.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:07.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:07.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:07.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:07.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:07.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:07.469 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/kni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:07.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:07.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:07.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:07.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:07.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:07.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:07.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:07.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/kni.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:07.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:07.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:07.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:07.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/kni.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:07.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:07.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:07.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:07.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:07.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:07.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:07.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:07.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:07.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:07.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:07.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:07.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:07.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool 00:02:07.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:07.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:07.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:07.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:07.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:07.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:07.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:07.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process 00:02:07.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:07.469 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:07.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:07.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:07.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:07.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:07.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:07.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:07.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:02:07.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:07.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:07.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:07.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:07.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:07.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:07.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:07.469 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:07.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:02:07.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:07.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:07.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:07.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:07.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:07.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:07.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:07.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:07.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:07.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:07.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:07.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:07.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:07.470 Installing lib/librte_kvargs.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:07.470 Installing lib/librte_kvargs.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:07.470 Installing lib/librte_telemetry.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:07.470 Installing lib/librte_telemetry.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:07.470 Installing lib/librte_eal.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:07.470 Installing lib/librte_eal.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:07.470 Installing lib/librte_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:07.470 Installing lib/librte_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:07.470 Installing lib/librte_rcu.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:07.470 Installing lib/librte_rcu.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:07.470 Installing lib/librte_mempool.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:07.470 Installing lib/librte_mempool.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:07.470 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:07.470 Installing lib/librte_mbuf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:07.470 Installing lib/librte_net.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:07.470 Installing lib/librte_net.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:07.470 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:07.470 Installing lib/librte_meter.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:07.470 Installing lib/librte_ethdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:07.470 Installing lib/librte_ethdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:07.470 Installing lib/librte_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:07.470 Installing lib/librte_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:07.470 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:07.470 Installing lib/librte_cmdline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:07.470 Installing lib/librte_metrics.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:07.470 Installing lib/librte_metrics.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:07.470 Installing lib/librte_hash.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:07.470 Installing lib/librte_hash.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:07.470 Installing lib/librte_timer.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:07.470 Installing lib/librte_timer.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:07.470 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:07.470 Installing lib/librte_acl.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:07.470 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:07.470 Installing lib/librte_bbdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:07.470 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:07.470 Installing lib/librte_bitratestats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 
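Each library in this run is installed in two forms, a static archive (librte_*.a) and an ABI-versioned shared object (librte_*.so.23.0), both under build/lib. A minimal sketch of linking an application against either form, assuming a libdpdk.pc alongside the libraries and an illustrative app.c (neither appears in this log):

  # Sketch: link a hypothetical app.c against this install via pkg-config.
  export PKG_CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig
  cc app.c -o app $(pkg-config --cflags --libs libdpdk)           # shared: needs the .so.23.0 objects at run time
  cc app.c -o app $(pkg-config --cflags --libs --static libdpdk)  # static: pulls in the .a archives

For the shared variant, setting LD_LIBRARY_PATH to the build/lib directory shown above (or baking in an rpath) makes the versioned objects resolvable at run time.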
00:02:07.470 Installing lib/librte_bpf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:07.470 Installing lib/librte_bpf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:07.470 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:07.470 Installing lib/librte_cfgfile.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:07.470 Installing lib/librte_compressdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:07.470 Installing lib/librte_compressdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:07.470 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:07.470 Installing lib/librte_cryptodev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:07.470 Installing lib/librte_distributor.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:07.470 Installing lib/librte_distributor.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:07.470 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:07.470 Installing lib/librte_efd.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:07.470 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:07.470 Installing lib/librte_eventdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:07.470 Installing lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:07.470 Installing lib/librte_gpudev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:07.470 Installing lib/librte_gro.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:07.470 Installing lib/librte_gro.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:07.470 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:07.470 Installing lib/librte_gso.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:07.470 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:07.470 Installing lib/librte_ip_frag.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:07.470 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:07.470 Installing lib/librte_jobstats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:07.470 Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:07.470 Installing lib/librte_latencystats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:07.470 Installing lib/librte_lpm.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:07.470 Installing lib/librte_lpm.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:07.470 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:07.470 Installing lib/librte_member.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:07.470 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:07.470 Installing lib/librte_pcapng.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:07.470 
Installing lib/librte_power.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:07.470 Installing lib/librte_power.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:07.470 Installing lib/librte_rawdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:07.470 Installing lib/librte_rawdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:07.470 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:07.470 Installing lib/librte_regexdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:07.470 Installing lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:07.470 Installing lib/librte_dmadev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:07.470 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:07.470 Installing lib/librte_rib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:07.470 Installing lib/librte_reorder.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:07.470 Installing lib/librte_reorder.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:07.470 Installing lib/librte_sched.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:07.470 Installing lib/librte_sched.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:07.470 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:07.470 Installing lib/librte_security.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:07.470 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:07.470 Installing lib/librte_stack.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:07.470 Installing lib/librte_vhost.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:07.470 Installing lib/librte_vhost.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:07.470 Installing lib/librte_ipsec.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:07.470 Installing lib/librte_ipsec.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:07.470 Installing lib/librte_fib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:07.470 Installing lib/librte_fib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:07.470 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:07.470 Installing lib/librte_port.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:07.470 Installing lib/librte_pdump.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:07.470 Installing lib/librte_pdump.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:07.470 Installing lib/librte_table.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:07.470 Installing lib/librte_table.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:07.470 Installing lib/librte_pipeline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:07.470 Installing lib/librte_pipeline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:07.470 Installing lib/librte_graph.a to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:07.730 Installing lib/librte_graph.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:07.730 Installing lib/librte_node.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:07.730 Installing lib/librte_node.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:07.730 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:07.730 Installing drivers/librte_bus_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:02:07.730 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:07.730 Installing drivers/librte_bus_vdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:02:07.730 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:07.730 Installing drivers/librte_mempool_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:02:07.730 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:07.730 Installing drivers/librte_net_i40e.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:02:07.730 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:07.730 Installing app/dpdk-pdump to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:07.730 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:07.730 Installing app/dpdk-test-acl to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:07.730 Installing app/dpdk-test-bbdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:07.730 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:07.730 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:07.730 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:07.730 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:07.730 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:07.730 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:07.730 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:07.730 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:07.730 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:07.730 Installing app/dpdk-test-regex to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:07.730 Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:07.730 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:07.730 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.730 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
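Besides the paired .a/.so.23.0 copies in build/lib, the drivers above also land in the ABI-versioned plugin directory build/lib/dpdk/pmds-23.0, from which EAL can load PMDs, and the dpdk-* tools go to build/bin. A short sketch of exercising one of the installed binaries with an explicitly loaded driver; the -l/-n values are placeholders, and only the paths come from this log:

  # Sketch: smoke-test the freshly installed testpmd with an explicit PMD load.
  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
  sudo ./bin/dpdk-testpmd -l 0-1 -n 4 \
      -d ./lib/dpdk/pmds-23.0/librte_net_i40e.so.23.0 -- -i

The -d option hands EAL a specific shared driver to load, which is useful when the binaries are run straight out of a build tree like this one rather than from a system-wide install.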
00:02:07.730 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.730 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:07.730 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:07.730 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:07.731 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:07.731 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:07.731 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:07.731 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:07.731 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:07.731 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:07.731 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:07.731 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:07.731 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:07.731 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.731 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.731 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.731 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.731 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.731 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.731 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.731 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.731 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.731 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.731 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.731 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.731 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.731 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.731 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.731 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.731 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.731 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.731 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.731 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.731 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.731 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.731 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.731 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.731 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.731 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.731 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_dev.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.731 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.731 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.731 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.731 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.731 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.731 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.731 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.731 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.731 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.731 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.731 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.731 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.731 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.731 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_log.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.731 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.731 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.731 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.731 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.731 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.731 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.731 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.731 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.731 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.731 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.731 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.731 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.731 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.731 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.731 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.731 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.731 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.731 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.731 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.731 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.731 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.731 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.731 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.731 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.731 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.731 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.731 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.731 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.731 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.731 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.731 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_cman.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.733 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gso/rte_gso.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.733 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_empty_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_intel_uncore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.733 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.733 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.734 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.734 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.734 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.734 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.734 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.734 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.734 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.734 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.734 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.734 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.734 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.734 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.734 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.734 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.734 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.734 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.734 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.734 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.734 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.734 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.734 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.734 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.734 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.734 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.734 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.734 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.734 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.734 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.734 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.734 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.734 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.734 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.734 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.734 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.734 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.734 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.734 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.734 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.734 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:02:07.734 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.734 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:07.734 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:07.734 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:07.734 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:07.734 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.734 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:02:07.734 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:02:07.734 Installing symlink pointing to librte_kvargs.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so.23 00:02:07.734 Installing symlink pointing to librte_kvargs.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so 00:02:07.734 Installing symlink pointing to librte_telemetry.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so.23 00:02:07.734 Installing symlink pointing to librte_telemetry.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so 00:02:07.734 Installing symlink pointing to librte_eal.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so.23 00:02:07.734 Installing symlink pointing to librte_eal.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so 00:02:07.734 Installing symlink pointing to librte_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so.23 00:02:07.734 Installing symlink pointing to librte_ring.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so 00:02:07.734 Installing symlink pointing to librte_rcu.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so.23 00:02:07.734 Installing symlink pointing to librte_rcu.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so 00:02:07.734 Installing symlink pointing to librte_mempool.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so.23 00:02:07.734 Installing symlink pointing to librte_mempool.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so 00:02:07.734 Installing symlink pointing to librte_mbuf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so.23 00:02:07.734 Installing symlink pointing to librte_mbuf.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:02:07.734 Installing symlink pointing to librte_net.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so.23 00:02:07.734 
Installing symlink pointing to librte_net.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so 00:02:07.734 Installing symlink pointing to librte_meter.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so.23 00:02:07.734 Installing symlink pointing to librte_meter.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so 00:02:07.734 Installing symlink pointing to librte_ethdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so.23 00:02:07.734 Installing symlink pointing to librte_ethdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:02:07.734 Installing symlink pointing to librte_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so.23 00:02:07.734 Installing symlink pointing to librte_pci.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so 00:02:07.734 Installing symlink pointing to librte_cmdline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so.23 00:02:07.734 Installing symlink pointing to librte_cmdline.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:02:07.734 Installing symlink pointing to librte_metrics.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so.23 00:02:07.734 Installing symlink pointing to librte_metrics.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so 00:02:07.734 Installing symlink pointing to librte_hash.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so.23 00:02:07.734 Installing symlink pointing to librte_hash.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so 00:02:07.734 Installing symlink pointing to librte_timer.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so.23 00:02:07.734 Installing symlink pointing to librte_timer.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so 00:02:07.734 Installing symlink pointing to librte_acl.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so.23 00:02:07.734 Installing symlink pointing to librte_acl.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so 00:02:07.734 Installing symlink pointing to librte_bbdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so.23 00:02:07.734 Installing symlink pointing to librte_bbdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:02:07.734 Installing symlink pointing to librte_bitratestats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so.23 00:02:07.734 Installing symlink pointing to librte_bitratestats.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:02:07.734 Installing symlink pointing to librte_bpf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so.23 00:02:07.734 Installing symlink pointing to librte_bpf.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so 00:02:07.734 Installing symlink pointing to librte_cfgfile.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so.23 00:02:07.734 Installing symlink pointing to librte_cfgfile.so.23 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so 00:02:07.734 Installing symlink pointing to librte_compressdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so.23 00:02:07.734 Installing symlink pointing to librte_compressdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:02:07.735 Installing symlink pointing to librte_cryptodev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so.23 00:02:07.735 Installing symlink pointing to librte_cryptodev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:02:07.735 Installing symlink pointing to librte_distributor.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so.23 00:02:07.735 Installing symlink pointing to librte_distributor.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so 00:02:07.735 Installing symlink pointing to librte_efd.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so.23 00:02:07.735 Installing symlink pointing to librte_efd.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so 00:02:07.735 Installing symlink pointing to librte_eventdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so.23 00:02:07.735 Installing symlink pointing to librte_eventdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:02:07.735 Installing symlink pointing to librte_gpudev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so.23 00:02:07.735 Installing symlink pointing to librte_gpudev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:02:07.735 Installing symlink pointing to librte_gro.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so.23 00:02:07.735 Installing symlink pointing to librte_gro.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so 00:02:07.735 Installing symlink pointing to librte_gso.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so.23 00:02:07.735 Installing symlink pointing to librte_gso.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so 00:02:07.735 Installing symlink pointing to librte_ip_frag.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so.23 00:02:07.735 Installing symlink pointing to librte_ip_frag.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:02:07.735 Installing symlink pointing to librte_jobstats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so.23 00:02:07.735 Installing symlink pointing to librte_jobstats.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:02:07.735 Installing symlink pointing to librte_latencystats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so.23 00:02:07.735 Installing symlink pointing to librte_latencystats.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:02:07.735 Installing symlink pointing to librte_lpm.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so.23 00:02:07.735 Installing symlink pointing to 
librte_lpm.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so 00:02:07.735 Installing symlink pointing to librte_member.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so.23 00:02:07.735 Installing symlink pointing to librte_member.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so 00:02:07.735 Installing symlink pointing to librte_pcapng.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so.23 00:02:07.735 Installing symlink pointing to librte_pcapng.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:02:07.735 Installing symlink pointing to librte_power.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so.23 00:02:07.735 Installing symlink pointing to librte_power.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so 00:02:07.735 Installing symlink pointing to librte_rawdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so.23 00:02:07.735 Installing symlink pointing to librte_rawdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:02:07.735 Installing symlink pointing to librte_regexdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so.23 00:02:07.735 Installing symlink pointing to librte_regexdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:02:07.735 Installing symlink pointing to librte_dmadev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so.23 00:02:07.735 Installing symlink pointing to librte_dmadev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:02:07.735 Installing symlink pointing to librte_rib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so.23 00:02:07.735 Installing symlink pointing to librte_rib.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so 00:02:07.735 Installing symlink pointing to librte_reorder.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so.23 00:02:07.735 Installing symlink pointing to librte_reorder.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so 00:02:07.735 Installing symlink pointing to librte_sched.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so.23 00:02:07.735 Installing symlink pointing to librte_sched.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so 00:02:07.735 Installing symlink pointing to librte_security.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so.23 00:02:07.735 Installing symlink pointing to librte_security.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so 00:02:07.735 Installing symlink pointing to librte_stack.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so.23 00:02:07.735 Installing symlink pointing to librte_stack.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so 00:02:07.735 Installing symlink pointing to librte_vhost.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so.23 00:02:07.735 Installing symlink pointing to librte_vhost.so.23 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so 00:02:07.735 Installing symlink pointing to librte_ipsec.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so.23 00:02:07.735 Installing symlink pointing to librte_ipsec.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:02:07.735 Installing symlink pointing to librte_fib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so.23 00:02:07.735 Installing symlink pointing to librte_fib.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so 00:02:07.735 Installing symlink pointing to librte_port.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so.23 00:02:07.735 Installing symlink pointing to librte_port.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so 00:02:07.735 Installing symlink pointing to librte_pdump.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so.23 00:02:07.735 Installing symlink pointing to librte_pdump.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so 00:02:07.735 Installing symlink pointing to librte_table.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so.23 00:02:07.735 Installing symlink pointing to librte_table.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so 00:02:07.735 Installing symlink pointing to librte_pipeline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so.23 00:02:07.735 Installing symlink pointing to librte_pipeline.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:02:07.735 Installing symlink pointing to librte_graph.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so.23 00:02:07.735 Installing symlink pointing to librte_graph.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so 00:02:07.735 Installing symlink pointing to librte_node.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so.23 00:02:07.735 Installing symlink pointing to librte_node.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so 00:02:07.735 Installing symlink pointing to librte_bus_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23 00:02:07.735 Installing symlink pointing to librte_bus_pci.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:02:07.735 Installing symlink pointing to librte_bus_vdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23 00:02:07.735 Installing symlink pointing to librte_bus_vdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:02:07.735 Installing symlink pointing to librte_mempool_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23 00:02:07.735 Installing symlink pointing to librte_mempool_ring.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:02:07.735 Installing symlink pointing to librte_net_i40e.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23 00:02:07.735 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so' 00:02:07.735 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23' 00:02:07.735 './librte_bus_pci.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23.0' 00:02:07.735 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so' 00:02:07.735 './librte_bus_vdev.so.23' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23' 00:02:07.735 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0' 00:02:07.735 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so' 00:02:07.735 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23' 00:02:07.735 './librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0' 00:02:07.735 './librte_net_i40e.so' -> 'dpdk/pmds-23.0/librte_net_i40e.so' 00:02:07.735 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23' 00:02:07.736 './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0' 00:02:07.736 Installing symlink pointing to librte_net_i40e.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:02:07.736 Running custom install script '/bin/sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0' 00:02:07.736 05:18:07 build_native_dpdk -- common/autobuild_common.sh@220 -- $ cat 00:02:07.736 05:18:07 build_native_dpdk -- common/autobuild_common.sh@225 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:07.736 00:02:07.736 real 0m28.440s 00:02:07.736 user 7m45.999s 00:02:07.736 sys 1m56.132s 00:02:07.736 05:18:07 build_native_dpdk -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:07.736 05:18:07 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:02:07.736 ************************************ 00:02:07.736 END TEST build_native_dpdk 00:02:07.736 ************************************ 00:02:07.736 05:18:07 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:07.736 05:18:07 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:07.736 05:18:07 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:07.736 05:18:07 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:07.736 05:18:07 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:07.736 05:18:07 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:07.736 05:18:07 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:07.736 05:18:07 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --with-shared 00:02:07.994 Using /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig for additional libs... 00:02:07.994 DPDK libraries: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:07.994 DPDK includes: //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.254 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:02:08.512 Using 'verbs' RDMA provider 00:02:21.662 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 
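The configure step above builds SPDK against the DPDK tree that the preceding install lines staged, discovering it through the pkg-config files placed in dpdk/build/lib/pkgconfig. A minimal sketch of the equivalent standalone flow, assuming the same workspace layout as the log; only a subset of the logged configure flags is repeated, and the meson invocation for DPDK itself is an assumption, not shown in the log:

  # Stage DPDK into its local prefix; this produces the headers,
  # libraries, and .pc files listed in the install lines above
  WS=/var/jenkins/workspace/nvmf-tcp-phy-autotest    # workspace root, per the log
  meson setup "$WS/dpdk/build-tmp" "$WS/dpdk" --prefix="$WS/dpdk/build"
  ninja -C "$WS/dpdk/build-tmp" install

  # Point SPDK's configure at that staged DPDK build
  # (flags taken verbatim from the logged command line)
  cd "$WS/spdk"
  ./configure --enable-debug --enable-werror --with-shared \
              --with-dpdk="$WS/dpdk/build"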
00:02:33.878 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:02:34.138 Creating mk/config.mk...done. 00:02:34.138 Creating mk/cc.flags.mk...done. 00:02:34.138 Type 'make' to build. 00:02:34.138 05:18:33 -- spdk/autobuild.sh@70 -- $ run_test make make -j96 00:02:34.138 05:18:33 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:34.138 05:18:33 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:34.138 05:18:33 -- common/autotest_common.sh@10 -- $ set +x 00:02:34.138 ************************************ 00:02:34.138 START TEST make 00:02:34.138 ************************************ 00:02:34.138 05:18:33 make -- common/autotest_common.sh@1129 -- $ make -j96 00:02:36.053 The Meson build system 00:02:36.053 Version: 1.5.0 00:02:36.053 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:02:36.053 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:36.053 Build type: native build 00:02:36.053 Project name: libvfio-user 00:02:36.053 Project version: 0.0.1 00:02:36.053 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:36.053 C linker for the host machine: gcc ld.bfd 2.40-14 00:02:36.053 Host machine cpu family: x86_64 00:02:36.053 Host machine cpu: x86_64 00:02:36.053 Run-time dependency threads found: YES 00:02:36.053 Library dl found: YES 00:02:36.053 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:36.053 Run-time dependency json-c found: YES 0.17 00:02:36.053 Run-time dependency cmocka found: YES 1.1.7 00:02:36.053 Program pytest-3 found: NO 00:02:36.053 Program flake8 found: NO 00:02:36.053 Program misspell-fixer found: NO 00:02:36.053 Program restructuredtext-lint found: NO 00:02:36.053 Program valgrind found: YES (/usr/bin/valgrind) 00:02:36.053 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:36.053 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:36.053 Compiler for C supports arguments -Wwrite-strings: YES 00:02:36.053 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:02:36.053 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:02:36.053 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:02:36.053 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:02:36.053 Build targets in project: 8 00:02:36.053 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:02:36.053 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:02:36.053 00:02:36.053 libvfio-user 0.0.1 00:02:36.053 00:02:36.053 User defined options 00:02:36.053 buildtype : debug 00:02:36.053 default_library: shared 00:02:36.053 libdir : /usr/local/lib 00:02:36.053 00:02:36.053 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:36.620 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:36.879 [1/37] Compiling C object samples/lspci.p/lspci.c.o 00:02:36.879 [2/37] Compiling C object samples/null.p/null.c.o 00:02:36.879 [3/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:02:36.879 [4/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:02:36.879 [5/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:02:36.879 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:02:36.879 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:02:36.879 [8/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:02:36.879 [9/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:02:36.879 [10/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:02:36.879 [11/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:02:36.879 [12/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:02:36.879 [13/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:02:36.879 [14/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:02:36.879 [15/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:02:36.879 [16/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:02:36.879 [17/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:02:36.879 [18/37] Compiling C object samples/server.p/server.c.o 00:02:36.879 [19/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:02:36.879 [20/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:02:36.879 [21/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:02:36.879 [22/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:02:36.879 [23/37] Compiling C object test/unit_tests.p/mocks.c.o 00:02:36.879 [24/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:02:36.879 [25/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:02:36.879 [26/37] Compiling C object samples/client.p/client.c.o 00:02:36.879 [27/37] Linking target samples/client 00:02:36.879 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:02:37.137 [29/37] Linking target lib/libvfio-user.so.0.0.1 00:02:37.137 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:02:37.137 [31/37] Linking target test/unit_tests 00:02:37.137 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:02:37.137 [33/37] Linking target samples/server 00:02:37.137 [34/37] Linking target samples/gpio-pci-idio-16 00:02:37.137 [35/37] Linking target samples/lspci 00:02:37.137 [36/37] Linking target samples/null 00:02:37.137 [37/37] Linking target samples/shadow_ioeventfd_server 00:02:37.137 INFO: autodetecting backend as ninja 00:02:37.137 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
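The meson summary above shows libvfio-user configured as a debug build of shared libraries with libdir /usr/local/lib. A minimal sketch of a matching setup/build/install sequence; the exact setup command line is not in the log, so the option flags are standard meson spellings of the logged user-defined options, while the source dir, build dir, and DESTDIR redirection match the install step logged just below:

  SRC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
  BLD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
  # Reproduce the logged user-defined options: buildtype, default_library, libdir
  meson setup "$BLD" "$SRC" --buildtype=debug --default-library=shared --libdir=/usr/local/lib
  ninja -C "$BLD"
  # DESTDIR reroutes the install under spdk/build/libvfio-user, as in the logged step
  DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user \
    meson install --quiet -C "$BLD"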
00:02:37.137 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:37.704 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:37.704 ninja: no work to do. 00:03:04.348 CC lib/log/log.o 00:03:04.348 CC lib/log/log_flags.o 00:03:04.348 CC lib/log/log_deprecated.o 00:03:04.348 CC lib/ut/ut.o 00:03:04.348 CC lib/ut_mock/mock.o 00:03:04.607 LIB libspdk_ut.a 00:03:04.607 LIB libspdk_log.a 00:03:04.607 LIB libspdk_ut_mock.a 00:03:04.607 SO libspdk_ut.so.2.0 00:03:04.607 SO libspdk_log.so.7.1 00:03:04.607 SO libspdk_ut_mock.so.6.0 00:03:04.607 SYMLINK libspdk_ut.so 00:03:04.607 SYMLINK libspdk_log.so 00:03:04.607 SYMLINK libspdk_ut_mock.so 00:03:04.865 CC lib/dma/dma.o 00:03:04.865 CC lib/ioat/ioat.o 00:03:04.865 CXX lib/trace_parser/trace.o 00:03:04.865 CC lib/util/base64.o 00:03:04.865 CC lib/util/bit_array.o 00:03:04.865 CC lib/util/cpuset.o 00:03:04.865 CC lib/util/crc16.o 00:03:04.865 CC lib/util/crc32.o 00:03:04.865 CC lib/util/crc32c.o 00:03:04.865 CC lib/util/crc32_ieee.o 00:03:04.865 CC lib/util/crc64.o 00:03:04.865 CC lib/util/dif.o 00:03:04.865 CC lib/util/fd.o 00:03:04.865 CC lib/util/fd_group.o 00:03:04.865 CC lib/util/file.o 00:03:04.865 CC lib/util/hexlify.o 00:03:04.865 CC lib/util/iov.o 00:03:04.865 CC lib/util/math.o 00:03:04.865 CC lib/util/net.o 00:03:04.865 CC lib/util/pipe.o 00:03:04.865 CC lib/util/strerror_tls.o 00:03:04.865 CC lib/util/string.o 00:03:04.865 CC lib/util/uuid.o 00:03:04.865 CC lib/util/xor.o 00:03:04.865 CC lib/util/zipf.o 00:03:04.865 CC lib/util/md5.o 00:03:05.123 CC lib/vfio_user/host/vfio_user_pci.o 00:03:05.123 CC lib/vfio_user/host/vfio_user.o 00:03:05.123 LIB libspdk_dma.a 00:03:05.123 SO libspdk_dma.so.5.0 00:03:05.123 LIB libspdk_ioat.a 00:03:05.123 SO libspdk_ioat.so.7.0 00:03:05.381 SYMLINK libspdk_dma.so 00:03:05.381 SYMLINK libspdk_ioat.so 00:03:05.381 LIB libspdk_vfio_user.a 00:03:05.381 SO libspdk_vfio_user.so.5.0 00:03:05.381 SYMLINK libspdk_vfio_user.so 00:03:05.381 LIB libspdk_util.a 00:03:05.381 SO libspdk_util.so.10.1 00:03:05.640 SYMLINK libspdk_util.so 00:03:05.898 CC lib/json/json_parse.o 00:03:05.898 CC lib/json/json_util.o 00:03:05.898 CC lib/json/json_write.o 00:03:05.898 CC lib/vmd/vmd.o 00:03:05.898 CC lib/rdma_utils/rdma_utils.o 00:03:05.898 CC lib/vmd/led.o 00:03:05.898 CC lib/conf/conf.o 00:03:05.898 CC lib/idxd/idxd.o 00:03:05.898 CC lib/env_dpdk/env.o 00:03:05.898 CC lib/idxd/idxd_user.o 00:03:05.898 CC lib/env_dpdk/memory.o 00:03:05.898 CC lib/idxd/idxd_kernel.o 00:03:05.898 CC lib/env_dpdk/pci.o 00:03:05.898 CC lib/env_dpdk/init.o 00:03:05.898 CC lib/env_dpdk/threads.o 00:03:05.898 CC lib/env_dpdk/pci_ioat.o 00:03:05.898 CC lib/env_dpdk/pci_virtio.o 00:03:05.898 CC lib/env_dpdk/pci_vmd.o 00:03:05.898 CC lib/env_dpdk/pci_idxd.o 00:03:05.898 CC lib/env_dpdk/pci_event.o 00:03:05.898 CC lib/env_dpdk/sigbus_handler.o 00:03:05.898 CC lib/env_dpdk/pci_dpdk.o 00:03:05.898 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:05.898 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:06.158 LIB libspdk_conf.a 00:03:06.158 SO libspdk_conf.so.6.0 00:03:06.158 LIB libspdk_json.a 00:03:06.158 LIB libspdk_rdma_utils.a 00:03:06.158 SO libspdk_rdma_utils.so.1.0 00:03:06.417 SO libspdk_json.so.6.0 00:03:06.417 SYMLINK libspdk_conf.so 00:03:06.417 SYMLINK libspdk_json.so 00:03:06.417 SYMLINK libspdk_rdma_utils.so 00:03:06.417 LIB libspdk_idxd.a 00:03:06.417 LIB libspdk_vmd.a 
00:03:06.417 SO libspdk_idxd.so.12.1 00:03:06.417 SO libspdk_vmd.so.6.0 00:03:06.676 SYMLINK libspdk_idxd.so 00:03:06.676 LIB libspdk_trace_parser.a 00:03:06.676 SYMLINK libspdk_vmd.so 00:03:06.676 SO libspdk_trace_parser.so.6.0 00:03:06.676 CC lib/jsonrpc/jsonrpc_server.o 00:03:06.676 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:06.676 CC lib/jsonrpc/jsonrpc_client.o 00:03:06.676 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:06.676 SYMLINK libspdk_trace_parser.so 00:03:06.676 CC lib/rdma_provider/common.o 00:03:06.676 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:06.936 LIB libspdk_rdma_provider.a 00:03:06.936 LIB libspdk_jsonrpc.a 00:03:06.936 SO libspdk_rdma_provider.so.7.0 00:03:06.936 SO libspdk_jsonrpc.so.6.0 00:03:06.936 SYMLINK libspdk_rdma_provider.so 00:03:06.936 SYMLINK libspdk_jsonrpc.so 00:03:06.936 LIB libspdk_env_dpdk.a 00:03:06.936 SO libspdk_env_dpdk.so.15.1 00:03:07.196 SYMLINK libspdk_env_dpdk.so 00:03:07.455 CC lib/rpc/rpc.o 00:03:07.455 LIB libspdk_rpc.a 00:03:07.455 SO libspdk_rpc.so.6.0 00:03:07.715 SYMLINK libspdk_rpc.so 00:03:07.974 CC lib/notify/notify.o 00:03:07.974 CC lib/notify/notify_rpc.o 00:03:07.974 CC lib/trace/trace.o 00:03:07.974 CC lib/trace/trace_flags.o 00:03:07.974 CC lib/trace/trace_rpc.o 00:03:07.974 CC lib/keyring/keyring.o 00:03:07.974 CC lib/keyring/keyring_rpc.o 00:03:08.234 LIB libspdk_notify.a 00:03:08.234 SO libspdk_notify.so.6.0 00:03:08.234 LIB libspdk_trace.a 00:03:08.234 LIB libspdk_keyring.a 00:03:08.234 SYMLINK libspdk_notify.so 00:03:08.234 SO libspdk_trace.so.11.0 00:03:08.234 SO libspdk_keyring.so.2.0 00:03:08.234 SYMLINK libspdk_trace.so 00:03:08.234 SYMLINK libspdk_keyring.so 00:03:08.802 CC lib/thread/thread.o 00:03:08.802 CC lib/thread/iobuf.o 00:03:08.802 CC lib/sock/sock.o 00:03:08.802 CC lib/sock/sock_rpc.o 00:03:09.060 LIB libspdk_sock.a 00:03:09.060 SO libspdk_sock.so.10.0 00:03:09.060 SYMLINK libspdk_sock.so 00:03:09.319 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:09.319 CC lib/nvme/nvme_ctrlr.o 00:03:09.319 CC lib/nvme/nvme_fabric.o 00:03:09.319 CC lib/nvme/nvme_ns_cmd.o 00:03:09.319 CC lib/nvme/nvme_ns.o 00:03:09.319 CC lib/nvme/nvme_pcie_common.o 00:03:09.319 CC lib/nvme/nvme_pcie.o 00:03:09.319 CC lib/nvme/nvme_qpair.o 00:03:09.319 CC lib/nvme/nvme.o 00:03:09.319 CC lib/nvme/nvme_quirks.o 00:03:09.319 CC lib/nvme/nvme_transport.o 00:03:09.319 CC lib/nvme/nvme_discovery.o 00:03:09.319 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:09.319 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:09.319 CC lib/nvme/nvme_tcp.o 00:03:09.319 CC lib/nvme/nvme_opal.o 00:03:09.319 CC lib/nvme/nvme_io_msg.o 00:03:09.319 CC lib/nvme/nvme_poll_group.o 00:03:09.319 CC lib/nvme/nvme_zns.o 00:03:09.319 CC lib/nvme/nvme_stubs.o 00:03:09.319 CC lib/nvme/nvme_auth.o 00:03:09.319 CC lib/nvme/nvme_cuse.o 00:03:09.319 CC lib/nvme/nvme_vfio_user.o 00:03:09.319 CC lib/nvme/nvme_rdma.o 00:03:09.885 LIB libspdk_thread.a 00:03:09.885 SO libspdk_thread.so.11.0 00:03:09.885 SYMLINK libspdk_thread.so 00:03:10.143 CC lib/fsdev/fsdev.o 00:03:10.143 CC lib/fsdev/fsdev_io.o 00:03:10.143 CC lib/fsdev/fsdev_rpc.o 00:03:10.143 CC lib/virtio/virtio.o 00:03:10.143 CC lib/virtio/virtio_vhost_user.o 00:03:10.143 CC lib/virtio/virtio_vfio_user.o 00:03:10.143 CC lib/virtio/virtio_pci.o 00:03:10.143 CC lib/init/subsystem_rpc.o 00:03:10.143 CC lib/init/json_config.o 00:03:10.143 CC lib/init/subsystem.o 00:03:10.143 CC lib/blob/blobstore.o 00:03:10.143 CC lib/init/rpc.o 00:03:10.143 CC lib/blob/request.o 00:03:10.143 CC lib/blob/blob_bs_dev.o 00:03:10.143 CC lib/blob/zeroes.o 00:03:10.143 CC 
lib/accel/accel.o 00:03:10.143 CC lib/vfu_tgt/tgt_endpoint.o 00:03:10.143 CC lib/accel/accel_rpc.o 00:03:10.143 CC lib/accel/accel_sw.o 00:03:10.143 CC lib/vfu_tgt/tgt_rpc.o 00:03:10.401 LIB libspdk_init.a 00:03:10.401 SO libspdk_init.so.6.0 00:03:10.401 LIB libspdk_virtio.a 00:03:10.401 LIB libspdk_vfu_tgt.a 00:03:10.401 SO libspdk_virtio.so.7.0 00:03:10.401 SO libspdk_vfu_tgt.so.3.0 00:03:10.401 SYMLINK libspdk_init.so 00:03:10.658 SYMLINK libspdk_virtio.so 00:03:10.658 SYMLINK libspdk_vfu_tgt.so 00:03:10.658 LIB libspdk_fsdev.a 00:03:10.658 SO libspdk_fsdev.so.2.0 00:03:10.916 SYMLINK libspdk_fsdev.so 00:03:10.916 CC lib/event/app.o 00:03:10.916 CC lib/event/reactor.o 00:03:10.916 CC lib/event/log_rpc.o 00:03:10.916 CC lib/event/app_rpc.o 00:03:10.916 CC lib/event/scheduler_static.o 00:03:10.916 LIB libspdk_accel.a 00:03:11.173 SO libspdk_accel.so.16.0 00:03:11.173 SYMLINK libspdk_accel.so 00:03:11.173 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:11.173 LIB libspdk_nvme.a 00:03:11.173 LIB libspdk_event.a 00:03:11.173 SO libspdk_event.so.14.0 00:03:11.173 SO libspdk_nvme.so.15.0 00:03:11.173 SYMLINK libspdk_event.so 00:03:11.431 SYMLINK libspdk_nvme.so 00:03:11.431 CC lib/bdev/bdev.o 00:03:11.431 CC lib/bdev/bdev_rpc.o 00:03:11.431 CC lib/bdev/bdev_zone.o 00:03:11.431 CC lib/bdev/part.o 00:03:11.431 CC lib/bdev/scsi_nvme.o 00:03:11.689 LIB libspdk_fuse_dispatcher.a 00:03:11.689 SO libspdk_fuse_dispatcher.so.1.0 00:03:11.689 SYMLINK libspdk_fuse_dispatcher.so 00:03:12.257 LIB libspdk_blob.a 00:03:12.257 SO libspdk_blob.so.12.0 00:03:12.517 SYMLINK libspdk_blob.so 00:03:12.777 CC lib/lvol/lvol.o 00:03:12.777 CC lib/blobfs/blobfs.o 00:03:12.777 CC lib/blobfs/tree.o 00:03:13.347 LIB libspdk_bdev.a 00:03:13.347 SO libspdk_bdev.so.17.0 00:03:13.347 LIB libspdk_blobfs.a 00:03:13.347 SO libspdk_blobfs.so.11.0 00:03:13.347 LIB libspdk_lvol.a 00:03:13.347 SYMLINK libspdk_bdev.so 00:03:13.607 SO libspdk_lvol.so.11.0 00:03:13.607 SYMLINK libspdk_blobfs.so 00:03:13.607 SYMLINK libspdk_lvol.so 00:03:13.867 CC lib/nbd/nbd.o 00:03:13.867 CC lib/nbd/nbd_rpc.o 00:03:13.867 CC lib/nvmf/ctrlr.o 00:03:13.867 CC lib/nvmf/ctrlr_discovery.o 00:03:13.867 CC lib/nvmf/ctrlr_bdev.o 00:03:13.867 CC lib/nvmf/subsystem.o 00:03:13.867 CC lib/scsi/dev.o 00:03:13.867 CC lib/scsi/lun.o 00:03:13.867 CC lib/nvmf/nvmf.o 00:03:13.867 CC lib/scsi/port.o 00:03:13.867 CC lib/nvmf/nvmf_rpc.o 00:03:13.867 CC lib/nvmf/transport.o 00:03:13.867 CC lib/scsi/scsi.o 00:03:13.867 CC lib/scsi/scsi_bdev.o 00:03:13.867 CC lib/nvmf/tcp.o 00:03:13.867 CC lib/nvmf/stubs.o 00:03:13.867 CC lib/scsi/scsi_pr.o 00:03:13.867 CC lib/nvmf/mdns_server.o 00:03:13.867 CC lib/scsi/scsi_rpc.o 00:03:13.867 CC lib/nvmf/vfio_user.o 00:03:13.867 CC lib/scsi/task.o 00:03:13.867 CC lib/ublk/ublk.o 00:03:13.867 CC lib/nvmf/rdma.o 00:03:13.867 CC lib/ftl/ftl_core.o 00:03:13.867 CC lib/ftl/ftl_init.o 00:03:13.867 CC lib/ublk/ublk_rpc.o 00:03:13.867 CC lib/nvmf/auth.o 00:03:13.867 CC lib/ftl/ftl_layout.o 00:03:13.867 CC lib/ftl/ftl_io.o 00:03:13.867 CC lib/ftl/ftl_debug.o 00:03:13.867 CC lib/ftl/ftl_sb.o 00:03:13.867 CC lib/ftl/ftl_l2p.o 00:03:13.867 CC lib/ftl/ftl_l2p_flat.o 00:03:13.867 CC lib/ftl/ftl_nv_cache.o 00:03:13.867 CC lib/ftl/ftl_band.o 00:03:13.867 CC lib/ftl/ftl_band_ops.o 00:03:13.867 CC lib/ftl/ftl_writer.o 00:03:13.867 CC lib/ftl/ftl_reloc.o 00:03:13.867 CC lib/ftl/ftl_rq.o 00:03:13.867 CC lib/ftl/ftl_l2p_cache.o 00:03:13.867 CC lib/ftl/ftl_p2l.o 00:03:13.867 CC lib/ftl/ftl_p2l_log.o 00:03:13.867 CC lib/ftl/mngt/ftl_mngt.o 00:03:13.867 CC 
lib/ftl/mngt/ftl_mngt_bdev.o 00:03:13.867 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:13.867 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:13.867 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:13.867 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:13.867 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:13.867 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:13.867 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:13.867 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:13.867 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:13.867 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:13.867 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:13.867 CC lib/ftl/utils/ftl_md.o 00:03:13.867 CC lib/ftl/utils/ftl_conf.o 00:03:13.867 CC lib/ftl/utils/ftl_mempool.o 00:03:13.867 CC lib/ftl/utils/ftl_bitmap.o 00:03:13.867 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:13.867 CC lib/ftl/utils/ftl_property.o 00:03:13.867 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:13.867 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:13.867 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:13.867 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:13.867 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:13.867 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:13.867 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:13.867 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:13.867 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:13.867 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:13.867 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:13.867 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:13.867 CC lib/ftl/base/ftl_base_dev.o 00:03:13.867 CC lib/ftl/base/ftl_base_bdev.o 00:03:13.867 CC lib/ftl/ftl_trace.o 00:03:14.438 LIB libspdk_nbd.a 00:03:14.438 SO libspdk_nbd.so.7.0 00:03:14.438 LIB libspdk_scsi.a 00:03:14.438 SYMLINK libspdk_nbd.so 00:03:14.438 LIB libspdk_ublk.a 00:03:14.438 SO libspdk_scsi.so.9.0 00:03:14.695 SO libspdk_ublk.so.3.0 00:03:14.695 SYMLINK libspdk_scsi.so 00:03:14.695 SYMLINK libspdk_ublk.so 00:03:14.953 CC lib/vhost/vhost.o 00:03:14.953 CC lib/vhost/vhost_scsi.o 00:03:14.953 CC lib/vhost/vhost_blk.o 00:03:14.953 CC lib/vhost/vhost_rpc.o 00:03:14.953 CC lib/vhost/rte_vhost_user.o 00:03:14.953 LIB libspdk_ftl.a 00:03:14.953 CC lib/iscsi/conn.o 00:03:14.953 CC lib/iscsi/init_grp.o 00:03:14.953 CC lib/iscsi/iscsi.o 00:03:14.953 CC lib/iscsi/param.o 00:03:14.953 CC lib/iscsi/portal_grp.o 00:03:14.953 CC lib/iscsi/tgt_node.o 00:03:14.953 CC lib/iscsi/iscsi_subsystem.o 00:03:14.953 CC lib/iscsi/iscsi_rpc.o 00:03:14.953 CC lib/iscsi/task.o 00:03:15.212 SO libspdk_ftl.so.9.0 00:03:15.212 SYMLINK libspdk_ftl.so 00:03:15.780 LIB libspdk_nvmf.a 00:03:15.780 SO libspdk_nvmf.so.20.0 00:03:15.780 LIB libspdk_vhost.a 00:03:15.780 SO libspdk_vhost.so.8.0 00:03:15.780 SYMLINK libspdk_nvmf.so 00:03:15.780 SYMLINK libspdk_vhost.so 00:03:16.040 LIB libspdk_iscsi.a 00:03:16.040 SO libspdk_iscsi.so.8.0 00:03:16.040 SYMLINK libspdk_iscsi.so 00:03:16.608 CC module/vfu_device/vfu_virtio.o 00:03:16.608 CC module/vfu_device/vfu_virtio_scsi.o 00:03:16.608 CC module/vfu_device/vfu_virtio_blk.o 00:03:16.608 CC module/vfu_device/vfu_virtio_rpc.o 00:03:16.608 CC module/vfu_device/vfu_virtio_fs.o 00:03:16.608 CC module/env_dpdk/env_dpdk_rpc.o 00:03:16.867 LIB libspdk_env_dpdk_rpc.a 00:03:16.867 CC module/keyring/file/keyring.o 00:03:16.867 CC module/keyring/file/keyring_rpc.o 00:03:16.867 CC module/accel/ioat/accel_ioat.o 00:03:16.867 CC module/accel/ioat/accel_ioat_rpc.o 00:03:16.867 CC module/keyring/linux/keyring.o 00:03:16.867 CC module/keyring/linux/keyring_rpc.o 00:03:16.867 CC module/sock/posix/posix.o 00:03:16.867 CC module/blob/bdev/blob_bdev.o 00:03:16.867 CC module/accel/iaa/accel_iaa.o 00:03:16.867 CC 
module/scheduler/gscheduler/gscheduler.o 00:03:16.867 CC module/accel/iaa/accel_iaa_rpc.o 00:03:16.867 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:16.867 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:16.867 CC module/accel/error/accel_error.o 00:03:16.867 CC module/accel/error/accel_error_rpc.o 00:03:16.867 CC module/fsdev/aio/fsdev_aio.o 00:03:16.867 SO libspdk_env_dpdk_rpc.so.6.0 00:03:16.867 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:16.867 CC module/accel/dsa/accel_dsa.o 00:03:16.867 CC module/fsdev/aio/linux_aio_mgr.o 00:03:16.867 CC module/accel/dsa/accel_dsa_rpc.o 00:03:16.867 SYMLINK libspdk_env_dpdk_rpc.so 00:03:17.126 LIB libspdk_keyring_file.a 00:03:17.126 LIB libspdk_keyring_linux.a 00:03:17.126 LIB libspdk_scheduler_dpdk_governor.a 00:03:17.126 SO libspdk_keyring_file.so.2.0 00:03:17.126 LIB libspdk_scheduler_gscheduler.a 00:03:17.126 LIB libspdk_accel_ioat.a 00:03:17.126 SO libspdk_scheduler_gscheduler.so.4.0 00:03:17.126 SO libspdk_keyring_linux.so.1.0 00:03:17.126 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:17.126 LIB libspdk_scheduler_dynamic.a 00:03:17.126 SO libspdk_accel_ioat.so.6.0 00:03:17.126 LIB libspdk_accel_iaa.a 00:03:17.126 LIB libspdk_accel_error.a 00:03:17.126 SYMLINK libspdk_keyring_file.so 00:03:17.126 SO libspdk_accel_iaa.so.3.0 00:03:17.126 SO libspdk_scheduler_dynamic.so.4.0 00:03:17.126 SYMLINK libspdk_scheduler_gscheduler.so 00:03:17.126 SYMLINK libspdk_keyring_linux.so 00:03:17.126 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:17.126 SO libspdk_accel_error.so.2.0 00:03:17.126 LIB libspdk_blob_bdev.a 00:03:17.126 LIB libspdk_accel_dsa.a 00:03:17.126 SYMLINK libspdk_accel_ioat.so 00:03:17.126 SO libspdk_blob_bdev.so.12.0 00:03:17.126 SYMLINK libspdk_accel_iaa.so 00:03:17.126 SYMLINK libspdk_scheduler_dynamic.so 00:03:17.126 SO libspdk_accel_dsa.so.5.0 00:03:17.126 SYMLINK libspdk_accel_error.so 00:03:17.126 LIB libspdk_vfu_device.a 00:03:17.126 SYMLINK libspdk_blob_bdev.so 00:03:17.384 SO libspdk_vfu_device.so.3.0 00:03:17.384 SYMLINK libspdk_accel_dsa.so 00:03:17.384 SYMLINK libspdk_vfu_device.so 00:03:17.384 LIB libspdk_fsdev_aio.a 00:03:17.384 SO libspdk_fsdev_aio.so.1.0 00:03:17.384 LIB libspdk_sock_posix.a 00:03:17.643 SO libspdk_sock_posix.so.6.0 00:03:17.643 SYMLINK libspdk_fsdev_aio.so 00:03:17.643 SYMLINK libspdk_sock_posix.so 00:03:17.643 CC module/blobfs/bdev/blobfs_bdev.o 00:03:17.643 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:17.643 CC module/bdev/gpt/gpt.o 00:03:17.643 CC module/bdev/gpt/vbdev_gpt.o 00:03:17.643 CC module/bdev/delay/vbdev_delay.o 00:03:17.643 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:17.643 CC module/bdev/split/vbdev_split.o 00:03:17.643 CC module/bdev/split/vbdev_split_rpc.o 00:03:17.901 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:17.901 CC module/bdev/nvme/bdev_nvme.o 00:03:17.901 CC module/bdev/nvme/nvme_rpc.o 00:03:17.901 CC module/bdev/null/bdev_null.o 00:03:17.901 CC module/bdev/lvol/vbdev_lvol.o 00:03:17.901 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:17.901 CC module/bdev/nvme/bdev_mdns_client.o 00:03:17.901 CC module/bdev/null/bdev_null_rpc.o 00:03:17.901 CC module/bdev/nvme/vbdev_opal.o 00:03:17.901 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:17.901 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:17.901 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:17.901 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:17.901 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:17.901 CC module/bdev/error/vbdev_error_rpc.o 00:03:17.901 CC module/bdev/error/vbdev_error.o 00:03:17.901 CC 
module/bdev/malloc/bdev_malloc.o 00:03:17.901 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:17.901 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:17.901 CC module/bdev/iscsi/bdev_iscsi.o 00:03:17.901 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:17.901 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:17.901 CC module/bdev/passthru/vbdev_passthru.o 00:03:17.901 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:17.901 CC module/bdev/ftl/bdev_ftl.o 00:03:17.901 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:17.901 CC module/bdev/raid/bdev_raid.o 00:03:17.901 CC module/bdev/raid/bdev_raid_rpc.o 00:03:17.901 CC module/bdev/raid/bdev_raid_sb.o 00:03:17.901 CC module/bdev/aio/bdev_aio.o 00:03:17.901 CC module/bdev/aio/bdev_aio_rpc.o 00:03:17.901 CC module/bdev/raid/raid0.o 00:03:17.901 CC module/bdev/raid/raid1.o 00:03:17.901 CC module/bdev/raid/concat.o 00:03:17.901 LIB libspdk_blobfs_bdev.a 00:03:18.160 SO libspdk_blobfs_bdev.so.6.0 00:03:18.160 LIB libspdk_bdev_split.a 00:03:18.160 SYMLINK libspdk_blobfs_bdev.so 00:03:18.160 LIB libspdk_bdev_null.a 00:03:18.160 SO libspdk_bdev_split.so.6.0 00:03:18.160 LIB libspdk_bdev_gpt.a 00:03:18.160 LIB libspdk_bdev_error.a 00:03:18.160 SO libspdk_bdev_null.so.6.0 00:03:18.160 SO libspdk_bdev_gpt.so.6.0 00:03:18.160 LIB libspdk_bdev_zone_block.a 00:03:18.160 SO libspdk_bdev_error.so.6.0 00:03:18.160 LIB libspdk_bdev_ftl.a 00:03:18.160 LIB libspdk_bdev_delay.a 00:03:18.160 SYMLINK libspdk_bdev_split.so 00:03:18.160 LIB libspdk_bdev_passthru.a 00:03:18.160 LIB libspdk_bdev_malloc.a 00:03:18.160 LIB libspdk_bdev_aio.a 00:03:18.160 SYMLINK libspdk_bdev_null.so 00:03:18.160 SO libspdk_bdev_zone_block.so.6.0 00:03:18.160 SO libspdk_bdev_ftl.so.6.0 00:03:18.160 SO libspdk_bdev_delay.so.6.0 00:03:18.160 SO libspdk_bdev_passthru.so.6.0 00:03:18.160 SO libspdk_bdev_malloc.so.6.0 00:03:18.160 SYMLINK libspdk_bdev_gpt.so 00:03:18.160 SO libspdk_bdev_aio.so.6.0 00:03:18.160 LIB libspdk_bdev_iscsi.a 00:03:18.160 SYMLINK libspdk_bdev_error.so 00:03:18.160 SO libspdk_bdev_iscsi.so.6.0 00:03:18.160 SYMLINK libspdk_bdev_zone_block.so 00:03:18.160 SYMLINK libspdk_bdev_ftl.so 00:03:18.160 SYMLINK libspdk_bdev_delay.so 00:03:18.420 SYMLINK libspdk_bdev_passthru.so 00:03:18.420 SYMLINK libspdk_bdev_malloc.so 00:03:18.420 SYMLINK libspdk_bdev_aio.so 00:03:18.420 LIB libspdk_bdev_lvol.a 00:03:18.420 LIB libspdk_bdev_virtio.a 00:03:18.420 SYMLINK libspdk_bdev_iscsi.so 00:03:18.420 SO libspdk_bdev_lvol.so.6.0 00:03:18.420 SO libspdk_bdev_virtio.so.6.0 00:03:18.420 SYMLINK libspdk_bdev_lvol.so 00:03:18.420 SYMLINK libspdk_bdev_virtio.so 00:03:18.679 LIB libspdk_bdev_raid.a 00:03:18.679 SO libspdk_bdev_raid.so.6.0 00:03:18.679 SYMLINK libspdk_bdev_raid.so 00:03:19.617 LIB libspdk_bdev_nvme.a 00:03:19.617 SO libspdk_bdev_nvme.so.7.1 00:03:19.876 SYMLINK libspdk_bdev_nvme.so 00:03:20.445 CC module/event/subsystems/iobuf/iobuf.o 00:03:20.445 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:20.445 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:20.445 CC module/event/subsystems/vmd/vmd.o 00:03:20.445 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:20.445 CC module/event/subsystems/sock/sock.o 00:03:20.445 CC module/event/subsystems/keyring/keyring.o 00:03:20.445 CC module/event/subsystems/scheduler/scheduler.o 00:03:20.445 CC module/event/subsystems/fsdev/fsdev.o 00:03:20.445 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:20.704 LIB libspdk_event_fsdev.a 00:03:20.704 LIB libspdk_event_sock.a 00:03:20.704 LIB libspdk_event_vmd.a 00:03:20.704 LIB libspdk_event_vhost_blk.a 
00:03:20.704 LIB libspdk_event_keyring.a 00:03:20.704 LIB libspdk_event_iobuf.a 00:03:20.704 LIB libspdk_event_scheduler.a 00:03:20.704 LIB libspdk_event_vfu_tgt.a 00:03:20.704 SO libspdk_event_sock.so.5.0 00:03:20.704 SO libspdk_event_fsdev.so.1.0 00:03:20.704 SO libspdk_event_vmd.so.6.0 00:03:20.704 SO libspdk_event_vhost_blk.so.3.0 00:03:20.704 SO libspdk_event_keyring.so.1.0 00:03:20.704 SO libspdk_event_scheduler.so.4.0 00:03:20.704 SO libspdk_event_iobuf.so.3.0 00:03:20.704 SO libspdk_event_vfu_tgt.so.3.0 00:03:20.704 SYMLINK libspdk_event_vmd.so 00:03:20.704 SYMLINK libspdk_event_sock.so 00:03:20.705 SYMLINK libspdk_event_fsdev.so 00:03:20.705 SYMLINK libspdk_event_keyring.so 00:03:20.705 SYMLINK libspdk_event_vhost_blk.so 00:03:20.705 SYMLINK libspdk_event_scheduler.so 00:03:20.705 SYMLINK libspdk_event_vfu_tgt.so 00:03:20.705 SYMLINK libspdk_event_iobuf.so 00:03:21.274 CC module/event/subsystems/accel/accel.o 00:03:21.274 LIB libspdk_event_accel.a 00:03:21.274 SO libspdk_event_accel.so.6.0 00:03:21.274 SYMLINK libspdk_event_accel.so 00:03:21.843 CC module/event/subsystems/bdev/bdev.o 00:03:21.843 LIB libspdk_event_bdev.a 00:03:21.843 SO libspdk_event_bdev.so.6.0 00:03:21.843 SYMLINK libspdk_event_bdev.so 00:03:22.413 CC module/event/subsystems/scsi/scsi.o 00:03:22.413 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:22.413 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:22.413 CC module/event/subsystems/ublk/ublk.o 00:03:22.413 CC module/event/subsystems/nbd/nbd.o 00:03:22.413 LIB libspdk_event_nbd.a 00:03:22.413 LIB libspdk_event_ublk.a 00:03:22.413 LIB libspdk_event_scsi.a 00:03:22.413 SO libspdk_event_nbd.so.6.0 00:03:22.413 SO libspdk_event_ublk.so.3.0 00:03:22.413 SO libspdk_event_scsi.so.6.0 00:03:22.413 LIB libspdk_event_nvmf.a 00:03:22.673 SYMLINK libspdk_event_nbd.so 00:03:22.673 SO libspdk_event_nvmf.so.6.0 00:03:22.673 SYMLINK libspdk_event_ublk.so 00:03:22.673 SYMLINK libspdk_event_scsi.so 00:03:22.673 SYMLINK libspdk_event_nvmf.so 00:03:22.933 CC module/event/subsystems/iscsi/iscsi.o 00:03:22.933 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:23.193 LIB libspdk_event_vhost_scsi.a 00:03:23.193 LIB libspdk_event_iscsi.a 00:03:23.193 SO libspdk_event_vhost_scsi.so.3.0 00:03:23.193 SO libspdk_event_iscsi.so.6.0 00:03:23.193 SYMLINK libspdk_event_vhost_scsi.so 00:03:23.193 SYMLINK libspdk_event_iscsi.so 00:03:23.453 SO libspdk.so.6.0 00:03:23.453 SYMLINK libspdk.so 00:03:23.713 TEST_HEADER include/spdk/accel.h 00:03:23.713 TEST_HEADER include/spdk/accel_module.h 00:03:23.713 TEST_HEADER include/spdk/barrier.h 00:03:23.713 TEST_HEADER include/spdk/assert.h 00:03:23.713 TEST_HEADER include/spdk/base64.h 00:03:23.713 TEST_HEADER include/spdk/bdev_module.h 00:03:23.713 TEST_HEADER include/spdk/bdev.h 00:03:23.713 TEST_HEADER include/spdk/bdev_zone.h 00:03:23.713 TEST_HEADER include/spdk/bit_array.h 00:03:23.713 TEST_HEADER include/spdk/bit_pool.h 00:03:23.713 TEST_HEADER include/spdk/blob_bdev.h 00:03:23.713 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:23.713 TEST_HEADER include/spdk/blobfs.h 00:03:23.713 TEST_HEADER include/spdk/conf.h 00:03:23.713 TEST_HEADER include/spdk/blob.h 00:03:23.713 TEST_HEADER include/spdk/cpuset.h 00:03:23.713 TEST_HEADER include/spdk/config.h 00:03:23.713 TEST_HEADER include/spdk/crc16.h 00:03:23.714 TEST_HEADER include/spdk/crc32.h 00:03:23.714 TEST_HEADER include/spdk/crc64.h 00:03:23.714 TEST_HEADER include/spdk/dma.h 00:03:23.714 TEST_HEADER include/spdk/dif.h 00:03:23.714 CC app/spdk_nvme_identify/identify.o 00:03:23.714 
TEST_HEADER include/spdk/endian.h 00:03:23.714 CXX app/trace/trace.o 00:03:23.714 TEST_HEADER include/spdk/env_dpdk.h 00:03:23.714 TEST_HEADER include/spdk/env.h 00:03:23.714 TEST_HEADER include/spdk/event.h 00:03:23.714 TEST_HEADER include/spdk/fd_group.h 00:03:23.714 TEST_HEADER include/spdk/file.h 00:03:23.714 TEST_HEADER include/spdk/fd.h 00:03:23.714 TEST_HEADER include/spdk/fsdev.h 00:03:23.714 TEST_HEADER include/spdk/fsdev_module.h 00:03:23.714 CC app/spdk_nvme_discover/discovery_aer.o 00:03:23.714 TEST_HEADER include/spdk/gpt_spec.h 00:03:23.714 TEST_HEADER include/spdk/ftl.h 00:03:23.714 TEST_HEADER include/spdk/hexlify.h 00:03:23.714 TEST_HEADER include/spdk/idxd.h 00:03:23.714 TEST_HEADER include/spdk/histogram_data.h 00:03:23.714 TEST_HEADER include/spdk/init.h 00:03:23.714 TEST_HEADER include/spdk/idxd_spec.h 00:03:23.714 CC app/trace_record/trace_record.o 00:03:23.714 TEST_HEADER include/spdk/ioat.h 00:03:23.714 TEST_HEADER include/spdk/ioat_spec.h 00:03:23.714 TEST_HEADER include/spdk/iscsi_spec.h 00:03:23.714 CC app/spdk_top/spdk_top.o 00:03:23.714 TEST_HEADER include/spdk/jsonrpc.h 00:03:23.714 TEST_HEADER include/spdk/json.h 00:03:23.714 CC test/rpc_client/rpc_client_test.o 00:03:23.714 CC app/spdk_nvme_perf/perf.o 00:03:23.714 TEST_HEADER include/spdk/keyring.h 00:03:23.714 TEST_HEADER include/spdk/keyring_module.h 00:03:23.714 TEST_HEADER include/spdk/likely.h 00:03:23.714 TEST_HEADER include/spdk/log.h 00:03:23.714 TEST_HEADER include/spdk/memory.h 00:03:23.714 TEST_HEADER include/spdk/lvol.h 00:03:23.714 TEST_HEADER include/spdk/md5.h 00:03:23.714 TEST_HEADER include/spdk/mmio.h 00:03:23.714 TEST_HEADER include/spdk/nbd.h 00:03:23.714 TEST_HEADER include/spdk/net.h 00:03:23.714 CC app/spdk_lspci/spdk_lspci.o 00:03:23.714 TEST_HEADER include/spdk/nvme.h 00:03:23.714 TEST_HEADER include/spdk/notify.h 00:03:23.714 TEST_HEADER include/spdk/nvme_intel.h 00:03:23.714 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:23.714 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:23.714 TEST_HEADER include/spdk/nvme_zns.h 00:03:23.714 TEST_HEADER include/spdk/nvme_spec.h 00:03:23.714 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:23.714 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:23.714 TEST_HEADER include/spdk/nvmf.h 00:03:23.714 TEST_HEADER include/spdk/nvmf_spec.h 00:03:23.714 TEST_HEADER include/spdk/opal.h 00:03:23.714 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:23.714 TEST_HEADER include/spdk/nvmf_transport.h 00:03:23.714 TEST_HEADER include/spdk/opal_spec.h 00:03:23.714 TEST_HEADER include/spdk/pci_ids.h 00:03:23.714 TEST_HEADER include/spdk/pipe.h 00:03:23.714 TEST_HEADER include/spdk/queue.h 00:03:23.714 TEST_HEADER include/spdk/rpc.h 00:03:23.714 TEST_HEADER include/spdk/reduce.h 00:03:23.714 TEST_HEADER include/spdk/scheduler.h 00:03:23.714 TEST_HEADER include/spdk/scsi.h 00:03:23.714 TEST_HEADER include/spdk/scsi_spec.h 00:03:23.714 TEST_HEADER include/spdk/sock.h 00:03:23.714 TEST_HEADER include/spdk/thread.h 00:03:23.714 TEST_HEADER include/spdk/string.h 00:03:23.714 TEST_HEADER include/spdk/stdinc.h 00:03:23.714 TEST_HEADER include/spdk/trace.h 00:03:23.714 TEST_HEADER include/spdk/trace_parser.h 00:03:23.714 TEST_HEADER include/spdk/tree.h 00:03:23.714 TEST_HEADER include/spdk/ublk.h 00:03:23.714 TEST_HEADER include/spdk/version.h 00:03:23.714 TEST_HEADER include/spdk/uuid.h 00:03:23.714 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:23.714 TEST_HEADER include/spdk/util.h 00:03:23.714 TEST_HEADER include/spdk/vhost.h 00:03:23.714 CC app/nvmf_tgt/nvmf_main.o 00:03:23.714 
TEST_HEADER include/spdk/vfio_user_spec.h 00:03:23.714 CC app/iscsi_tgt/iscsi_tgt.o 00:03:23.714 TEST_HEADER include/spdk/vmd.h 00:03:23.714 TEST_HEADER include/spdk/zipf.h 00:03:23.714 TEST_HEADER include/spdk/xor.h 00:03:23.714 CXX test/cpp_headers/accel.o 00:03:23.714 CXX test/cpp_headers/assert.o 00:03:23.714 CC app/spdk_dd/spdk_dd.o 00:03:23.714 CXX test/cpp_headers/base64.o 00:03:23.714 CXX test/cpp_headers/accel_module.o 00:03:23.714 CXX test/cpp_headers/barrier.o 00:03:23.714 CXX test/cpp_headers/bdev.o 00:03:23.714 CXX test/cpp_headers/bdev_zone.o 00:03:23.714 CXX test/cpp_headers/bdev_module.o 00:03:23.714 CXX test/cpp_headers/bit_pool.o 00:03:23.714 CXX test/cpp_headers/bit_array.o 00:03:23.714 CXX test/cpp_headers/blob_bdev.o 00:03:23.714 CXX test/cpp_headers/blobfs_bdev.o 00:03:23.714 CXX test/cpp_headers/blobfs.o 00:03:23.714 CXX test/cpp_headers/conf.o 00:03:23.714 CXX test/cpp_headers/cpuset.o 00:03:23.714 CXX test/cpp_headers/blob.o 00:03:23.714 CXX test/cpp_headers/config.o 00:03:23.714 CXX test/cpp_headers/crc16.o 00:03:23.714 CXX test/cpp_headers/dma.o 00:03:23.714 CXX test/cpp_headers/crc32.o 00:03:23.714 CXX test/cpp_headers/dif.o 00:03:23.714 CXX test/cpp_headers/endian.o 00:03:23.714 CXX test/cpp_headers/crc64.o 00:03:23.714 CXX test/cpp_headers/env_dpdk.o 00:03:23.714 CXX test/cpp_headers/env.o 00:03:23.714 CXX test/cpp_headers/event.o 00:03:23.988 CXX test/cpp_headers/fd.o 00:03:23.988 CC app/spdk_tgt/spdk_tgt.o 00:03:23.988 CXX test/cpp_headers/file.o 00:03:23.988 CXX test/cpp_headers/fsdev.o 00:03:23.988 CXX test/cpp_headers/fsdev_module.o 00:03:23.988 CXX test/cpp_headers/fd_group.o 00:03:23.988 CXX test/cpp_headers/ftl.o 00:03:23.988 CXX test/cpp_headers/hexlify.o 00:03:23.988 CXX test/cpp_headers/histogram_data.o 00:03:23.988 CXX test/cpp_headers/gpt_spec.o 00:03:23.988 CXX test/cpp_headers/idxd.o 00:03:23.988 CXX test/cpp_headers/init.o 00:03:23.988 CXX test/cpp_headers/idxd_spec.o 00:03:23.988 CXX test/cpp_headers/ioat.o 00:03:23.988 CXX test/cpp_headers/ioat_spec.o 00:03:23.988 CXX test/cpp_headers/iscsi_spec.o 00:03:23.988 CXX test/cpp_headers/json.o 00:03:23.988 CXX test/cpp_headers/keyring_module.o 00:03:23.988 CXX test/cpp_headers/jsonrpc.o 00:03:23.988 CXX test/cpp_headers/keyring.o 00:03:23.988 CXX test/cpp_headers/likely.o 00:03:23.988 CXX test/cpp_headers/log.o 00:03:23.988 CXX test/cpp_headers/mmio.o 00:03:23.988 CXX test/cpp_headers/lvol.o 00:03:23.988 CXX test/cpp_headers/md5.o 00:03:23.988 CXX test/cpp_headers/memory.o 00:03:23.988 CXX test/cpp_headers/net.o 00:03:23.988 CXX test/cpp_headers/nbd.o 00:03:23.988 CXX test/cpp_headers/nvme.o 00:03:23.988 CXX test/cpp_headers/notify.o 00:03:23.988 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:23.988 CXX test/cpp_headers/nvme_intel.o 00:03:23.988 CXX test/cpp_headers/nvme_ocssd.o 00:03:23.988 CXX test/cpp_headers/nvmf_cmd.o 00:03:23.988 CXX test/cpp_headers/nvme_spec.o 00:03:23.988 CXX test/cpp_headers/nvme_zns.o 00:03:23.988 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:23.988 CXX test/cpp_headers/nvmf.o 00:03:23.988 CXX test/cpp_headers/nvmf_spec.o 00:03:23.988 CXX test/cpp_headers/opal.o 00:03:23.988 CXX test/cpp_headers/nvmf_transport.o 00:03:23.988 CXX test/cpp_headers/opal_spec.o 00:03:23.988 CXX test/cpp_headers/pci_ids.o 00:03:23.988 CC examples/ioat/verify/verify.o 00:03:23.988 CC examples/util/zipf/zipf.o 00:03:23.988 CC examples/ioat/perf/perf.o 00:03:23.988 CC test/env/vtophys/vtophys.o 00:03:23.988 CC test/env/memory/memory_ut.o 00:03:23.988 CC test/app/stub/stub.o 00:03:23.988 CC 
test/app/histogram_perf/histogram_perf.o 00:03:24.264 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:24.264 CC test/app/jsoncat/jsoncat.o 00:03:24.264 CC test/dma/test_dma/test_dma.o 00:03:24.264 CC test/thread/poller_perf/poller_perf.o 00:03:24.264 CC app/fio/nvme/fio_plugin.o 00:03:24.264 CC test/app/bdev_svc/bdev_svc.o 00:03:24.264 CC test/env/pci/pci_ut.o 00:03:24.264 LINK spdk_lspci 00:03:24.264 CC app/fio/bdev/fio_plugin.o 00:03:24.264 LINK rpc_client_test 00:03:24.264 LINK interrupt_tgt 00:03:24.264 LINK spdk_nvme_discover 00:03:24.527 LINK iscsi_tgt 00:03:24.527 CXX test/cpp_headers/queue.o 00:03:24.527 CXX test/cpp_headers/pipe.o 00:03:24.527 CXX test/cpp_headers/reduce.o 00:03:24.527 CXX test/cpp_headers/rpc.o 00:03:24.527 CXX test/cpp_headers/scheduler.o 00:03:24.527 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:24.527 CXX test/cpp_headers/scsi.o 00:03:24.527 LINK spdk_tgt 00:03:24.527 CXX test/cpp_headers/scsi_spec.o 00:03:24.527 CXX test/cpp_headers/sock.o 00:03:24.527 CXX test/cpp_headers/stdinc.o 00:03:24.527 CC test/env/mem_callbacks/mem_callbacks.o 00:03:24.527 CXX test/cpp_headers/string.o 00:03:24.527 CXX test/cpp_headers/trace.o 00:03:24.527 CXX test/cpp_headers/trace_parser.o 00:03:24.527 CXX test/cpp_headers/tree.o 00:03:24.527 CXX test/cpp_headers/thread.o 00:03:24.527 CXX test/cpp_headers/ublk.o 00:03:24.527 LINK zipf 00:03:24.527 CXX test/cpp_headers/util.o 00:03:24.527 CXX test/cpp_headers/uuid.o 00:03:24.527 CXX test/cpp_headers/version.o 00:03:24.527 CXX test/cpp_headers/vfio_user_pci.o 00:03:24.527 CXX test/cpp_headers/vfio_user_spec.o 00:03:24.527 CXX test/cpp_headers/vhost.o 00:03:24.527 CXX test/cpp_headers/vmd.o 00:03:24.527 CXX test/cpp_headers/xor.o 00:03:24.527 CXX test/cpp_headers/zipf.o 00:03:24.527 LINK histogram_perf 00:03:24.527 LINK nvmf_tgt 00:03:24.527 LINK env_dpdk_post_init 00:03:24.527 LINK verify 00:03:24.527 LINK ioat_perf 00:03:24.527 LINK spdk_trace_record 00:03:24.786 LINK vtophys 00:03:24.786 LINK poller_perf 00:03:24.786 LINK jsoncat 00:03:24.786 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:24.786 LINK stub 00:03:24.786 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:24.786 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:24.786 LINK bdev_svc 00:03:24.786 LINK spdk_dd 00:03:24.786 LINK mem_callbacks 00:03:25.045 LINK spdk_trace 00:03:25.045 LINK pci_ut 00:03:25.045 LINK test_dma 00:03:25.045 CC examples/idxd/perf/perf.o 00:03:25.045 LINK spdk_top 00:03:25.045 CC examples/sock/hello_world/hello_sock.o 00:03:25.045 LINK spdk_nvme_perf 00:03:25.045 LINK nvme_fuzz 00:03:25.045 LINK spdk_nvme_identify 00:03:25.045 CC examples/vmd/led/led.o 00:03:25.045 CC examples/vmd/lsvmd/lsvmd.o 00:03:25.045 CC examples/thread/thread/thread_ex.o 00:03:25.045 LINK vhost_fuzz 00:03:25.303 CC test/event/event_perf/event_perf.o 00:03:25.303 CC test/event/reactor/reactor.o 00:03:25.303 CC test/event/reactor_perf/reactor_perf.o 00:03:25.303 CC test/event/app_repeat/app_repeat.o 00:03:25.303 LINK spdk_bdev 00:03:25.303 LINK spdk_nvme 00:03:25.303 CC test/event/scheduler/scheduler.o 00:03:25.303 LINK memory_ut 00:03:25.303 CC app/vhost/vhost.o 00:03:25.303 LINK lsvmd 00:03:25.303 LINK led 00:03:25.303 LINK hello_sock 00:03:25.303 LINK event_perf 00:03:25.303 LINK reactor 00:03:25.303 LINK reactor_perf 00:03:25.303 LINK app_repeat 00:03:25.303 LINK thread 00:03:25.303 LINK idxd_perf 00:03:25.562 LINK vhost 00:03:25.562 CC test/nvme/reset/reset.o 00:03:25.562 CC test/nvme/sgl/sgl.o 00:03:25.562 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:25.562 CC 
test/nvme/e2edp/nvme_dp.o 00:03:25.562 CC test/nvme/err_injection/err_injection.o 00:03:25.562 CC test/nvme/overhead/overhead.o 00:03:25.562 LINK scheduler 00:03:25.562 CC test/nvme/startup/startup.o 00:03:25.562 CC test/nvme/fdp/fdp.o 00:03:25.562 CC test/nvme/simple_copy/simple_copy.o 00:03:25.562 CC test/nvme/cuse/cuse.o 00:03:25.562 CC test/nvme/fused_ordering/fused_ordering.o 00:03:25.562 CC test/nvme/connect_stress/connect_stress.o 00:03:25.562 CC test/nvme/compliance/nvme_compliance.o 00:03:25.562 CC test/nvme/reserve/reserve.o 00:03:25.562 CC test/nvme/aer/aer.o 00:03:25.562 CC test/nvme/boot_partition/boot_partition.o 00:03:25.562 CC test/blobfs/mkfs/mkfs.o 00:03:25.562 CC test/accel/dif/dif.o 00:03:25.562 CC test/lvol/esnap/esnap.o 00:03:25.820 LINK doorbell_aers 00:03:25.820 LINK startup 00:03:25.820 LINK err_injection 00:03:25.820 LINK boot_partition 00:03:25.820 LINK connect_stress 00:03:25.820 LINK fused_ordering 00:03:25.820 LINK reserve 00:03:25.820 LINK simple_copy 00:03:25.820 LINK sgl 00:03:25.820 LINK reset 00:03:25.820 LINK mkfs 00:03:25.820 CC examples/nvme/abort/abort.o 00:03:25.820 CC examples/nvme/arbitration/arbitration.o 00:03:25.820 CC examples/nvme/hotplug/hotplug.o 00:03:25.820 LINK overhead 00:03:25.820 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:25.820 LINK nvme_dp 00:03:25.820 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:25.820 LINK aer 00:03:25.820 CC examples/nvme/reconnect/reconnect.o 00:03:25.820 CC examples/nvme/hello_world/hello_world.o 00:03:25.820 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:25.820 LINK fdp 00:03:25.820 LINK nvme_compliance 00:03:25.820 CC examples/accel/perf/accel_perf.o 00:03:26.079 CC examples/blob/cli/blobcli.o 00:03:26.079 CC examples/blob/hello_world/hello_blob.o 00:03:26.079 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:26.079 LINK pmr_persistence 00:03:26.079 LINK cmb_copy 00:03:26.079 LINK hotplug 00:03:26.079 LINK hello_world 00:03:26.079 LINK arbitration 00:03:26.079 LINK reconnect 00:03:26.079 LINK abort 00:03:26.079 LINK dif 00:03:26.079 LINK iscsi_fuzz 00:03:26.079 LINK hello_blob 00:03:26.338 LINK hello_fsdev 00:03:26.338 LINK nvme_manage 00:03:26.338 LINK accel_perf 00:03:26.338 LINK blobcli 00:03:26.598 LINK cuse 00:03:26.873 CC test/bdev/bdevio/bdevio.o 00:03:26.873 CC examples/bdev/hello_world/hello_bdev.o 00:03:26.873 CC examples/bdev/bdevperf/bdevperf.o 00:03:27.156 LINK bdevio 00:03:27.156 LINK hello_bdev 00:03:27.451 LINK bdevperf 00:03:28.050 CC examples/nvmf/nvmf/nvmf.o 00:03:28.336 LINK nvmf 00:03:29.354 LINK esnap 00:03:29.622 00:03:29.622 real 0m55.482s 00:03:29.622 user 6m47.389s 00:03:29.622 sys 2m54.812s 00:03:29.622 05:19:29 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:29.622 05:19:29 make -- common/autotest_common.sh@10 -- $ set +x 00:03:29.622 ************************************ 00:03:29.622 END TEST make 00:03:29.622 ************************************ 00:03:29.622 05:19:29 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:29.622 05:19:29 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:29.622 05:19:29 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:29.622 05:19:29 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:29.622 05:19:29 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:29.622 05:19:29 -- pm/common@44 -- $ pid=7583 00:03:29.622 05:19:29 -- pm/common@50 -- $ kill -TERM 7583 00:03:29.622 05:19:29 -- pm/common@42 -- $ for monitor in 
"${MONITOR_RESOURCES[@]}" 00:03:29.622 05:19:29 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:29.622 05:19:29 -- pm/common@44 -- $ pid=7584 00:03:29.622 05:19:29 -- pm/common@50 -- $ kill -TERM 7584 00:03:29.622 05:19:29 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:29.622 05:19:29 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:29.622 05:19:29 -- pm/common@44 -- $ pid=7587 00:03:29.622 05:19:29 -- pm/common@50 -- $ kill -TERM 7587 00:03:29.622 05:19:29 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:29.622 05:19:29 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:29.622 05:19:29 -- pm/common@44 -- $ pid=7613 00:03:29.622 05:19:29 -- pm/common@50 -- $ sudo -E kill -TERM 7613 00:03:29.622 05:19:29 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:29.622 05:19:29 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:03:29.622 05:19:29 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:29.622 05:19:29 -- common/autotest_common.sh@1711 -- # lcov --version 00:03:29.622 05:19:29 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:29.882 05:19:29 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:29.882 05:19:29 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:29.882 05:19:29 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:29.882 05:19:29 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:29.882 05:19:29 -- scripts/common.sh@336 -- # IFS=.-: 00:03:29.882 05:19:29 -- scripts/common.sh@336 -- # read -ra ver1 00:03:29.882 05:19:29 -- scripts/common.sh@337 -- # IFS=.-: 00:03:29.882 05:19:29 -- scripts/common.sh@337 -- # read -ra ver2 00:03:29.882 05:19:29 -- scripts/common.sh@338 -- # local 'op=<' 00:03:29.882 05:19:29 -- scripts/common.sh@340 -- # ver1_l=2 00:03:29.882 05:19:29 -- scripts/common.sh@341 -- # ver2_l=1 00:03:29.882 05:19:29 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:29.882 05:19:29 -- scripts/common.sh@344 -- # case "$op" in 00:03:29.882 05:19:29 -- scripts/common.sh@345 -- # : 1 00:03:29.882 05:19:29 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:29.882 05:19:29 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:29.882 05:19:29 -- scripts/common.sh@365 -- # decimal 1 00:03:29.882 05:19:29 -- scripts/common.sh@353 -- # local d=1 00:03:29.882 05:19:29 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:29.882 05:19:29 -- scripts/common.sh@355 -- # echo 1 00:03:29.882 05:19:29 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:29.882 05:19:29 -- scripts/common.sh@366 -- # decimal 2 00:03:29.882 05:19:29 -- scripts/common.sh@353 -- # local d=2 00:03:29.882 05:19:29 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:29.882 05:19:29 -- scripts/common.sh@355 -- # echo 2 00:03:29.882 05:19:29 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:29.882 05:19:29 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:29.882 05:19:29 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:29.882 05:19:29 -- scripts/common.sh@368 -- # return 0 00:03:29.882 05:19:29 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:29.882 05:19:29 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:29.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:29.882 --rc genhtml_branch_coverage=1 00:03:29.882 --rc genhtml_function_coverage=1 00:03:29.882 --rc genhtml_legend=1 00:03:29.882 --rc geninfo_all_blocks=1 00:03:29.882 --rc geninfo_unexecuted_blocks=1 00:03:29.882 00:03:29.882 ' 00:03:29.882 05:19:29 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:29.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:29.882 --rc genhtml_branch_coverage=1 00:03:29.882 --rc genhtml_function_coverage=1 00:03:29.882 --rc genhtml_legend=1 00:03:29.882 --rc geninfo_all_blocks=1 00:03:29.882 --rc geninfo_unexecuted_blocks=1 00:03:29.882 00:03:29.882 ' 00:03:29.882 05:19:29 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:29.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:29.882 --rc genhtml_branch_coverage=1 00:03:29.882 --rc genhtml_function_coverage=1 00:03:29.882 --rc genhtml_legend=1 00:03:29.882 --rc geninfo_all_blocks=1 00:03:29.882 --rc geninfo_unexecuted_blocks=1 00:03:29.882 00:03:29.882 ' 00:03:29.882 05:19:29 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:29.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:29.882 --rc genhtml_branch_coverage=1 00:03:29.882 --rc genhtml_function_coverage=1 00:03:29.882 --rc genhtml_legend=1 00:03:29.882 --rc geninfo_all_blocks=1 00:03:29.882 --rc geninfo_unexecuted_blocks=1 00:03:29.882 00:03:29.882 ' 00:03:29.882 05:19:29 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:29.882 05:19:29 -- nvmf/common.sh@7 -- # uname -s 00:03:29.882 05:19:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:29.882 05:19:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:29.882 05:19:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:29.882 05:19:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:29.882 05:19:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:29.882 05:19:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:29.882 05:19:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:29.882 05:19:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:29.882 05:19:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:29.882 05:19:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:29.882 05:19:29 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:03:29.882 05:19:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:03:29.882 05:19:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:29.882 05:19:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:29.882 05:19:29 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:29.882 05:19:29 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:29.882 05:19:29 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:29.882 05:19:29 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:29.882 05:19:29 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:29.882 05:19:29 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:29.882 05:19:29 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:29.882 05:19:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:29.882 05:19:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:29.882 05:19:29 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:29.882 05:19:29 -- paths/export.sh@5 -- # export PATH 00:03:29.882 05:19:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:29.883 05:19:29 -- nvmf/common.sh@51 -- # : 0 00:03:29.883 05:19:29 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:29.883 05:19:29 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:29.883 05:19:29 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:29.883 05:19:29 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:29.883 05:19:29 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:29.883 05:19:29 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:29.883 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:29.883 05:19:29 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:29.883 05:19:29 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:29.883 05:19:29 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:29.883 05:19:29 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:29.883 05:19:29 -- spdk/autotest.sh@32 -- # uname -s 00:03:29.883 05:19:29 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:29.883 05:19:29 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:29.883 05:19:29 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 
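The entries around this point reroute kernel core dumps into the build's output tree: the systemd-coredump handler is saved (old_core_pattern, above), a coredumps directory is created, and the echo that follows installs SPDK's collector script as a pipe handler. A sketch of the mechanism, with $rootdir and $output_dir standing in for the workspace paths:
# redirect core dumps through a collector script (placeholder variables)
old_core_pattern=$(< /proc/sys/kernel/core_pattern)
mkdir -p "$output_dir/coredumps"
echo "|$rootdir/scripts/core-collector.sh %P %s %t" > /proc/sys/kernel/core_pattern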
00:03:29.883 05:19:29 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:29.883 05:19:29 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:29.883 05:19:29 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:29.883 05:19:29 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:29.883 05:19:29 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:29.883 05:19:29 -- spdk/autotest.sh@48 -- # udevadm_pid=88034 00:03:29.883 05:19:29 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:29.883 05:19:29 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:29.883 05:19:29 -- pm/common@17 -- # local monitor 00:03:29.883 05:19:29 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:29.883 05:19:29 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:29.883 05:19:29 -- pm/common@21 -- # date +%s 00:03:29.883 05:19:29 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:29.883 05:19:29 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:29.883 05:19:29 -- pm/common@21 -- # date +%s 00:03:29.883 05:19:29 -- pm/common@25 -- # sleep 1 00:03:29.883 05:19:29 -- pm/common@21 -- # date +%s 00:03:29.883 05:19:29 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1734063569 00:03:29.883 05:19:29 -- pm/common@21 -- # date +%s 00:03:29.883 05:19:29 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1734063569 00:03:29.883 05:19:29 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1734063569 00:03:29.883 05:19:29 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1734063569 00:03:29.883 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1734063569_collect-cpu-load.pm.log 00:03:29.883 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1734063569_collect-cpu-temp.pm.log 00:03:29.883 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1734063569_collect-vmstat.pm.log 00:03:29.883 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1734063569_collect-bmc-pm.bmc.pm.log 00:03:30.822 05:19:30 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:30.822 05:19:30 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:30.822 05:19:30 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:30.822 05:19:30 -- common/autotest_common.sh@10 -- # set +x 00:03:31.081 05:19:30 -- spdk/autotest.sh@59 -- # create_test_list 00:03:31.081 05:19:30 -- common/autotest_common.sh@752 -- # xtrace_disable 00:03:31.081 05:19:30 -- common/autotest_common.sh@10 -- # set +x 00:03:31.081 05:19:30 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:03:31.081 05:19:30 -- 
spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:31.081 05:19:30 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:31.081 05:19:30 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:31.081 05:19:30 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:31.081 05:19:30 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:31.081 05:19:30 -- common/autotest_common.sh@1457 -- # uname 00:03:31.081 05:19:30 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:03:31.081 05:19:30 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:31.081 05:19:30 -- common/autotest_common.sh@1477 -- # uname 00:03:31.081 05:19:30 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:03:31.081 05:19:30 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:31.081 05:19:30 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:31.081 lcov: LCOV version 1.15 00:03:31.081 05:19:30 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:03:53.029 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:53.029 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:56.322 05:19:56 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:56.323 05:19:56 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:56.323 05:19:56 -- common/autotest_common.sh@10 -- # set +x 00:03:56.323 05:19:56 -- spdk/autotest.sh@78 -- # rm -f 00:03:56.323 05:19:56 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:58.862 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:03:58.862 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:03:58.862 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:03:58.862 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:03:58.862 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:03:59.122 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:03:59.122 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:03:59.122 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:03:59.122 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:03:59.122 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:03:59.122 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:03:59.122 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:03:59.122 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:03:59.122 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:03:59.122 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:03:59.122 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:03:59.122 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:03:59.381 05:19:59 -- 
spdk/autotest.sh@83 -- # get_zoned_devs 00:03:59.381 05:19:59 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:03:59.381 05:19:59 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:03:59.381 05:19:59 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:03:59.381 05:19:59 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:03:59.381 05:19:59 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:03:59.381 05:19:59 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:03:59.381 05:19:59 -- common/autotest_common.sh@1669 -- # bdf=0000:5e:00.0 00:03:59.381 05:19:59 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:03:59.381 05:19:59 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:03:59.381 05:19:59 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:03:59.381 05:19:59 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:59.381 05:19:59 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:59.381 05:19:59 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:59.381 05:19:59 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:59.381 05:19:59 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:59.381 05:19:59 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:59.381 05:19:59 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:59.381 05:19:59 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:59.381 No valid GPT data, bailing 00:03:59.381 05:19:59 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:59.381 05:19:59 -- scripts/common.sh@394 -- # pt= 00:03:59.381 05:19:59 -- scripts/common.sh@395 -- # return 1 00:03:59.381 05:19:59 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:59.381 1+0 records in 00:03:59.381 1+0 records out 00:03:59.381 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00168435 s, 623 MB/s 00:03:59.381 05:19:59 -- spdk/autotest.sh@105 -- # sync 00:03:59.381 05:19:59 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:59.381 05:19:59 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:59.381 05:19:59 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:04.660 05:20:04 -- spdk/autotest.sh@111 -- # uname -s 00:04:04.660 05:20:04 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:04.660 05:20:04 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:04.660 05:20:04 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:07.954 Hugepages 00:04:07.954 node hugesize free / total 00:04:07.954 node0 1048576kB 0 / 0 00:04:07.954 node0 2048kB 0 / 0 00:04:07.954 node1 1048576kB 0 / 0 00:04:07.954 node1 2048kB 0 / 0 00:04:07.954 00:04:07.954 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:07.954 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:04:07.954 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:04:07.954 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:04:07.954 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:04:07.954 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:04:07.954 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:04:07.954 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:04:07.954 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:04:07.954 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:04:07.954 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:04:07.954 I/OAT 0000:80:04.1 8086 2021 1 
ioatdma - - 00:04:07.954 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:04:07.954 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:04:07.954 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:04:07.955 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:04:07.955 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:04:07.955 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:04:07.955 05:20:07 -- spdk/autotest.sh@117 -- # uname -s 00:04:07.955 05:20:07 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:07.955 05:20:07 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:07.955 05:20:07 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:10.493 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:10.493 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:10.493 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:10.493 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:10.493 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:10.493 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:10.493 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:10.493 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:10.493 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:10.493 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:10.493 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:10.493 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:10.493 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:10.493 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:10.493 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:10.493 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:11.432 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:04:11.432 05:20:11 -- common/autotest_common.sh@1517 -- # sleep 1 00:04:12.813 05:20:12 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:12.813 05:20:12 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:12.813 05:20:12 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:04:12.813 05:20:12 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:04:12.813 05:20:12 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:12.813 05:20:12 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:12.813 05:20:12 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:12.813 05:20:12 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:12.813 05:20:12 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:12.813 05:20:12 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:04:12.813 05:20:12 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:04:12.813 05:20:12 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:15.352 Waiting for block devices as requested 00:04:15.352 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:04:15.612 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:15.612 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:15.612 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:15.612 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:15.871 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:15.871 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:15.871 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:16.130 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:16.130 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:16.130 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 
00:04:16.389 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:16.389 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:16.389 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:16.389 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:16.649 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:16.649 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:16.649 05:20:16 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:16.649 05:20:16 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:04:16.649 05:20:16 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:04:16.649 05:20:16 -- common/autotest_common.sh@1487 -- # grep 0000:5e:00.0/nvme/nvme 00:04:16.649 05:20:16 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:04:16.649 05:20:16 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:04:16.649 05:20:16 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:04:16.649 05:20:16 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:16.649 05:20:16 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:16.649 05:20:16 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:16.649 05:20:16 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:16.649 05:20:16 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:16.649 05:20:16 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:16.649 05:20:16 -- common/autotest_common.sh@1531 -- # oacs=' 0xf' 00:04:16.649 05:20:16 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:16.649 05:20:16 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:16.649 05:20:16 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:16.649 05:20:16 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:16.649 05:20:16 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:16.909 05:20:16 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:16.909 05:20:16 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:16.909 05:20:16 -- common/autotest_common.sh@1543 -- # continue 00:04:16.909 05:20:16 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:16.909 05:20:16 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:16.909 05:20:16 -- common/autotest_common.sh@10 -- # set +x 00:04:16.909 05:20:16 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:16.909 05:20:16 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:16.909 05:20:16 -- common/autotest_common.sh@10 -- # set +x 00:04:16.909 05:20:16 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:20.203 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:20.203 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:20.203 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:20.203 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:20.203 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:20.203 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:20.203 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:20.203 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:20.203 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:20.203 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:20.203 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:20.203 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:20.203 0000:80:04.3 
(8086 2021): ioatdma -> vfio-pci 00:04:20.203 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:20.203 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:20.203 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:20.463 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:04:20.723 05:20:20 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:20.723 05:20:20 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:20.723 05:20:20 -- common/autotest_common.sh@10 -- # set +x 00:04:20.723 05:20:20 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:20.723 05:20:20 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:04:20.723 05:20:20 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:04:20.723 05:20:20 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:20.723 05:20:20 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:04:20.723 05:20:20 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:04:20.723 05:20:20 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:04:20.723 05:20:20 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:20.723 05:20:20 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:20.723 05:20:20 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:20.723 05:20:20 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:20.723 05:20:20 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:20.723 05:20:20 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:20.723 05:20:20 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:04:20.723 05:20:20 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:04:20.723 05:20:20 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:20.723 05:20:20 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:04:20.723 05:20:20 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:04:20.723 05:20:20 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:20.723 05:20:20 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:04:20.723 05:20:20 -- common/autotest_common.sh@1572 -- # (( 1 > 0 )) 00:04:20.723 05:20:20 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:5e:00.0 00:04:20.982 05:20:20 -- common/autotest_common.sh@1579 -- # [[ -z 0000:5e:00.0 ]] 00:04:20.982 05:20:20 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=102185 00:04:20.982 05:20:20 -- common/autotest_common.sh@1585 -- # waitforlisten 102185 00:04:20.982 05:20:20 -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:20.982 05:20:20 -- common/autotest_common.sh@835 -- # '[' -z 102185 ']' 00:04:20.982 05:20:20 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:20.982 05:20:20 -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:20.982 05:20:20 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:20.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:20.982 05:20:20 -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:20.982 05:20:20 -- common/autotest_common.sh@10 -- # set +x 00:04:20.982 [2024-12-13 05:20:20.803419] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
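The opal_revert_cleanup step launches spdk_tgt and blocks in waitforlisten until the RPC socket answers. Reduced to its essentials the pattern looks like the sketch below; the polling loop is an illustration, while the spdk_tgt binary, /var/tmp/spdk.sock and the rpc.py verbs are the ones traced in this log:

    ./build/bin/spdk_tgt &
    spdk_tgt_pid=$!
    # wait until the target answers on its UNIX-domain RPC socket
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$spdk_tgt_pid" 2>/dev/null || { echo "spdk_tgt exited early" >&2; exit 1; }
        sleep 0.5
    done
    # hand the controller at 0000:5e:00.0 to the target as bdev nvme0
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0

A failing bdev_nvme_opal_revert, as seen below with error 18 from the drive, is deliberately tolerated (the trace ends the call with true) so that cleanup can proceed.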
00:04:20.982 [2024-12-13 05:20:20.803477] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid102185 ] 00:04:20.982 [2024-12-13 05:20:20.877313] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:20.982 [2024-12-13 05:20:20.899231] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:21.241 05:20:21 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:21.241 05:20:21 -- common/autotest_common.sh@868 -- # return 0 00:04:21.241 05:20:21 -- common/autotest_common.sh@1587 -- # bdf_id=0 00:04:21.241 05:20:21 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}" 00:04:21.241 05:20:21 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0 00:04:24.534 nvme0n1 00:04:24.535 05:20:24 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:24.535 [2024-12-13 05:20:24.273843] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:04:24.535 [2024-12-13 05:20:24.273874] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:04:24.535 request: 00:04:24.535 { 00:04:24.535 "nvme_ctrlr_name": "nvme0", 00:04:24.535 "password": "test", 00:04:24.535 "method": "bdev_nvme_opal_revert", 00:04:24.535 "req_id": 1 00:04:24.535 } 00:04:24.535 Got JSON-RPC error response 00:04:24.535 response: 00:04:24.535 { 00:04:24.535 "code": -32603, 00:04:24.535 "message": "Internal error" 00:04:24.535 } 00:04:24.535 05:20:24 -- common/autotest_common.sh@1591 -- # true 00:04:24.535 05:20:24 -- common/autotest_common.sh@1592 -- # (( ++bdf_id )) 00:04:24.535 05:20:24 -- common/autotest_common.sh@1595 -- # killprocess 102185 00:04:24.535 05:20:24 -- common/autotest_common.sh@954 -- # '[' -z 102185 ']' 00:04:24.535 05:20:24 -- common/autotest_common.sh@958 -- # kill -0 102185 00:04:24.535 05:20:24 -- common/autotest_common.sh@959 -- # uname 00:04:24.535 05:20:24 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:24.535 05:20:24 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 102185 00:04:24.535 05:20:24 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:24.535 05:20:24 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:24.535 05:20:24 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 102185' 00:04:24.535 killing process with pid 102185 00:04:24.535 05:20:24 -- common/autotest_common.sh@973 -- # kill 102185 00:04:24.535 05:20:24 -- common/autotest_common.sh@978 -- # wait 102185 00:04:24.535 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:24.535 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:24.535 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:24.535 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:24.535 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:24.535 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:24.535 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:24.535 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:24.535 EAL: Unexpected size 
0 of DMA remapping cleared instead of 2097152 00:04:26.437 05:20:25 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:26.437 05:20:25 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:26.437 05:20:25 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:26.437 05:20:25 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:26.437 05:20:25 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:26.437 05:20:25 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:26.437 05:20:25 -- common/autotest_common.sh@10 -- # set +x
00:04:26.437 05:20:25 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:26.437 05:20:25 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:26.437 05:20:25 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:26.437 05:20:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:26.437 05:20:25 -- common/autotest_common.sh@10 -- # set +x
00:04:26.437 ************************************ 00:04:26.437 START TEST env 00:04:26.437 ************************************ 00:04:26.437 05:20:25 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:26.437 * Looking for test storage... 00:04:26.437 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env
00:04:26.437 05:20:26 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:26.437 05:20:26 env -- common/autotest_common.sh@1711 -- # lcov --version 00:04:26.437 05:20:26 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:26.437 05:20:26 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:26.437 05:20:26 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:26.437 05:20:26 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:26.437 05:20:26 env -- scripts/common.sh@334 -- # local ver2 ver2_l
00:04:26.437 05:20:26 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:26.437 05:20:26 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:26.437 05:20:26 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:26.437 05:20:26 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:26.437 05:20:26 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:26.437 05:20:26 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:26.437 05:20:26 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:26.437 05:20:26 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:26.437 05:20:26 env -- scripts/common.sh@344 -- # case "$op" in 00:04:26.437 05:20:26 env -- scripts/common.sh@345 -- # : 1 00:04:26.437 05:20:26 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:26.437 05:20:26 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ?
ver1_l : ver2_l) )) 00:04:26.438 05:20:26 env -- scripts/common.sh@365 -- # decimal 1 00:04:26.438 05:20:26 env -- scripts/common.sh@353 -- # local d=1 00:04:26.438 05:20:26 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:26.438 05:20:26 env -- scripts/common.sh@355 -- # echo 1 00:04:26.438 05:20:26 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:26.438 05:20:26 env -- scripts/common.sh@366 -- # decimal 2 00:04:26.438 05:20:26 env -- scripts/common.sh@353 -- # local d=2 00:04:26.438 05:20:26 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:26.438 05:20:26 env -- scripts/common.sh@355 -- # echo 2 00:04:26.438 05:20:26 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:26.438 05:20:26 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:26.438 05:20:26 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:26.438 05:20:26 env -- scripts/common.sh@368 -- # return 0 00:04:26.438 05:20:26 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:26.438 05:20:26 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:26.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.438 --rc genhtml_branch_coverage=1 00:04:26.438 --rc genhtml_function_coverage=1 00:04:26.438 --rc genhtml_legend=1 00:04:26.438 --rc geninfo_all_blocks=1 00:04:26.438 --rc geninfo_unexecuted_blocks=1 00:04:26.438 00:04:26.438 ' 00:04:26.438 05:20:26 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:26.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.438 --rc genhtml_branch_coverage=1 00:04:26.438 --rc genhtml_function_coverage=1 00:04:26.438 --rc genhtml_legend=1 00:04:26.438 --rc geninfo_all_blocks=1 00:04:26.438 --rc geninfo_unexecuted_blocks=1 00:04:26.438 00:04:26.438 ' 00:04:26.438 05:20:26 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:26.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.438 --rc genhtml_branch_coverage=1 00:04:26.438 --rc genhtml_function_coverage=1 00:04:26.438 --rc genhtml_legend=1 00:04:26.438 --rc geninfo_all_blocks=1 00:04:26.438 --rc geninfo_unexecuted_blocks=1 00:04:26.438 00:04:26.438 ' 00:04:26.438 05:20:26 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:26.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.438 --rc genhtml_branch_coverage=1 00:04:26.438 --rc genhtml_function_coverage=1 00:04:26.438 --rc genhtml_legend=1 00:04:26.438 --rc geninfo_all_blocks=1 00:04:26.438 --rc geninfo_unexecuted_blocks=1 00:04:26.438 00:04:26.438 ' 00:04:26.438 05:20:26 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:26.438 05:20:26 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:26.438 05:20:26 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:26.438 05:20:26 env -- common/autotest_common.sh@10 -- # set +x 00:04:26.438 ************************************ 00:04:26.438 START TEST env_memory 00:04:26.438 ************************************ 00:04:26.438 05:20:26 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:26.438 00:04:26.438 00:04:26.438 CUnit - A unit testing framework for C - Version 2.1-3 00:04:26.438 http://cunit.sourceforge.net/ 00:04:26.438 00:04:26.438 00:04:26.438 Suite: memory 00:04:26.438 Test: alloc and free memory map ...[2024-12-13 05:20:26.226586] 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:26.438 passed 00:04:26.438 Test: mem map translation ...[2024-12-13 05:20:26.245292] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:26.438 [2024-12-13 05:20:26.245306] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:26.438 [2024-12-13 05:20:26.245342] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:26.438 [2024-12-13 05:20:26.245349] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:26.438 passed 00:04:26.438 Test: mem map registration ...[2024-12-13 05:20:26.281569] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:26.438 [2024-12-13 05:20:26.281593] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:26.438 passed 00:04:26.438 Test: mem map adjacent registrations ...passed 00:04:26.438 00:04:26.438 Run Summary: Type Total Ran Passed Failed Inactive 00:04:26.438 suites 1 1 n/a 0 0 00:04:26.438 tests 4 4 4 0 0 00:04:26.438 asserts 152 152 152 0 n/a 00:04:26.438 00:04:26.438 Elapsed time = 0.123 seconds 00:04:26.438 00:04:26.438 real 0m0.132s 00:04:26.438 user 0m0.121s 00:04:26.438 sys 0m0.011s 00:04:26.438 05:20:26 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:26.438 05:20:26 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:26.438 ************************************ 00:04:26.438 END TEST env_memory 00:04:26.438 ************************************ 00:04:26.438 05:20:26 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:26.438 05:20:26 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:26.438 05:20:26 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:26.438 05:20:26 env -- common/autotest_common.sh@10 -- # set +x 00:04:26.438 ************************************ 00:04:26.438 START TEST env_vtophys 00:04:26.438 ************************************ 00:04:26.438 05:20:26 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:26.438 EAL: lib.eal log level changed from notice to debug 00:04:26.438 EAL: Detected lcore 0 as core 0 on socket 0 00:04:26.438 EAL: Detected lcore 1 as core 1 on socket 0 00:04:26.438 EAL: Detected lcore 2 as core 2 on socket 0 00:04:26.438 EAL: Detected lcore 3 as core 3 on socket 0 00:04:26.438 EAL: Detected lcore 4 as core 4 on socket 0 00:04:26.438 EAL: Detected lcore 5 as core 5 on socket 0 00:04:26.438 EAL: Detected lcore 6 as core 6 on socket 0 00:04:26.438 EAL: Detected lcore 7 as core 8 on socket 0 00:04:26.438 EAL: Detected lcore 8 as core 9 on socket 0 00:04:26.438 EAL: Detected lcore 9 as core 10 on socket 0 00:04:26.438 EAL: Detected lcore 10 as 
core 11 on socket 0 00:04:26.438 EAL: Detected lcore 11 as core 12 on socket 0 00:04:26.438 EAL: Detected lcore 12 as core 13 on socket 0 00:04:26.438 EAL: Detected lcore 13 as core 16 on socket 0 00:04:26.438 EAL: Detected lcore 14 as core 17 on socket 0 00:04:26.438 EAL: Detected lcore 15 as core 18 on socket 0 00:04:26.438 EAL: Detected lcore 16 as core 19 on socket 0 00:04:26.438 EAL: Detected lcore 17 as core 20 on socket 0 00:04:26.438 EAL: Detected lcore 18 as core 21 on socket 0 00:04:26.438 EAL: Detected lcore 19 as core 25 on socket 0 00:04:26.438 EAL: Detected lcore 20 as core 26 on socket 0 00:04:26.438 EAL: Detected lcore 21 as core 27 on socket 0 00:04:26.438 EAL: Detected lcore 22 as core 28 on socket 0 00:04:26.438 EAL: Detected lcore 23 as core 29 on socket 0 00:04:26.438 EAL: Detected lcore 24 as core 0 on socket 1 00:04:26.438 EAL: Detected lcore 25 as core 1 on socket 1 00:04:26.438 EAL: Detected lcore 26 as core 2 on socket 1 00:04:26.438 EAL: Detected lcore 27 as core 3 on socket 1 00:04:26.438 EAL: Detected lcore 28 as core 4 on socket 1 00:04:26.438 EAL: Detected lcore 29 as core 5 on socket 1 00:04:26.438 EAL: Detected lcore 30 as core 6 on socket 1 00:04:26.438 EAL: Detected lcore 31 as core 8 on socket 1 00:04:26.438 EAL: Detected lcore 32 as core 9 on socket 1 00:04:26.438 EAL: Detected lcore 33 as core 10 on socket 1 00:04:26.438 EAL: Detected lcore 34 as core 11 on socket 1 00:04:26.438 EAL: Detected lcore 35 as core 12 on socket 1 00:04:26.438 EAL: Detected lcore 36 as core 13 on socket 1 00:04:26.438 EAL: Detected lcore 37 as core 16 on socket 1 00:04:26.438 EAL: Detected lcore 38 as core 17 on socket 1 00:04:26.438 EAL: Detected lcore 39 as core 18 on socket 1 00:04:26.438 EAL: Detected lcore 40 as core 19 on socket 1 00:04:26.438 EAL: Detected lcore 41 as core 20 on socket 1 00:04:26.438 EAL: Detected lcore 42 as core 21 on socket 1 00:04:26.438 EAL: Detected lcore 43 as core 25 on socket 1 00:04:26.438 EAL: Detected lcore 44 as core 26 on socket 1 00:04:26.438 EAL: Detected lcore 45 as core 27 on socket 1 00:04:26.438 EAL: Detected lcore 46 as core 28 on socket 1 00:04:26.438 EAL: Detected lcore 47 as core 29 on socket 1 00:04:26.438 EAL: Detected lcore 48 as core 0 on socket 0 00:04:26.438 EAL: Detected lcore 49 as core 1 on socket 0 00:04:26.438 EAL: Detected lcore 50 as core 2 on socket 0 00:04:26.438 EAL: Detected lcore 51 as core 3 on socket 0 00:04:26.438 EAL: Detected lcore 52 as core 4 on socket 0 00:04:26.438 EAL: Detected lcore 53 as core 5 on socket 0 00:04:26.438 EAL: Detected lcore 54 as core 6 on socket 0 00:04:26.438 EAL: Detected lcore 55 as core 8 on socket 0 00:04:26.438 EAL: Detected lcore 56 as core 9 on socket 0 00:04:26.438 EAL: Detected lcore 57 as core 10 on socket 0 00:04:26.438 EAL: Detected lcore 58 as core 11 on socket 0 00:04:26.438 EAL: Detected lcore 59 as core 12 on socket 0 00:04:26.438 EAL: Detected lcore 60 as core 13 on socket 0 00:04:26.438 EAL: Detected lcore 61 as core 16 on socket 0 00:04:26.438 EAL: Detected lcore 62 as core 17 on socket 0 00:04:26.438 EAL: Detected lcore 63 as core 18 on socket 0 00:04:26.438 EAL: Detected lcore 64 as core 19 on socket 0 00:04:26.438 EAL: Detected lcore 65 as core 20 on socket 0 00:04:26.438 EAL: Detected lcore 66 as core 21 on socket 0 00:04:26.438 EAL: Detected lcore 67 as core 25 on socket 0 00:04:26.438 EAL: Detected lcore 68 as core 26 on socket 0 00:04:26.438 EAL: Detected lcore 69 as core 27 on socket 0 00:04:26.438 EAL: Detected lcore 70 as core 28 on socket 0 00:04:26.438 
EAL: Detected lcore 71 as core 29 on socket 0 00:04:26.438 EAL: Detected lcore 72 as core 0 on socket 1 00:04:26.438 EAL: Detected lcore 73 as core 1 on socket 1 00:04:26.438 EAL: Detected lcore 74 as core 2 on socket 1 00:04:26.438 EAL: Detected lcore 75 as core 3 on socket 1 00:04:26.438 EAL: Detected lcore 76 as core 4 on socket 1 00:04:26.438 EAL: Detected lcore 77 as core 5 on socket 1 00:04:26.439 EAL: Detected lcore 78 as core 6 on socket 1 00:04:26.439 EAL: Detected lcore 79 as core 8 on socket 1 00:04:26.439 EAL: Detected lcore 80 as core 9 on socket 1 00:04:26.439 EAL: Detected lcore 81 as core 10 on socket 1 00:04:26.439 EAL: Detected lcore 82 as core 11 on socket 1 00:04:26.439 EAL: Detected lcore 83 as core 12 on socket 1 00:04:26.439 EAL: Detected lcore 84 as core 13 on socket 1 00:04:26.439 EAL: Detected lcore 85 as core 16 on socket 1 00:04:26.439 EAL: Detected lcore 86 as core 17 on socket 1 00:04:26.439 EAL: Detected lcore 87 as core 18 on socket 1 00:04:26.439 EAL: Detected lcore 88 as core 19 on socket 1 00:04:26.439 EAL: Detected lcore 89 as core 20 on socket 1 00:04:26.439 EAL: Detected lcore 90 as core 21 on socket 1 00:04:26.439 EAL: Detected lcore 91 as core 25 on socket 1 00:04:26.439 EAL: Detected lcore 92 as core 26 on socket 1 00:04:26.439 EAL: Detected lcore 93 as core 27 on socket 1 00:04:26.439 EAL: Detected lcore 94 as core 28 on socket 1 00:04:26.439 EAL: Detected lcore 95 as core 29 on socket 1 00:04:26.439 EAL: Maximum logical cores by configuration: 128 00:04:26.439 EAL: Detected CPU lcores: 96 00:04:26.439 EAL: Detected NUMA nodes: 2 00:04:26.439 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:04:26.439 EAL: Detected shared linkage of DPDK 00:04:26.439 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23.0 00:04:26.439 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23.0 00:04:26.439 EAL: Registered [vdev] bus. 00:04:26.439 EAL: bus.vdev log level changed from disabled to notice 00:04:26.439 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23.0 00:04:26.439 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23.0 00:04:26.439 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:04:26.439 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:04:26.439 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:04:26.439 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:04:26.439 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:04:26.439 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:04:26.439 EAL: No shared files mode enabled, IPC will be disabled 00:04:26.439 EAL: No shared files mode enabled, IPC is disabled 00:04:26.439 EAL: Bus pci wants IOVA as 'DC' 00:04:26.439 EAL: Bus vdev wants IOVA as 'DC' 00:04:26.439 EAL: Buses did not request a specific IOVA mode. 00:04:26.439 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:26.439 EAL: Selected IOVA mode 'VA' 00:04:26.439 EAL: Probing VFIO support... 
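EAL prints one line per logical CPU above, mapping each lcore to a physical core and socket. The kernel exposes the same topology, so the table can be cross-checked from sysfs; these attributes are standard on Linux, and only the output format here approximates EAL's:

    for cpu in /sys/devices/system/cpu/cpu[0-9]*; do
        lcore=${cpu##*cpu}
        core=$(cat "$cpu/topology/core_id")
        socket=$(cat "$cpu/topology/physical_package_id")
        echo "lcore $lcore as core $core on socket $socket"
    done

On this node that yields the 96 lcores across 2 sockets summarized in the EAL lines above.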
00:04:26.439 EAL: IOMMU type 1 (Type 1) is supported 00:04:26.439 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:26.439 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:26.439 EAL: VFIO support initialized 00:04:26.439 EAL: Ask a virtual area of 0x2e000 bytes 00:04:26.439 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:26.439 EAL: Setting up physically contiguous memory... 00:04:26.439 EAL: Setting maximum number of open files to 524288 00:04:26.439 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:26.439 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:26.439 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:26.439 EAL: Ask a virtual area of 0x61000 bytes 00:04:26.439 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:26.439 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:26.439 EAL: Ask a virtual area of 0x400000000 bytes 00:04:26.439 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:26.439 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:26.439 EAL: Ask a virtual area of 0x61000 bytes 00:04:26.439 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:26.439 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:26.439 EAL: Ask a virtual area of 0x400000000 bytes 00:04:26.439 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:26.439 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:26.439 EAL: Ask a virtual area of 0x61000 bytes 00:04:26.439 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:26.439 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:26.439 EAL: Ask a virtual area of 0x400000000 bytes 00:04:26.439 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:26.439 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:26.439 EAL: Ask a virtual area of 0x61000 bytes 00:04:26.439 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:26.439 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:26.439 EAL: Ask a virtual area of 0x400000000 bytes 00:04:26.439 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:26.439 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:26.439 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:26.439 EAL: Ask a virtual area of 0x61000 bytes 00:04:26.439 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:26.439 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:26.439 EAL: Ask a virtual area of 0x400000000 bytes 00:04:26.439 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:26.439 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:26.439 EAL: Ask a virtual area of 0x61000 bytes 00:04:26.439 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:26.439 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:26.439 EAL: Ask a virtual area of 0x400000000 bytes 00:04:26.439 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:26.439 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:26.439 EAL: Ask a virtual area of 0x61000 bytes 00:04:26.439 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:26.439 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:26.439 EAL: Ask a virtual area of 0x400000000 bytes 00:04:26.439 EAL: Virtual area found at 
0x201800e00000 (size = 0x400000000) 00:04:26.439 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:26.439 EAL: Ask a virtual area of 0x61000 bytes 00:04:26.439 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:26.439 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:26.439 EAL: Ask a virtual area of 0x400000000 bytes 00:04:26.439 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:04:26.439 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:26.439 EAL: Hugepages will be freed exactly as allocated. 00:04:26.439 EAL: No shared files mode enabled, IPC is disabled 00:04:26.439 EAL: No shared files mode enabled, IPC is disabled 00:04:26.439 EAL: TSC frequency is ~2100000 KHz 00:04:26.439 EAL: Main lcore 0 is ready (tid=7f1f0bb78a00;cpuset=[0]) 00:04:26.439 EAL: Trying to obtain current memory policy. 00:04:26.439 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:26.439 EAL: Restoring previous memory policy: 0 00:04:26.439 EAL: request: mp_malloc_sync 00:04:26.439 EAL: No shared files mode enabled, IPC is disabled 00:04:26.439 EAL: Heap on socket 0 was expanded by 2MB 00:04:26.439 EAL: PCI device 0000:3d:00.0 on NUMA socket 0 00:04:26.439 EAL: probe driver: 8086:37d2 net_i40e 00:04:26.439 EAL: Not managed by a supported kernel driver, skipped 00:04:26.439 EAL: PCI device 0000:3d:00.1 on NUMA socket 0 00:04:26.439 EAL: probe driver: 8086:37d2 net_i40e 00:04:26.439 EAL: Not managed by a supported kernel driver, skipped 00:04:26.439 EAL: No shared files mode enabled, IPC is disabled 00:04:26.698 EAL: No shared files mode enabled, IPC is disabled 00:04:26.698 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:26.698 EAL: Mem event callback 'spdk:(nil)' registered 00:04:26.698 00:04:26.698 00:04:26.698 CUnit - A unit testing framework for C - Version 2.1-3 00:04:26.698 http://cunit.sourceforge.net/ 00:04:26.698 00:04:26.698 00:04:26.698 Suite: components_suite 00:04:26.698 Test: vtophys_malloc_test ...passed 00:04:26.699 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:26.699 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:26.699 EAL: Restoring previous memory policy: 4 00:04:26.699 EAL: Calling mem event callback 'spdk:(nil)' 00:04:26.699 EAL: request: mp_malloc_sync 00:04:26.699 EAL: No shared files mode enabled, IPC is disabled 00:04:26.699 EAL: Heap on socket 0 was expanded by 4MB 00:04:26.699 EAL: Calling mem event callback 'spdk:(nil)' 00:04:26.699 EAL: request: mp_malloc_sync 00:04:26.699 EAL: No shared files mode enabled, IPC is disabled 00:04:26.699 EAL: Heap on socket 0 was shrunk by 4MB 00:04:26.699 EAL: Trying to obtain current memory policy. 00:04:26.699 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:26.699 EAL: Restoring previous memory policy: 4 00:04:26.699 EAL: Calling mem event callback 'spdk:(nil)' 00:04:26.699 EAL: request: mp_malloc_sync 00:04:26.699 EAL: No shared files mode enabled, IPC is disabled 00:04:26.699 EAL: Heap on socket 0 was expanded by 6MB 00:04:26.699 EAL: Calling mem event callback 'spdk:(nil)' 00:04:26.699 EAL: request: mp_malloc_sync 00:04:26.699 EAL: No shared files mode enabled, IPC is disabled 00:04:26.699 EAL: Heap on socket 0 was shrunk by 6MB 00:04:26.699 EAL: Trying to obtain current memory policy. 
00:04:26.699 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:26.699 EAL: Restoring previous memory policy: 4 00:04:26.699 EAL: Calling mem event callback 'spdk:(nil)' 00:04:26.699 EAL: request: mp_malloc_sync 00:04:26.699 EAL: No shared files mode enabled, IPC is disabled 00:04:26.699 EAL: Heap on socket 0 was expanded by 10MB 00:04:26.699 EAL: Calling mem event callback 'spdk:(nil)' 00:04:26.699 EAL: request: mp_malloc_sync 00:04:26.699 EAL: No shared files mode enabled, IPC is disabled 00:04:26.699 EAL: Heap on socket 0 was shrunk by 10MB 00:04:26.699 EAL: Trying to obtain current memory policy. 00:04:26.699 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:26.699 EAL: Restoring previous memory policy: 4 00:04:26.699 EAL: Calling mem event callback 'spdk:(nil)' 00:04:26.699 EAL: request: mp_malloc_sync 00:04:26.699 EAL: No shared files mode enabled, IPC is disabled 00:04:26.699 EAL: Heap on socket 0 was expanded by 18MB 00:04:26.699 EAL: Calling mem event callback 'spdk:(nil)' 00:04:26.699 EAL: request: mp_malloc_sync 00:04:26.699 EAL: No shared files mode enabled, IPC is disabled 00:04:26.699 EAL: Heap on socket 0 was shrunk by 18MB 00:04:26.699 EAL: Trying to obtain current memory policy. 00:04:26.699 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:26.699 EAL: Restoring previous memory policy: 4 00:04:26.699 EAL: Calling mem event callback 'spdk:(nil)' 00:04:26.699 EAL: request: mp_malloc_sync 00:04:26.699 EAL: No shared files mode enabled, IPC is disabled 00:04:26.699 EAL: Heap on socket 0 was expanded by 34MB 00:04:26.699 EAL: Calling mem event callback 'spdk:(nil)' 00:04:26.699 EAL: request: mp_malloc_sync 00:04:26.699 EAL: No shared files mode enabled, IPC is disabled 00:04:26.699 EAL: Heap on socket 0 was shrunk by 34MB 00:04:26.699 EAL: Trying to obtain current memory policy. 00:04:26.699 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:26.699 EAL: Restoring previous memory policy: 4 00:04:26.699 EAL: Calling mem event callback 'spdk:(nil)' 00:04:26.699 EAL: request: mp_malloc_sync 00:04:26.699 EAL: No shared files mode enabled, IPC is disabled 00:04:26.699 EAL: Heap on socket 0 was expanded by 66MB 00:04:26.699 EAL: Calling mem event callback 'spdk:(nil)' 00:04:26.699 EAL: request: mp_malloc_sync 00:04:26.699 EAL: No shared files mode enabled, IPC is disabled 00:04:26.699 EAL: Heap on socket 0 was shrunk by 66MB 00:04:26.699 EAL: Trying to obtain current memory policy. 00:04:26.699 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:26.699 EAL: Restoring previous memory policy: 4 00:04:26.699 EAL: Calling mem event callback 'spdk:(nil)' 00:04:26.699 EAL: request: mp_malloc_sync 00:04:26.699 EAL: No shared files mode enabled, IPC is disabled 00:04:26.699 EAL: Heap on socket 0 was expanded by 130MB 00:04:26.699 EAL: Calling mem event callback 'spdk:(nil)' 00:04:26.699 EAL: request: mp_malloc_sync 00:04:26.699 EAL: No shared files mode enabled, IPC is disabled 00:04:26.699 EAL: Heap on socket 0 was shrunk by 130MB 00:04:26.699 EAL: Trying to obtain current memory policy. 
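Each expand/shrink pair above claims and releases 2048 kB hugepages via the registered 'spdk' mem event callback. A way to watch those kernel counters from a second shell while the test runs; the paths are standard procfs/sysfs, and the one-second interval is arbitrary:

    while sleep 1; do
        grep -E 'HugePages_(Total|Free)' /proc/meminfo
        for node in /sys/devices/system/node/node[0-9]*; do
            echo "${node##*/}: $(cat "$node/hugepages/hugepages-2048kB/free_hugepages") free"
        done
    done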
00:04:26.699 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:26.699 EAL: Restoring previous memory policy: 4 00:04:26.699 EAL: Calling mem event callback 'spdk:(nil)' 00:04:26.699 EAL: request: mp_malloc_sync 00:04:26.699 EAL: No shared files mode enabled, IPC is disabled 00:04:26.699 EAL: Heap on socket 0 was expanded by 258MB 00:04:26.699 EAL: Calling mem event callback 'spdk:(nil)' 00:04:26.958 EAL: request: mp_malloc_sync 00:04:26.958 EAL: No shared files mode enabled, IPC is disabled 00:04:26.958 EAL: Heap on socket 0 was shrunk by 258MB 00:04:26.958 EAL: Trying to obtain current memory policy. 00:04:26.958 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:26.958 EAL: Restoring previous memory policy: 4 00:04:26.958 EAL: Calling mem event callback 'spdk:(nil)' 00:04:26.958 EAL: request: mp_malloc_sync 00:04:26.958 EAL: No shared files mode enabled, IPC is disabled 00:04:26.958 EAL: Heap on socket 0 was expanded by 514MB 00:04:26.958 EAL: Calling mem event callback 'spdk:(nil)' 00:04:27.217 EAL: request: mp_malloc_sync 00:04:27.217 EAL: No shared files mode enabled, IPC is disabled 00:04:27.217 EAL: Heap on socket 0 was shrunk by 514MB 00:04:27.217 EAL: Trying to obtain current memory policy. 00:04:27.217 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:27.217 EAL: Restoring previous memory policy: 4 00:04:27.217 EAL: Calling mem event callback 'spdk:(nil)' 00:04:27.217 EAL: request: mp_malloc_sync 00:04:27.217 EAL: No shared files mode enabled, IPC is disabled 00:04:27.217 EAL: Heap on socket 0 was expanded by 1026MB 00:04:27.476 EAL: Calling mem event callback 'spdk:(nil)' 00:04:27.476 EAL: request: mp_malloc_sync 00:04:27.476 EAL: No shared files mode enabled, IPC is disabled 00:04:27.476 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:27.476 passed 00:04:27.476 00:04:27.476 Run Summary: Type Total Ran Passed Failed Inactive 00:04:27.476 suites 1 1 n/a 0 0 00:04:27.476 tests 2 2 2 0 0 00:04:27.476 asserts 497 497 497 0 n/a 00:04:27.476 00:04:27.476 Elapsed time = 0.968 seconds 00:04:27.476 EAL: Calling mem event callback 'spdk:(nil)' 00:04:27.476 EAL: request: mp_malloc_sync 00:04:27.476 EAL: No shared files mode enabled, IPC is disabled 00:04:27.476 EAL: Heap on socket 0 was shrunk by 2MB 00:04:27.476 EAL: No shared files mode enabled, IPC is disabled 00:04:27.476 EAL: No shared files mode enabled, IPC is disabled 00:04:27.476 EAL: No shared files mode enabled, IPC is disabled 00:04:27.736 00:04:27.736 real 0m1.098s 00:04:27.736 user 0m0.647s 00:04:27.736 sys 0m0.426s 00:04:27.736 05:20:27 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:27.736 05:20:27 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:27.736 ************************************ 00:04:27.736 END TEST env_vtophys 00:04:27.736 ************************************ 00:04:27.736 05:20:27 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:27.736 05:20:27 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:27.736 05:20:27 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:27.736 05:20:27 env -- common/autotest_common.sh@10 -- # set +x 00:04:27.736 ************************************ 00:04:27.736 START TEST env_pci 00:04:27.736 ************************************ 00:04:27.736 05:20:27 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:27.736 00:04:27.736 00:04:27.736 CUnit - A unit testing 
framework for C - Version 2.1-3 00:04:27.736 http://cunit.sourceforge.net/ 00:04:27.736 00:04:27.736 00:04:27.736 Suite: pci 00:04:27.736 Test: pci_hook ...[2024-12-13 05:20:27.580822] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 103430 has claimed it 00:04:27.736 EAL: Cannot find device (10000:00:01.0) 00:04:27.736 EAL: Failed to attach device on primary process 00:04:27.736 passed 00:04:27.736 00:04:27.736 Run Summary: Type Total Ran Passed Failed Inactive 00:04:27.736 suites 1 1 n/a 0 0 00:04:27.736 tests 1 1 1 0 0 00:04:27.736 asserts 25 25 25 0 n/a 00:04:27.736 00:04:27.736 Elapsed time = 0.026 seconds 00:04:27.736 00:04:27.736 real 0m0.044s 00:04:27.736 user 0m0.013s 00:04:27.736 sys 0m0.030s 00:04:27.736 05:20:27 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:27.736 05:20:27 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:27.736 ************************************ 00:04:27.736 END TEST env_pci 00:04:27.736 ************************************ 00:04:27.736 05:20:27 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:27.736 05:20:27 env -- env/env.sh@15 -- # uname 00:04:27.736 05:20:27 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:27.736 05:20:27 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:27.736 05:20:27 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:27.736 05:20:27 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:04:27.736 05:20:27 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:27.736 05:20:27 env -- common/autotest_common.sh@10 -- # set +x 00:04:27.736 ************************************ 00:04:27.736 START TEST env_dpdk_post_init 00:04:27.736 ************************************ 00:04:27.736 05:20:27 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:27.736 EAL: Detected CPU lcores: 96 00:04:27.736 EAL: Detected NUMA nodes: 2 00:04:27.736 EAL: Detected shared linkage of DPDK 00:04:27.736 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:27.736 EAL: Selected IOVA mode 'VA' 00:04:27.736 EAL: VFIO support initialized 00:04:27.736 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:27.996 EAL: Using IOMMU type 1 (Type 1) 00:04:27.996 EAL: Ignore mapping IO port bar(1) 00:04:27.996 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:04:27.996 EAL: Ignore mapping IO port bar(1) 00:04:27.996 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:04:27.996 EAL: Ignore mapping IO port bar(1) 00:04:27.996 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:04:27.996 EAL: Ignore mapping IO port bar(1) 00:04:27.996 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:04:27.996 EAL: Ignore mapping IO port bar(1) 00:04:27.996 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:04:27.996 EAL: Ignore mapping IO port bar(1) 00:04:27.996 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:04:27.996 EAL: Ignore mapping IO port bar(1) 00:04:27.996 EAL: Probe PCI driver: spdk_ioat 
00:04:27.736 05:20:27 env -- env/env.sh@14 -- # argv='-c 0x1 '
00:04:27.736 05:20:27 env -- env/env.sh@15 -- # uname
00:04:27.736 05:20:27 env -- env/env.sh@15 -- # '[' Linux = Linux ']'
00:04:27.736 05:20:27 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:04:27.736 05:20:27 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:04:27.736 05:20:27 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:04:27.736 05:20:27 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:27.736 05:20:27 env -- common/autotest_common.sh@10 -- # set +x
00:04:27.736 ************************************
00:04:27.736 START TEST env_dpdk_post_init
00:04:27.736 ************************************
00:04:27.736 05:20:27 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:04:27.736 EAL: Detected CPU lcores: 96
00:04:27.736 EAL: Detected NUMA nodes: 2
00:04:27.736 EAL: Detected shared linkage of DPDK
00:04:27.736 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:04:27.736 EAL: Selected IOVA mode 'VA'
00:04:27.736 EAL: VFIO support initialized
00:04:27.736 TELEMETRY: No legacy callbacks, legacy socket not created
00:04:27.996 EAL: Using IOMMU type 1 (Type 1)
00:04:27.996 EAL: Ignore mapping IO port bar(1)
00:04:27.996 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0)
00:04:27.996 EAL: Ignore mapping IO port bar(1)
00:04:27.996 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0)
00:04:27.996 EAL: Ignore mapping IO port bar(1)
00:04:27.996 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0)
00:04:27.996 EAL: Ignore mapping IO port bar(1)
00:04:27.996 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0)
00:04:27.996 EAL: Ignore mapping IO port bar(1)
00:04:27.996 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0)
00:04:27.996 EAL: Ignore mapping IO port bar(1)
00:04:27.996 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0)
00:04:27.996 EAL: Ignore mapping IO port bar(1)
00:04:27.996 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0)
00:04:27.996 EAL: Ignore mapping IO port bar(1)
00:04:27.996 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0)
00:04:28.934 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0)
00:04:28.934 EAL: Ignore mapping IO port bar(1)
00:04:28.934 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1)
00:04:28.934 EAL: Ignore mapping IO port bar(1)
00:04:28.934 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1)
00:04:28.934 EAL: Ignore mapping IO port bar(1)
00:04:28.934 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1)
00:04:28.934 EAL: Ignore mapping IO port bar(1)
00:04:28.934 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1)
00:04:28.934 EAL: Ignore mapping IO port bar(1)
00:04:28.934 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1)
00:04:28.934 EAL: Ignore mapping IO port bar(1)
00:04:28.934 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1)
00:04:28.934 EAL: Ignore mapping IO port bar(1)
00:04:28.934 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1)
00:04:28.934 EAL: Ignore mapping IO port bar(1)
00:04:28.934 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1)
00:04:32.223 EAL: Releasing PCI mapped resource for 0000:5e:00.0
00:04:32.223 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000
00:04:32.223 Starting DPDK initialization...
00:04:32.223 Starting SPDK post initialization...
00:04:32.223 SPDK NVMe probe
00:04:32.223 Attaching to 0000:5e:00.0
00:04:32.223 Attached to 0000:5e:00.0
00:04:32.223 Cleaning up...
00:04:32.223
00:04:32.223 real 0m4.348s
00:04:32.223 user 0m3.269s
00:04:32.223 sys 0m0.148s
00:04:32.223 05:20:32 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:32.223 05:20:32 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x
00:04:32.223 ************************************
00:04:32.223 END TEST env_dpdk_post_init
00:04:32.223 ************************************
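[Editor's note] The "SPDK NVMe probe / Attaching to 0000:5e:00.0 / Attached / Cleaning up..." lines above correspond to SPDK's standard probe/attach flow. A rough sketch of that flow with the public API, in the style of SPDK's hello_world example; this is a standalone illustration, not the env_dpdk_post_init test binary itself:

```c
/* Rough sketch of the probe/attach flow behind the "SPDK NVMe probe" lines. */
#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

static bool
probe_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
	 struct spdk_nvme_ctrlr_opts *opts)
{
	printf("Attaching to %s\n", trid->traddr);
	return true;               /* claim every controller we are offered */
}

static void
attach_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
	  struct spdk_nvme_ctrlr *ctrlr, const struct spdk_nvme_ctrlr_opts *opts)
{
	printf("Attached to %s\n", trid->traddr);
	spdk_nvme_detach(ctrlr);   /* corresponds to "Cleaning up..." */
}

int
main(int argc, char **argv)
{
	struct spdk_env_opts opts;

	spdk_env_opts_init(&opts);
	opts.name = "nvme_probe_sketch";   /* hypothetical app name */
	if (spdk_env_init(&opts) < 0)
		return 1;

	/* Scans the local PCIe bus, invoking probe_cb/attach_cb per controller. */
	return spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL) ? 1 : 0;
}
```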
00:04:32.223 05:20:32 env -- env/env.sh@26 -- # uname
00:04:32.223 05:20:32 env -- env/env.sh@26 -- # '[' Linux = Linux ']'
00:04:32.223 05:20:32 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:04:32.223 05:20:32 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:32.223 05:20:32 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:32.223 05:20:32 env -- common/autotest_common.sh@10 -- # set +x
00:04:32.223 ************************************
00:04:32.223 START TEST env_mem_callbacks
00:04:32.223 ************************************
00:04:32.223 05:20:32 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:04:32.223 EAL: Detected CPU lcores: 96
00:04:32.223 EAL: Detected NUMA nodes: 2
00:04:32.223 EAL: Detected shared linkage of DPDK
00:04:32.223 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:04:32.223 EAL: Selected IOVA mode 'VA'
00:04:32.223 EAL: VFIO support initialized
00:04:32.223 TELEMETRY: No legacy callbacks, legacy socket not created
00:04:32.223
00:04:32.223
00:04:32.223 CUnit - A unit testing framework for C - Version 2.1-3
00:04:32.223 http://cunit.sourceforge.net/
00:04:32.223
00:04:32.223
00:04:32.223 Suite: memory
00:04:32.223 Test: test ...
00:04:32.223 register 0x200000200000 2097152
00:04:32.223 malloc 3145728
00:04:32.223 register 0x200000400000 4194304
00:04:32.223 buf 0x200000500000 len 3145728 PASSED
00:04:32.223 malloc 64
00:04:32.223 buf 0x2000004fff40 len 64 PASSED
00:04:32.223 malloc 4194304
00:04:32.223 register 0x200000800000 6291456
00:04:32.223 buf 0x200000a00000 len 4194304 PASSED
00:04:32.223 free 0x200000500000 3145728
00:04:32.223 free 0x2000004fff40 64
00:04:32.223 unregister 0x200000400000 4194304 PASSED
00:04:32.223 free 0x200000a00000 4194304
00:04:32.223 unregister 0x200000800000 6291456 PASSED
00:04:32.223 malloc 8388608
00:04:32.223 register 0x200000400000 10485760
00:04:32.223 buf 0x200000600000 len 8388608 PASSED
00:04:32.223 free 0x200000600000 8388608
00:04:32.223 unregister 0x200000400000 10485760 PASSED
00:04:32.223 passed
00:04:32.223
00:04:32.223 Run Summary: Type Total Ran Passed Failed Inactive
00:04:32.223 suites 1 1 n/a 0 0
00:04:32.223 tests 1 1 1 0 0
00:04:32.223 asserts 15 15 15 0 n/a
00:04:32.223
00:04:32.223 Elapsed time = 0.008 seconds
00:04:32.223
00:04:32.223 real 0m0.056s
00:04:32.223 user 0m0.017s
00:04:32.223 sys 0m0.039s
00:04:32.223 05:20:32 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:32.223 05:20:32 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x
00:04:32.223 ************************************
00:04:32.223 END TEST env_mem_callbacks
00:04:32.223 ************************************
00:04:32.223
00:04:32.223 real 0m6.223s
00:04:32.223 user 0m4.311s
00:04:32.223 sys 0m0.991s
00:04:32.223 05:20:32 env -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:32.223 05:20:32 env -- common/autotest_common.sh@10 -- # set +x
00:04:32.223 ************************************
00:04:32.223 END TEST env
00:04:32.223 ************************************
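[Editor's note] The register/unregister pairs traced in the memory suite above exercise SPDK's memory maps (spdk_mem_register/spdk_mem_unregister); only registered memory can be translated to a physical address. A hedged sketch of the related public API; the 2 MiB buffer size and program name are illustrative values, not from the test:

```c
/* Sketch of registered-memory translation, related to the register/
 * unregister lines in the memory suite above. */
#include <inttypes.h>
#include <stdio.h>
#include "spdk/env.h"

int
main(void)
{
	struct spdk_env_opts opts;

	spdk_env_opts_init(&opts);
	opts.name = "mem_cb_sketch";   /* hypothetical app name */
	if (spdk_env_init(&opts) < 0)
		return 1;

	/* DMA-safe allocation; already registered with the SPDK mem maps. */
	void *buf = spdk_dma_zmalloc(2 * 1024 * 1024, 2 * 1024 * 1024, NULL);
	if (buf == NULL)
		return 1;

	/* vtophys only succeeds for registered memory. */
	uint64_t size = 2 * 1024 * 1024;
	printf("vaddr %p -> paddr 0x%" PRIx64 "\n", buf, spdk_vtophys(buf, &size));

	spdk_dma_free(buf);
	return 0;
}
```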
00:04:32.223 05:20:32 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
05:20:32 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
05:20:32 -- common/autotest_common.sh@1111 -- # xtrace_disable
05:20:32 -- common/autotest_common.sh@10 -- # set +x
00:04:32.483 ************************************
00:04:32.483 START TEST rpc
00:04:32.483 ************************************
00:04:32.483 05:20:32 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:04:32.483 * Looking for test storage...
00:04:32.483 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:04:32.483 05:20:32 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:04:32.483 05:20:32 rpc -- common/autotest_common.sh@1711 -- # lcov --version
00:04:32.483 05:20:32 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:04:32.483 05:20:32 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:04:32.483 05:20:32 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:04:32.483 05:20:32 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:04:32.483 05:20:32 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:04:32.483 05:20:32 rpc -- scripts/common.sh@336 -- # IFS=.-:
00:04:32.483 05:20:32 rpc -- scripts/common.sh@336 -- # read -ra ver1
00:04:32.483 05:20:32 rpc -- scripts/common.sh@337 -- # IFS=.-:
00:04:32.483 05:20:32 rpc -- scripts/common.sh@337 -- # read -ra ver2
00:04:32.483 05:20:32 rpc -- scripts/common.sh@338 -- # local 'op=<'
00:04:32.483 05:20:32 rpc -- scripts/common.sh@340 -- # ver1_l=2
00:04:32.483 05:20:32 rpc -- scripts/common.sh@341 -- # ver2_l=1
00:04:32.483 05:20:32 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:04:32.483 05:20:32 rpc -- scripts/common.sh@344 -- # case "$op" in
00:04:32.483 05:20:32 rpc -- scripts/common.sh@345 -- # : 1
00:04:32.483 05:20:32 rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:04:32.483 05:20:32 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:32.483 05:20:32 rpc -- scripts/common.sh@365 -- # decimal 1
00:04:32.483 05:20:32 rpc -- scripts/common.sh@353 -- # local d=1
00:04:32.483 05:20:32 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:32.483 05:20:32 rpc -- scripts/common.sh@355 -- # echo 1
00:04:32.483 05:20:32 rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:04:32.483 05:20:32 rpc -- scripts/common.sh@366 -- # decimal 2
00:04:32.483 05:20:32 rpc -- scripts/common.sh@353 -- # local d=2
00:04:32.483 05:20:32 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:32.483 05:20:32 rpc -- scripts/common.sh@355 -- # echo 2
00:04:32.483 05:20:32 rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:04:32.483 05:20:32 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:04:32.483 05:20:32 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:04:32.483 05:20:32 rpc -- scripts/common.sh@368 -- # return 0
00:04:32.483 05:20:32 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:32.483 05:20:32 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:04:32.483 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:32.483 --rc genhtml_branch_coverage=1
00:04:32.483 --rc genhtml_function_coverage=1
00:04:32.483 --rc genhtml_legend=1
00:04:32.483 --rc geninfo_all_blocks=1
00:04:32.483 --rc geninfo_unexecuted_blocks=1
00:04:32.483
00:04:32.483 '
00:04:32.483 05:20:32 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:04:32.483 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:32.483 --rc genhtml_branch_coverage=1
00:04:32.483 --rc genhtml_function_coverage=1
00:04:32.483 --rc genhtml_legend=1
00:04:32.483 --rc geninfo_all_blocks=1
00:04:32.483 --rc geninfo_unexecuted_blocks=1
00:04:32.483
00:04:32.483 '
00:04:32.483 05:20:32 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:04:32.483 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:32.483 --rc genhtml_branch_coverage=1
00:04:32.483 --rc genhtml_function_coverage=1
00:04:32.483 --rc genhtml_legend=1
00:04:32.483 --rc geninfo_all_blocks=1
00:04:32.483 --rc geninfo_unexecuted_blocks=1
00:04:32.483
00:04:32.483 '
00:04:32.483 05:20:32 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:04:32.483 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:32.483 --rc genhtml_branch_coverage=1
00:04:32.483 --rc genhtml_function_coverage=1
00:04:32.483 --rc genhtml_legend=1
00:04:32.483 --rc geninfo_all_blocks=1
00:04:32.483 --rc geninfo_unexecuted_blocks=1
00:04:32.483
00:04:32.483 '
00:04:32.483 05:20:32 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev
00:04:32.483 05:20:32 rpc -- rpc/rpc.sh@65 -- # spdk_pid=104262
00:04:32.483 05:20:32 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:04:32.483 05:20:32 rpc -- rpc/rpc.sh@67 -- # waitforlisten 104262
00:04:32.483 05:20:32 rpc -- common/autotest_common.sh@835 -- # '[' -z 104262 ']'
00:04:32.483 05:20:32 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:32.483 05:20:32 rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:32.483 05:20:32 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:04:32.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:32.483 05:20:32 rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:32.483 05:20:32 rpc -- common/autotest_common.sh@10 -- # set +x
00:04:32.483 [2024-12-13 05:20:32.493153] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization...
00:04:32.483 [2024-12-13 05:20:32.493199] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104262 ]
00:04:32.742 [2024-12-13 05:20:32.572163] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:32.742 [2024-12-13 05:20:32.594301] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified.
00:04:32.742 [2024-12-13 05:20:32.594336] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 104262' to capture a snapshot of events at runtime.
00:04:32.742 [2024-12-13 05:20:32.594343] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:04:32.742 [2024-12-13 05:20:32.594348] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:04:32.742 [2024-12-13 05:20:32.594353] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid104262 for offline analysis/debug.
00:04:32.742 [2024-12-13 05:20:32.594848] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:04:33.002 05:20:32 rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:33.002 05:20:32 rpc -- common/autotest_common.sh@868 -- # return 0
00:04:33.002 05:20:32 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:04:33.002 05:20:32 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:04:33.002 05:20:32 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd
00:04:33.002 05:20:32 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity
00:04:33.002 05:20:32 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:33.002 05:20:32 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:33.002 05:20:32 rpc -- common/autotest_common.sh@10 -- # set +x
00:04:33.002 ************************************
00:04:33.002 START TEST rpc_integrity
00:04:33.002 ************************************
00:04:33.002 05:20:32 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity
00:04:33.002 05:20:32 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:04:33.002 05:20:32 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:33.002 05:20:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:33.002 05:20:32 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:33.002 05:20:32 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]'
00:04:33.002 05:20:32 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length
00:04:33.002 05:20:32 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:04:33.002 05:20:32 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:04:33.002 05:20:32 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:33.002 05:20:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:33.002 05:20:32 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:33.002 05:20:32 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0
00:04:33.002 05:20:32 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:04:33.002 05:20:32 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:33.002 05:20:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:33.002 05:20:32 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:33.002 05:20:32 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[
00:04:33.002 {
00:04:33.002 "name": "Malloc0",
00:04:33.002 "aliases": [
00:04:33.002 "e8630181-c638-4b93-9030-b47ee1b0d128"
00:04:33.002 ],
00:04:33.002 "product_name": "Malloc disk",
00:04:33.002 "block_size": 512,
00:04:33.002 "num_blocks": 16384,
00:04:33.002 "uuid": "e8630181-c638-4b93-9030-b47ee1b0d128",
00:04:33.002 "assigned_rate_limits": {
00:04:33.002 "rw_ios_per_sec": 0,
00:04:33.002 "rw_mbytes_per_sec": 0,
00:04:33.002 "r_mbytes_per_sec": 0,
00:04:33.002 "w_mbytes_per_sec": 0
00:04:33.002 },
00:04:33.002 "claimed": false,
00:04:33.002 "zoned": false,
00:04:33.002 "supported_io_types": {
00:04:33.002 "read": true,
00:04:33.002 "write": true,
00:04:33.002 "unmap": true,
00:04:33.002 "flush": true,
00:04:33.002 "reset": true,
00:04:33.002 "nvme_admin": false,
00:04:33.002 "nvme_io": false,
00:04:33.002 "nvme_io_md": false,
00:04:33.002 "write_zeroes": true,
00:04:33.002 "zcopy": true,
00:04:33.002 "get_zone_info": false,
00:04:33.002 "zone_management": false,
00:04:33.002 "zone_append": false,
00:04:33.002 "compare": false,
00:04:33.002 "compare_and_write": false,
00:04:33.002 "abort": true,
00:04:33.002 "seek_hole": false,
00:04:33.002 "seek_data": false,
00:04:33.002 "copy": true,
00:04:33.002 "nvme_iov_md": false
00:04:33.002 },
00:04:33.002 "memory_domains": [
00:04:33.002 {
00:04:33.002 "dma_device_id": "system",
00:04:33.002 "dma_device_type": 1
00:04:33.002 },
00:04:33.002 {
00:04:33.002 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:33.002 "dma_device_type": 2
00:04:33.002 }
00:04:33.002 ],
00:04:33.002 "driver_specific": {}
00:04:33.002 }
00:04:33.002 ]'
00:04:33.002 05:20:32 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length
00:04:33.002 05:20:32 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:04:33.002 05:20:32 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0
00:04:33.002 05:20:32 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:33.002 05:20:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:33.002 [2024-12-13 05:20:32.959588] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0
[2024-12-13 05:20:32.959615] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
[2024-12-13 05:20:32.959627] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2654a00
[2024-12-13 05:20:32.959633] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
[2024-12-13 05:20:32.960677] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
[2024-12-13 05:20:32.960697] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
Passthru0
00:04:33.002 05:20:32 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:33.002 05:20:32 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:04:33.002 05:20:32 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:33.002 05:20:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:33.002 05:20:32 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:33.002 05:20:32 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[
00:04:33.002 {
00:04:33.002 "name": "Malloc0",
00:04:33.003 "aliases": [
00:04:33.003 "e8630181-c638-4b93-9030-b47ee1b0d128"
00:04:33.003 ],
00:04:33.003 "product_name": "Malloc disk",
00:04:33.003 "block_size": 512,
00:04:33.003 "num_blocks": 16384,
00:04:33.003 "uuid": "e8630181-c638-4b93-9030-b47ee1b0d128",
00:04:33.003 "assigned_rate_limits": {
00:04:33.003 "rw_ios_per_sec": 0,
00:04:33.003 "rw_mbytes_per_sec": 0,
00:04:33.003 "r_mbytes_per_sec": 0,
00:04:33.003 "w_mbytes_per_sec": 0
00:04:33.003 },
00:04:33.003 "claimed": true,
00:04:33.003 "claim_type": "exclusive_write",
00:04:33.003 "zoned": false,
00:04:33.003 "supported_io_types": {
00:04:33.003 "read": true,
00:04:33.003 "write": true,
00:04:33.003 "unmap": true,
00:04:33.003 "flush": true,
00:04:33.003 "reset": true,
00:04:33.003 "nvme_admin": false,
00:04:33.003 "nvme_io": false,
00:04:33.003 "nvme_io_md": false,
00:04:33.003 "write_zeroes": true,
00:04:33.003 "zcopy": true,
00:04:33.003 "get_zone_info": false,
00:04:33.003 "zone_management": false,
00:04:33.003 "zone_append": false,
00:04:33.003 "compare": false,
00:04:33.003 "compare_and_write": false,
00:04:33.003 "abort": true,
00:04:33.003 "seek_hole": false,
00:04:33.003 "seek_data": false,
00:04:33.003 "copy": true,
00:04:33.003 "nvme_iov_md": false
00:04:33.003 },
00:04:33.003 "memory_domains": [
00:04:33.003 {
00:04:33.003 "dma_device_id": "system",
00:04:33.003 "dma_device_type": 1
00:04:33.003 },
00:04:33.003 {
00:04:33.003 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:33.003 "dma_device_type": 2
00:04:33.003 }
00:04:33.003 ],
00:04:33.003 "driver_specific": {}
00:04:33.003 },
00:04:33.003 {
00:04:33.003 "name": "Passthru0",
00:04:33.003 "aliases": [
00:04:33.003 "51b9e0f0-c21b-5bec-9faa-a9a020f584b3"
00:04:33.003 ],
00:04:33.003 "product_name": "passthru",
00:04:33.003 "block_size": 512,
00:04:33.003 "num_blocks": 16384,
00:04:33.003 "uuid": "51b9e0f0-c21b-5bec-9faa-a9a020f584b3",
00:04:33.003 "assigned_rate_limits": {
00:04:33.003 "rw_ios_per_sec": 0,
00:04:33.003 "rw_mbytes_per_sec": 0,
00:04:33.003 "r_mbytes_per_sec": 0,
00:04:33.003 "w_mbytes_per_sec": 0
00:04:33.003 },
00:04:33.003 "claimed": false,
00:04:33.003 "zoned": false,
00:04:33.003 "supported_io_types": {
00:04:33.003 "read": true,
00:04:33.003 "write": true,
00:04:33.003 "unmap": true,
00:04:33.003 "flush": true,
00:04:33.003 "reset": true,
00:04:33.003 "nvme_admin": false,
00:04:33.003 "nvme_io": false,
00:04:33.003 "nvme_io_md": false,
00:04:33.003 "write_zeroes": true,
00:04:33.003 "zcopy": true,
00:04:33.003 "get_zone_info": false,
00:04:33.003 "zone_management": false,
00:04:33.003 "zone_append": false,
00:04:33.003 "compare": false,
00:04:33.003 "compare_and_write": false,
00:04:33.003 "abort": true,
00:04:33.003 "seek_hole": false,
00:04:33.003 "seek_data": false,
00:04:33.003 "copy": true,
00:04:33.003 "nvme_iov_md": false
00:04:33.003 },
00:04:33.003 "memory_domains": [
00:04:33.003 {
00:04:33.003 "dma_device_id": "system",
00:04:33.003 "dma_device_type": 1
00:04:33.003 },
00:04:33.003 {
00:04:33.003 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:33.003 "dma_device_type": 2
00:04:33.003 }
00:04:33.003 ],
00:04:33.003 "driver_specific": {
00:04:33.003 "passthru": {
00:04:33.003 "name": "Passthru0",
00:04:33.003 "base_bdev_name": "Malloc0"
00:04:33.003 }
00:04:33.003 }
00:04:33.003 }
00:04:33.003 ]'
00:04:33.262 05:20:33 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length
00:04:33.262 05:20:33 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']'
00:04:33.262 05:20:33 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0
00:04:33.262 05:20:33 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:33.262 05:20:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:33.262 05:20:33 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:33.262 05:20:33 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0
00:04:33.262 05:20:33 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:33.262 05:20:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:33.262 05:20:33 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:33.262 05:20:33 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs
00:04:33.262 05:20:33 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:33.262 05:20:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:33.262 05:20:33 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:33.262 05:20:33 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]'
00:04:33.262 05:20:33 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length
00:04:33.262 05:20:33 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']'
00:04:33.262
00:04:33.262 real 0m0.275s
00:04:33.262 user 0m0.179s
00:04:33.262 sys 0m0.033s
00:04:33.262 05:20:33 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:33.262 05:20:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:33.262 ************************************
00:04:33.262 END TEST rpc_integrity
00:04:33.262 ************************************
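[Editor's note] The pretty-printed bdev listings above are emitted through SPDK's JSON write API. A small, hedged illustration of that API producing a bdev-like object; the field values here are made up to mirror the dump, and the writer callback simply forwards bytes to stdout:

```c
/* Illustration of the spdk_json_write API behind dumps like the bdev
 * listings above. Not the actual bdev_get_bdevs implementation. */
#include <stdio.h>
#include "spdk/json.h"

static int
write_cb(void *ctx, const void *data, size_t size)
{
	return fwrite(data, 1, size, stdout) == size ? 0 : -1;
}

int
main(void)
{
	struct spdk_json_write_ctx *w =
		spdk_json_write_begin(write_cb, NULL, SPDK_JSON_WRITE_FLAG_FORMATTED);

	spdk_json_write_object_begin(w);
	spdk_json_write_named_string(w, "name", "Malloc0");  /* made-up bdev */
	spdk_json_write_named_uint32(w, "block_size", 512);
	spdk_json_write_named_uint64(w, "num_blocks", 16384);
	spdk_json_write_named_bool(w, "claimed", false);
	spdk_json_write_object_end(w);

	return spdk_json_write_end(w);
}
```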
00:04:33.262 05:20:33 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins
00:04:33.262 05:20:33 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:33.262 05:20:33 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:33.262 05:20:33 rpc -- common/autotest_common.sh@10 -- # set +x
00:04:33.262 ************************************
00:04:33.262 START TEST rpc_plugins
00:04:33.262 ************************************
00:04:33.262 05:20:33 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins
00:04:33.262 05:20:33 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc
00:04:33.262 05:20:33 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:33.262 05:20:33 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:04:33.262 05:20:33 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:33.262 05:20:33 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1
00:04:33.262 05:20:33 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs
00:04:33.262 05:20:33 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:33.262 05:20:33 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:04:33.262 05:20:33 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:33.262 05:20:33 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[
00:04:33.262 {
00:04:33.262 "name": "Malloc1",
00:04:33.262 "aliases": [
00:04:33.262 "f542f29b-b8cb-4a9f-9e27-b31d5ec752d0"
00:04:33.262 ],
00:04:33.262 "product_name": "Malloc disk",
00:04:33.262 "block_size": 4096,
00:04:33.262 "num_blocks": 256,
00:04:33.262 "uuid": "f542f29b-b8cb-4a9f-9e27-b31d5ec752d0",
00:04:33.262 "assigned_rate_limits": {
00:04:33.262 "rw_ios_per_sec": 0,
00:04:33.262 "rw_mbytes_per_sec": 0,
00:04:33.262 "r_mbytes_per_sec": 0,
00:04:33.262 "w_mbytes_per_sec": 0
00:04:33.262 },
00:04:33.262 "claimed": false,
00:04:33.262 "zoned": false,
00:04:33.262 "supported_io_types": {
00:04:33.262 "read": true,
00:04:33.262 "write": true,
00:04:33.262 "unmap": true,
00:04:33.262 "flush": true,
00:04:33.262 "reset": true,
00:04:33.262 "nvme_admin": false,
00:04:33.262 "nvme_io": false,
00:04:33.262 "nvme_io_md": false,
00:04:33.262 "write_zeroes": true,
00:04:33.262 "zcopy": true,
00:04:33.262 "get_zone_info": false,
00:04:33.262 "zone_management": false,
00:04:33.262 "zone_append": false,
00:04:33.262 "compare": false,
00:04:33.262 "compare_and_write": false,
00:04:33.262 "abort": true,
00:04:33.262 "seek_hole": false,
00:04:33.262 "seek_data": false,
00:04:33.262 "copy": true,
00:04:33.262 "nvme_iov_md": false
00:04:33.262 },
00:04:33.262 "memory_domains": [
00:04:33.262 {
00:04:33.262 "dma_device_id": "system",
00:04:33.262 "dma_device_type": 1
00:04:33.262 },
00:04:33.262 {
00:04:33.262 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:33.262 "dma_device_type": 2
00:04:33.262 }
00:04:33.262 ],
00:04:33.262 "driver_specific": {}
00:04:33.262 }
00:04:33.262 ]'
00:04:33.262 05:20:33 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length
00:04:33.262 05:20:33 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']'
00:04:33.262 05:20:33 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1
00:04:33.262 05:20:33 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:33.262 05:20:33 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:04:33.262 05:20:33 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:33.262 05:20:33 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs
00:04:33.262 05:20:33 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:33.262 05:20:33 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:04:33.522 05:20:33 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:33.522 05:20:33 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]'
00:04:33.522 05:20:33 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length
00:04:33.522 05:20:33 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']'
00:04:33.522
00:04:33.522 real 0m0.151s
00:04:33.522 user 0m0.092s
00:04:33.522 sys 0m0.019s
00:04:33.522 05:20:33 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:33.522 05:20:33 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:04:33.522 ************************************
00:04:33.522 END TEST rpc_plugins
00:04:33.522 ************************************
00:04:33.522 05:20:33 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test
00:04:33.522 05:20:33 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:33.522 05:20:33 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:33.522 05:20:33 rpc -- common/autotest_common.sh@10 -- # set +x
00:04:33.522 ************************************
00:04:33.522 START TEST rpc_trace_cmd_test
00:04:33.522 ************************************
00:04:33.522 05:20:33 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test
00:04:33.522 05:20:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info
00:04:33.522 05:20:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info
00:04:33.522 05:20:33 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:33.522 05:20:33 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x
00:04:33.522 05:20:33 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:33.522 05:20:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{
00:04:33.522 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid104262",
00:04:33.522 "tpoint_group_mask": "0x8",
00:04:33.522 "iscsi_conn": {
00:04:33.522 "mask": "0x2",
00:04:33.522 "tpoint_mask": "0x0"
00:04:33.522 },
00:04:33.522 "scsi": {
00:04:33.522 "mask": "0x4",
00:04:33.522 "tpoint_mask": "0x0"
00:04:33.522 },
00:04:33.522 "bdev": {
00:04:33.522 "mask": "0x8",
00:04:33.522 "tpoint_mask": "0xffffffffffffffff"
00:04:33.522 },
00:04:33.522 "nvmf_rdma": {
00:04:33.522 "mask": "0x10",
00:04:33.522 "tpoint_mask": "0x0"
00:04:33.522 },
00:04:33.522 "nvmf_tcp": {
00:04:33.522 "mask": "0x20",
00:04:33.522 "tpoint_mask": "0x0"
00:04:33.522 },
00:04:33.522 "ftl": {
00:04:33.522 "mask": "0x40",
00:04:33.522 "tpoint_mask": "0x0"
00:04:33.522 },
00:04:33.522 "blobfs": {
00:04:33.522 "mask": "0x80",
00:04:33.522 "tpoint_mask": "0x0"
00:04:33.522 },
00:04:33.522 "dsa": {
00:04:33.522 "mask": "0x200",
00:04:33.522 "tpoint_mask": "0x0"
00:04:33.522 },
00:04:33.522 "thread": {
00:04:33.522 "mask": "0x400",
00:04:33.522 "tpoint_mask": "0x0"
00:04:33.522 },
00:04:33.522 "nvme_pcie": {
00:04:33.522 "mask": "0x800",
00:04:33.522 "tpoint_mask": "0x0"
00:04:33.522 },
00:04:33.522 "iaa": {
00:04:33.522 "mask": "0x1000",
00:04:33.522 "tpoint_mask": "0x0"
00:04:33.522 },
00:04:33.522 "nvme_tcp": {
00:04:33.522 "mask": "0x2000",
00:04:33.522 "tpoint_mask": "0x0"
00:04:33.522 },
00:04:33.522 "bdev_nvme": {
00:04:33.522 "mask": "0x4000",
00:04:33.522 "tpoint_mask": "0x0"
00:04:33.522 },
00:04:33.522 "sock": {
00:04:33.522 "mask": "0x8000",
00:04:33.522 "tpoint_mask": "0x0"
00:04:33.522 },
00:04:33.522 "blob": {
00:04:33.522 "mask": "0x10000",
00:04:33.522 "tpoint_mask": "0x0"
00:04:33.522 },
00:04:33.522 "bdev_raid": {
00:04:33.522 "mask": "0x20000",
00:04:33.522 "tpoint_mask": "0x0"
00:04:33.522 },
00:04:33.522 "scheduler": {
00:04:33.522 "mask": "0x40000",
00:04:33.522 "tpoint_mask": "0x0"
00:04:33.522 }
00:04:33.522 }'
00:04:33.522 05:20:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length
00:04:33.522 05:20:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']'
00:04:33.522 05:20:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")'
00:04:33.522 05:20:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']'
00:04:33.522 05:20:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")'
00:04:33.522 05:20:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']'
00:04:33.522 05:20:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")'
00:04:33.781 05:20:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']'
00:04:33.781 05:20:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask
00:04:33.781 05:20:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']'
00:04:33.781
00:04:33.781 real 0m0.208s
00:04:33.781 user 0m0.172s
00:04:33.781 sys 0m0.028s
00:04:33.781 05:20:33 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:33.781 05:20:33 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x
00:04:33.781 ************************************
00:04:33.781 END TEST rpc_trace_cmd_test
00:04:33.781 ************************************
00:04:33.781 05:20:33 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]]
00:04:33.781 05:20:33 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd
00:04:33.781 05:20:33 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity
00:04:33.781 05:20:33 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:33.781 05:20:33 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:33.781 05:20:33 rpc -- common/autotest_common.sh@10 -- # set +x
00:04:33.781 ************************************
00:04:33.781 START TEST rpc_daemon_integrity
00:04:33.781 ************************************
00:04:33.781 05:20:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity
00:04:33.781 05:20:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:04:33.781 05:20:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:33.781 05:20:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:33.781 05:20:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:33.781 05:20:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]'
00:04:33.781 05:20:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length
00:04:33.781 05:20:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:04:33.781 05:20:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:04:33.781 05:20:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:33.781 05:20:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:33.781 05:20:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:33.781 05:20:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2
00:04:33.781 05:20:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:04:33.781 05:20:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:33.781 05:20:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:33.781 05:20:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:33.781 05:20:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[
00:04:33.781 {
00:04:33.781 "name": "Malloc2",
00:04:33.781 "aliases": [
00:04:33.781 "02967bad-e360-45c1-b1d3-53ee92ace7c6"
00:04:33.781 ],
00:04:33.781 "product_name": "Malloc disk",
00:04:33.781 "block_size": 512,
00:04:33.781 "num_blocks": 16384,
00:04:33.781 "uuid": "02967bad-e360-45c1-b1d3-53ee92ace7c6",
00:04:33.781 "assigned_rate_limits": {
00:04:33.781 "rw_ios_per_sec": 0,
00:04:33.781 "rw_mbytes_per_sec": 0,
00:04:33.781 "r_mbytes_per_sec": 0,
00:04:33.781 "w_mbytes_per_sec": 0
00:04:33.781 },
00:04:33.781 "claimed": false,
00:04:33.781 "zoned": false,
00:04:33.781 "supported_io_types": {
00:04:33.781 "read": true,
00:04:33.781 "write": true,
00:04:33.781 "unmap": true,
00:04:33.781 "flush": true,
00:04:33.781 "reset": true,
00:04:33.781 "nvme_admin": false,
00:04:33.781 "nvme_io": false,
00:04:33.781 "nvme_io_md": false,
00:04:33.781 "write_zeroes": true,
00:04:33.781 "zcopy": true,
00:04:33.781 "get_zone_info": false,
00:04:33.781 "zone_management": false,
00:04:33.781 "zone_append": false,
00:04:33.781 "compare": false,
00:04:33.781 "compare_and_write": false,
00:04:33.781 "abort": true,
00:04:33.781 "seek_hole": false,
00:04:33.781 "seek_data": false,
00:04:33.781 "copy": true,
00:04:33.781 "nvme_iov_md": false
00:04:33.781 },
00:04:33.781 "memory_domains": [
00:04:33.781 {
00:04:33.781 "dma_device_id": "system",
00:04:33.781 "dma_device_type": 1
00:04:33.781 },
00:04:33.781 {
00:04:33.781 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:33.781 "dma_device_type": 2
00:04:33.781 }
00:04:33.781 ],
00:04:33.781 "driver_specific": {}
00:04:33.781 }
00:04:33.781 ]'
00:04:33.781 05:20:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length
00:04:33.781 05:20:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:04:33.781 05:20:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0
00:04:33.781 05:20:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:33.781 05:20:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:34.041 [2024-12-13 05:20:33.797862] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2
[2024-12-13 05:20:33.797888] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
[2024-12-13 05:20:33.797899] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2512ac0
[2024-12-13 05:20:33.797905] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
[2024-12-13 05:20:33.798833] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
[2024-12-13 05:20:33.798851] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
Passthru0
00:04:34.041 05:20:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:34.041 05:20:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:04:34.041 05:20:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:34.041 05:20:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:34.041 05:20:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:34.041 05:20:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[
00:04:34.041 {
00:04:34.041 "name": "Malloc2",
00:04:34.041 "aliases": [
00:04:34.041 "02967bad-e360-45c1-b1d3-53ee92ace7c6"
00:04:34.041 ],
00:04:34.041 "product_name": "Malloc disk",
00:04:34.041 "block_size": 512,
00:04:34.041 "num_blocks": 16384,
00:04:34.041 "uuid": "02967bad-e360-45c1-b1d3-53ee92ace7c6",
00:04:34.041 "assigned_rate_limits": {
00:04:34.041 "rw_ios_per_sec": 0,
00:04:34.041 "rw_mbytes_per_sec": 0,
00:04:34.041 "r_mbytes_per_sec": 0,
00:04:34.041 "w_mbytes_per_sec": 0
00:04:34.041 },
00:04:34.041 "claimed": true,
00:04:34.041 "claim_type": "exclusive_write",
00:04:34.041 "zoned": false,
00:04:34.041 "supported_io_types": {
00:04:34.041 "read": true,
00:04:34.041 "write": true,
00:04:34.041 "unmap": true,
00:04:34.041 "flush": true,
00:04:34.041 "reset": true,
00:04:34.041 "nvme_admin": false,
00:04:34.041 "nvme_io": false,
00:04:34.041 "nvme_io_md": false,
00:04:34.041 "write_zeroes": true,
00:04:34.041 "zcopy": true,
00:04:34.041 "get_zone_info": false,
00:04:34.041 "zone_management": false,
00:04:34.041 "zone_append": false,
00:04:34.041 "compare": false,
00:04:34.041 "compare_and_write": false,
00:04:34.041 "abort": true,
00:04:34.041 "seek_hole": false,
00:04:34.041 "seek_data": false,
00:04:34.041 "copy": true,
00:04:34.041 "nvme_iov_md": false
00:04:34.041 },
00:04:34.041 "memory_domains": [
00:04:34.041 {
00:04:34.041 "dma_device_id": "system",
00:04:34.041 "dma_device_type": 1
00:04:34.041 },
00:04:34.041 {
00:04:34.041 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:34.041 "dma_device_type": 2
00:04:34.041 }
00:04:34.041 ],
00:04:34.041 "driver_specific": {}
00:04:34.041 },
00:04:34.041 {
00:04:34.041 "name": "Passthru0",
00:04:34.041 "aliases": [
00:04:34.041 "965f6ddd-2c08-51f2-ac02-0c8d2c5a8167"
00:04:34.041 ],
00:04:34.041 "product_name": "passthru",
00:04:34.041 "block_size": 512,
00:04:34.041 "num_blocks": 16384,
00:04:34.041 "uuid": "965f6ddd-2c08-51f2-ac02-0c8d2c5a8167",
00:04:34.041 "assigned_rate_limits": {
00:04:34.041 "rw_ios_per_sec": 0,
00:04:34.041 "rw_mbytes_per_sec": 0,
00:04:34.041 "r_mbytes_per_sec": 0,
00:04:34.041 "w_mbytes_per_sec": 0
00:04:34.041 },
00:04:34.041 "claimed": false,
00:04:34.041 "zoned": false,
00:04:34.041 "supported_io_types": {
00:04:34.041 "read": true,
00:04:34.041 "write": true,
00:04:34.041 "unmap": true,
00:04:34.041 "flush": true,
00:04:34.041 "reset": true,
00:04:34.041 "nvme_admin": false,
00:04:34.041 "nvme_io": false,
00:04:34.041 "nvme_io_md": false,
00:04:34.041 "write_zeroes": true,
00:04:34.041 "zcopy": true,
00:04:34.041 "get_zone_info": false,
00:04:34.041 "zone_management": false,
00:04:34.041 "zone_append": false,
00:04:34.041 "compare": false,
00:04:34.041 "compare_and_write": false,
00:04:34.041 "abort": true,
00:04:34.041 "seek_hole": false,
00:04:34.041 "seek_data": false,
00:04:34.041 "copy": true,
00:04:34.041 "nvme_iov_md": false
00:04:34.041 },
00:04:34.041 "memory_domains": [
00:04:34.041 {
00:04:34.041 "dma_device_id": "system",
00:04:34.041 "dma_device_type": 1
00:04:34.041 },
00:04:34.041 {
00:04:34.041 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:34.041 "dma_device_type": 2
00:04:34.041 }
00:04:34.041 ],
00:04:34.041 "driver_specific": {
00:04:34.041 "passthru": {
00:04:34.041 "name": "Passthru0",
00:04:34.041 "base_bdev_name": "Malloc2"
00:04:34.041 }
00:04:34.041 }
00:04:34.041 }
00:04:34.041 ]'
00:04:34.041 05:20:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length
00:04:34.041 05:20:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']'
00:04:34.041 05:20:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0
00:04:34.041 05:20:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:34.041 05:20:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:34.041 05:20:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:34.041 05:20:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2
00:04:34.041 05:20:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:34.041 05:20:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:34.041 05:20:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:34.041 05:20:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs
00:04:34.041 05:20:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:34.041 05:20:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:34.041 05:20:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:34.041 05:20:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]'
00:04:34.041 05:20:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length
00:04:34.041 05:20:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']'
00:04:34.042
00:04:34.042 real 0m0.281s
00:04:34.042 user 0m0.179s
00:04:34.042 sys 0m0.034s
00:04:34.042 05:20:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:34.042 05:20:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:34.042 ************************************
00:04:34.042 END TEST rpc_daemon_integrity
00:04:34.042 ************************************
00:04:34.042 05:20:33 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT
00:04:34.042 05:20:33 rpc -- rpc/rpc.sh@84 -- # killprocess 104262
00:04:34.042 05:20:33 rpc -- common/autotest_common.sh@954 -- # '[' -z 104262 ']'
00:04:34.042 05:20:33 rpc -- common/autotest_common.sh@958 -- # kill -0 104262
00:04:34.042 05:20:33 rpc -- common/autotest_common.sh@959 -- # uname
00:04:34.042 05:20:33 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:04:34.042 05:20:33 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 104262
00:04:34.042 05:20:34 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:04:34.042 05:20:34 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:04:34.042 05:20:34 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 104262'
00:04:34.042 killing process with pid 104262
00:04:34.042 05:20:34 rpc -- common/autotest_common.sh@973 -- # kill 104262
00:04:34.042 05:20:34 rpc -- common/autotest_common.sh@978 -- # wait 104262
00:04:34.610
00:04:34.610 real 0m2.046s
00:04:34.610 user 0m2.636s
00:04:34.610 sys 0m0.665s
00:04:34.610 05:20:34 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:34.610 05:20:34 rpc -- common/autotest_common.sh@10 -- # set +x
00:04:34.610 ************************************
00:04:34.610 END TEST rpc
00:04:34.610 ************************************
00:04:34.610 05:20:34 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh
00:04:34.610 05:20:34 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:34.610 05:20:34 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:34.610 05:20:34 -- common/autotest_common.sh@10 -- # set +x
00:04:34.610 ************************************
00:04:34.610 START TEST skip_rpc
00:04:34.610 ************************************
00:04:34.610 05:20:34 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh
00:04:34.610 * Looking for test storage...
00:04:34.610 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:04:34.610 05:20:34 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:04:34.610 05:20:34 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version
00:04:34.610 05:20:34 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:04:34.610 05:20:34 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:04:34.610 05:20:34 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:04:34.610 05:20:34 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:04:34.610 05:20:34 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:04:34.610 05:20:34 skip_rpc -- scripts/common.sh@336 -- # IFS=.-:
00:04:34.610 05:20:34 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1
00:04:34.610 05:20:34 skip_rpc -- scripts/common.sh@337 -- # IFS=.-:
00:04:34.610 05:20:34 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2
00:04:34.610 05:20:34 skip_rpc -- scripts/common.sh@338 -- # local 'op=<'
00:04:34.610 05:20:34 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2
00:04:34.610 05:20:34 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1
00:04:34.610 05:20:34 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:04:34.610 05:20:34 skip_rpc -- scripts/common.sh@344 -- # case "$op" in
00:04:34.610 05:20:34 skip_rpc -- scripts/common.sh@345 -- # : 1
00:04:34.610 05:20:34 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:04:34.610 05:20:34 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:34.610 05:20:34 skip_rpc -- scripts/common.sh@365 -- # decimal 1
00:04:34.610 05:20:34 skip_rpc -- scripts/common.sh@353 -- # local d=1
00:04:34.610 05:20:34 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:34.610 05:20:34 skip_rpc -- scripts/common.sh@355 -- # echo 1
00:04:34.611 05:20:34 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:04:34.611 05:20:34 skip_rpc -- scripts/common.sh@366 -- # decimal 2
00:04:34.611 05:20:34 skip_rpc -- scripts/common.sh@353 -- # local d=2
00:04:34.611 05:20:34 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:34.611 05:20:34 skip_rpc -- scripts/common.sh@355 -- # echo 2
00:04:34.611 05:20:34 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:04:34.611 05:20:34 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:04:34.611 05:20:34 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:04:34.611 05:20:34 skip_rpc -- scripts/common.sh@368 -- # return 0
00:04:34.611 05:20:34 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:34.611 05:20:34 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:04:34.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:34.611 --rc genhtml_branch_coverage=1
00:04:34.611 --rc genhtml_function_coverage=1
00:04:34.611 --rc genhtml_legend=1
00:04:34.611 --rc geninfo_all_blocks=1
00:04:34.611 --rc geninfo_unexecuted_blocks=1
00:04:34.611
00:04:34.611 '
00:04:34.611 05:20:34 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:04:34.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:34.611 --rc genhtml_branch_coverage=1
00:04:34.611 --rc genhtml_function_coverage=1
00:04:34.611 --rc genhtml_legend=1
00:04:34.611 --rc geninfo_all_blocks=1
00:04:34.611 --rc geninfo_unexecuted_blocks=1
00:04:34.611
00:04:34.611 '
00:04:34.611 05:20:34 skip_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:04:34.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:34.611 --rc genhtml_branch_coverage=1
00:04:34.611 --rc genhtml_function_coverage=1
00:04:34.611 --rc genhtml_legend=1
00:04:34.611 --rc geninfo_all_blocks=1
00:04:34.611 --rc geninfo_unexecuted_blocks=1
00:04:34.611
00:04:34.611 '
00:04:34.611 05:20:34 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:04:34.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:34.611 --rc genhtml_branch_coverage=1
00:04:34.611 --rc genhtml_function_coverage=1
00:04:34.611 --rc genhtml_legend=1
00:04:34.611 --rc geninfo_all_blocks=1
00:04:34.611 --rc geninfo_unexecuted_blocks=1
00:04:34.611
00:04:34.611 '
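[Editor's note] The scripts/common.sh xtrace above is a dotted-version comparison (here deciding that lcov 1.15 predates 2.x): split both strings on separators and compare field by field, numerically. The same logic rendered in C for reference; this is a hypothetical helper, not part of the SPDK repo:

```c
/* C rendition of the cmp_versions logic traced from scripts/common.sh:
 * compare dotted version strings field by field, numerically. */
#include <stdio.h>
#include <stdlib.h>

/* Returns <0, 0 or >0, like strcmp. */
static int
cmp_versions(const char *a, const char *b)
{
	char *ea, *eb;

	while (*a || *b) {
		long x = strtol(a, &ea, 10);   /* missing fields read as 0 */
		long y = strtol(b, &eb, 10);
		if (x != y)
			return x < y ? -1 : 1;
		a = (*ea == '.') ? ea + 1 : ea;
		b = (*eb == '.') ? eb + 1 : eb;
	}
	return 0;
}

int
main(void)
{
	/* "lt 1.15 2" from the log: 1.15 is older than 2, so prints 1. */
	printf("%d\n", cmp_versions("1.15", "2") < 0);
	return 0;
}
```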
00:04:34.611 05:20:34 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json
00:04:34.611 05:20:34 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt
00:04:34.611 05:20:34 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc
00:04:34.611 05:20:34 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:34.611 05:20:34 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:34.611 05:20:34 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:34.611 ************************************
00:04:34.611 START TEST skip_rpc
00:04:34.611 ************************************
00:04:34.611 05:20:34 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc
00:04:34.611 05:20:34 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=104885
00:04:34.611 05:20:34 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:04:34.611 05:20:34 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1
00:04:34.611 05:20:34 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5
00:04:34.870 [2024-12-13 05:20:34.644292] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization...
00:04:34.870 [2024-12-13 05:20:34.644328] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104885 ]
00:04:34.870 [2024-12-13 05:20:34.714461] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:34.870 [2024-12-13 05:20:34.736722] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:04:40.141 05:20:39 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version
00:04:40.141 05:20:39 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0
00:04:40.141 05:20:39 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version
00:04:40.141 05:20:39 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:04:40.141 05:20:39 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:04:40.141 05:20:39 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:04:40.141 05:20:39 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:04:40.141 05:20:39 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version
00:04:40.141 05:20:39 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:40.141 05:20:39 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:40.141 05:20:39 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:04:40.141 05:20:39 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1
00:04:40.141 05:20:39 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:04:40.141 05:20:39 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:04:40.141 05:20:39 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:04:40.141 05:20:39 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT
00:04:40.141 05:20:39 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 104885
00:04:40.141 05:20:39 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 104885 ']'
00:04:40.141 05:20:39 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 104885
00:04:40.141 05:20:39 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname
00:04:40.141 05:20:39 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:04:40.141 05:20:39 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 104885
00:04:40.141 05:20:39 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:04:40.141 05:20:39 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:04:40.141 05:20:39 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 104885'
00:04:40.141 killing process with pid 104885
00:04:40.141 05:20:39 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 104885
00:04:40.141 05:20:39 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 104885
00:04:40.141
00:04:40.141 real 0m5.363s
00:04:40.141 user 0m5.119s
00:04:40.141 sys 0m0.280s
00:04:40.141 05:20:39 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:40.141 05:20:39 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:40.141 ************************************
00:04:40.141 END TEST skip_rpc
00:04:40.141 ************************************
00:04:40.141 05:20:39 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json
00:04:40.141 05:20:39 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:40.141 05:20:39 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:40.141 05:20:39 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:40.141 ************************************
00:04:40.141 START TEST skip_rpc_with_json
00:04:40.141 ************************************
00:04:40.141 05:20:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json
00:04:40.141 05:20:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config
00:04:40.141 05:20:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=105813
00:04:40.141 05:20:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:04:40.141 05:20:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:04:40.142 05:20:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 105813
00:04:40.142 05:20:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 105813 ']'
00:04:40.142 05:20:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:40.142 05:20:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:40.142 05:20:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:04:40.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:40.142 05:20:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:40.142 05:20:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:04:40.142 [2024-12-13 05:20:40.078952] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization...
00:04:40.142 [2024-12-13 05:20:40.078999] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105813 ]
00:04:40.142 [2024-12-13 05:20:40.154509] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:40.401 [2024-12-13 05:20:40.177584] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:04:40.401 05:20:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:40.401 05:20:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0
00:04:40.401 05:20:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp
00:04:40.401 05:20:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:40.401 05:20:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:04:40.401 [2024-12-13 05:20:40.383856] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist
00:04:40.401 request:
00:04:40.401 {
00:04:40.401 "trtype": "tcp",
00:04:40.401 "method": "nvmf_get_transports",
00:04:40.401 "req_id": 1
00:04:40.401 }
00:04:40.401 Got JSON-RPC error response
00:04:40.401 response:
00:04:40.401 {
00:04:40.401 "code": -19,
00:04:40.401 "message": "No such device"
00:04:40.401 }
00:04:40.401 05:20:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:04:40.401 05:20:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp
00:04:40.401 05:20:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:40.401 05:20:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:04:40.401 [2024-12-13 05:20:40.395967] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:04:40.401 05:20:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:40.401 05:20:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config
00:04:40.401 05:20:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:40.401 05:20:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:04:40.660 05:20:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:40.660 05:20:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json
00:04:40.660 {
00:04:40.660 "subsystems": [
00:04:40.660 {
00:04:40.660 "subsystem": "fsdev",
00:04:40.660 "config": [
00:04:40.660 {
00:04:40.660 "method": "fsdev_set_opts",
00:04:40.660 "params": {
00:04:40.660 "fsdev_io_pool_size": 65535,
00:04:40.660 "fsdev_io_cache_size": 256
00:04:40.660 }
00:04:40.660 }
00:04:40.660 ]
00:04:40.660 },
00:04:40.660 {
00:04:40.660 "subsystem": "vfio_user_target",
00:04:40.660 "config": null
00:04:40.660 },
00:04:40.660 {
00:04:40.660 "subsystem": "keyring",
00:04:40.660 "config": []
00:04:40.660 },
00:04:40.660 {
00:04:40.660 "subsystem": "iobuf",
00:04:40.660 "config": [
00:04:40.660 {
00:04:40.660 "method": "iobuf_set_options",
00:04:40.660 "params": {
00:04:40.660 "small_pool_count": 8192,
00:04:40.660 "large_pool_count": 1024,
00:04:40.660 "small_bufsize": 8192,
00:04:40.660 "large_bufsize": 135168,
00:04:40.660 "enable_numa": false
00:04:40.660 }
00:04:40.660 }
00:04:40.660
] 00:04:40.660 }, 00:04:40.660 { 00:04:40.660 "subsystem": "sock", 00:04:40.660 "config": [ 00:04:40.660 { 00:04:40.660 "method": "sock_set_default_impl", 00:04:40.660 "params": { 00:04:40.660 "impl_name": "posix" 00:04:40.660 } 00:04:40.660 }, 00:04:40.660 { 00:04:40.660 "method": "sock_impl_set_options", 00:04:40.660 "params": { 00:04:40.660 "impl_name": "ssl", 00:04:40.660 "recv_buf_size": 4096, 00:04:40.660 "send_buf_size": 4096, 00:04:40.660 "enable_recv_pipe": true, 00:04:40.660 "enable_quickack": false, 00:04:40.660 "enable_placement_id": 0, 00:04:40.660 "enable_zerocopy_send_server": true, 00:04:40.660 "enable_zerocopy_send_client": false, 00:04:40.660 "zerocopy_threshold": 0, 00:04:40.660 "tls_version": 0, 00:04:40.660 "enable_ktls": false 00:04:40.660 } 00:04:40.660 }, 00:04:40.660 { 00:04:40.660 "method": "sock_impl_set_options", 00:04:40.660 "params": { 00:04:40.660 "impl_name": "posix", 00:04:40.660 "recv_buf_size": 2097152, 00:04:40.660 "send_buf_size": 2097152, 00:04:40.660 "enable_recv_pipe": true, 00:04:40.660 "enable_quickack": false, 00:04:40.660 "enable_placement_id": 0, 00:04:40.660 "enable_zerocopy_send_server": true, 00:04:40.660 "enable_zerocopy_send_client": false, 00:04:40.660 "zerocopy_threshold": 0, 00:04:40.660 "tls_version": 0, 00:04:40.660 "enable_ktls": false 00:04:40.660 } 00:04:40.660 } 00:04:40.660 ] 00:04:40.660 }, 00:04:40.660 { 00:04:40.660 "subsystem": "vmd", 00:04:40.660 "config": [] 00:04:40.660 }, 00:04:40.660 { 00:04:40.660 "subsystem": "accel", 00:04:40.660 "config": [ 00:04:40.660 { 00:04:40.660 "method": "accel_set_options", 00:04:40.660 "params": { 00:04:40.660 "small_cache_size": 128, 00:04:40.660 "large_cache_size": 16, 00:04:40.660 "task_count": 2048, 00:04:40.660 "sequence_count": 2048, 00:04:40.660 "buf_count": 2048 00:04:40.660 } 00:04:40.660 } 00:04:40.660 ] 00:04:40.660 }, 00:04:40.660 { 00:04:40.660 "subsystem": "bdev", 00:04:40.660 "config": [ 00:04:40.660 { 00:04:40.660 "method": "bdev_set_options", 00:04:40.660 "params": { 00:04:40.660 "bdev_io_pool_size": 65535, 00:04:40.660 "bdev_io_cache_size": 256, 00:04:40.660 "bdev_auto_examine": true, 00:04:40.660 "iobuf_small_cache_size": 128, 00:04:40.660 "iobuf_large_cache_size": 16 00:04:40.660 } 00:04:40.660 }, 00:04:40.660 { 00:04:40.660 "method": "bdev_raid_set_options", 00:04:40.660 "params": { 00:04:40.660 "process_window_size_kb": 1024, 00:04:40.660 "process_max_bandwidth_mb_sec": 0 00:04:40.660 } 00:04:40.660 }, 00:04:40.660 { 00:04:40.660 "method": "bdev_iscsi_set_options", 00:04:40.660 "params": { 00:04:40.660 "timeout_sec": 30 00:04:40.660 } 00:04:40.660 }, 00:04:40.660 { 00:04:40.660 "method": "bdev_nvme_set_options", 00:04:40.660 "params": { 00:04:40.660 "action_on_timeout": "none", 00:04:40.660 "timeout_us": 0, 00:04:40.660 "timeout_admin_us": 0, 00:04:40.660 "keep_alive_timeout_ms": 10000, 00:04:40.660 "arbitration_burst": 0, 00:04:40.660 "low_priority_weight": 0, 00:04:40.660 "medium_priority_weight": 0, 00:04:40.660 "high_priority_weight": 0, 00:04:40.660 "nvme_adminq_poll_period_us": 10000, 00:04:40.660 "nvme_ioq_poll_period_us": 0, 00:04:40.660 "io_queue_requests": 0, 00:04:40.660 "delay_cmd_submit": true, 00:04:40.660 "transport_retry_count": 4, 00:04:40.660 "bdev_retry_count": 3, 00:04:40.660 "transport_ack_timeout": 0, 00:04:40.660 "ctrlr_loss_timeout_sec": 0, 00:04:40.660 "reconnect_delay_sec": 0, 00:04:40.660 "fast_io_fail_timeout_sec": 0, 00:04:40.660 "disable_auto_failback": false, 00:04:40.660 "generate_uuids": false, 00:04:40.660 "transport_tos": 0, 
00:04:40.660 "nvme_error_stat": false, 00:04:40.660 "rdma_srq_size": 0, 00:04:40.660 "io_path_stat": false, 00:04:40.660 "allow_accel_sequence": false, 00:04:40.660 "rdma_max_cq_size": 0, 00:04:40.660 "rdma_cm_event_timeout_ms": 0, 00:04:40.660 "dhchap_digests": [ 00:04:40.660 "sha256", 00:04:40.660 "sha384", 00:04:40.660 "sha512" 00:04:40.660 ], 00:04:40.660 "dhchap_dhgroups": [ 00:04:40.660 "null", 00:04:40.660 "ffdhe2048", 00:04:40.660 "ffdhe3072", 00:04:40.660 "ffdhe4096", 00:04:40.660 "ffdhe6144", 00:04:40.660 "ffdhe8192" 00:04:40.660 ], 00:04:40.660 "rdma_umr_per_io": false 00:04:40.660 } 00:04:40.660 }, 00:04:40.660 { 00:04:40.660 "method": "bdev_nvme_set_hotplug", 00:04:40.660 "params": { 00:04:40.660 "period_us": 100000, 00:04:40.660 "enable": false 00:04:40.660 } 00:04:40.660 }, 00:04:40.660 { 00:04:40.660 "method": "bdev_wait_for_examine" 00:04:40.660 } 00:04:40.660 ] 00:04:40.660 }, 00:04:40.660 { 00:04:40.660 "subsystem": "scsi", 00:04:40.660 "config": null 00:04:40.660 }, 00:04:40.660 { 00:04:40.660 "subsystem": "scheduler", 00:04:40.660 "config": [ 00:04:40.660 { 00:04:40.660 "method": "framework_set_scheduler", 00:04:40.660 "params": { 00:04:40.660 "name": "static" 00:04:40.660 } 00:04:40.660 } 00:04:40.660 ] 00:04:40.660 }, 00:04:40.660 { 00:04:40.660 "subsystem": "vhost_scsi", 00:04:40.660 "config": [] 00:04:40.660 }, 00:04:40.660 { 00:04:40.660 "subsystem": "vhost_blk", 00:04:40.660 "config": [] 00:04:40.660 }, 00:04:40.660 { 00:04:40.660 "subsystem": "ublk", 00:04:40.660 "config": [] 00:04:40.660 }, 00:04:40.660 { 00:04:40.660 "subsystem": "nbd", 00:04:40.660 "config": [] 00:04:40.660 }, 00:04:40.660 { 00:04:40.660 "subsystem": "nvmf", 00:04:40.660 "config": [ 00:04:40.660 { 00:04:40.660 "method": "nvmf_set_config", 00:04:40.660 "params": { 00:04:40.660 "discovery_filter": "match_any", 00:04:40.660 "admin_cmd_passthru": { 00:04:40.660 "identify_ctrlr": false 00:04:40.660 }, 00:04:40.660 "dhchap_digests": [ 00:04:40.660 "sha256", 00:04:40.660 "sha384", 00:04:40.660 "sha512" 00:04:40.660 ], 00:04:40.660 "dhchap_dhgroups": [ 00:04:40.660 "null", 00:04:40.660 "ffdhe2048", 00:04:40.660 "ffdhe3072", 00:04:40.660 "ffdhe4096", 00:04:40.660 "ffdhe6144", 00:04:40.660 "ffdhe8192" 00:04:40.660 ] 00:04:40.660 } 00:04:40.660 }, 00:04:40.660 { 00:04:40.660 "method": "nvmf_set_max_subsystems", 00:04:40.660 "params": { 00:04:40.660 "max_subsystems": 1024 00:04:40.660 } 00:04:40.660 }, 00:04:40.660 { 00:04:40.660 "method": "nvmf_set_crdt", 00:04:40.660 "params": { 00:04:40.660 "crdt1": 0, 00:04:40.660 "crdt2": 0, 00:04:40.660 "crdt3": 0 00:04:40.660 } 00:04:40.660 }, 00:04:40.660 { 00:04:40.660 "method": "nvmf_create_transport", 00:04:40.660 "params": { 00:04:40.660 "trtype": "TCP", 00:04:40.660 "max_queue_depth": 128, 00:04:40.660 "max_io_qpairs_per_ctrlr": 127, 00:04:40.660 "in_capsule_data_size": 4096, 00:04:40.660 "max_io_size": 131072, 00:04:40.660 "io_unit_size": 131072, 00:04:40.660 "max_aq_depth": 128, 00:04:40.660 "num_shared_buffers": 511, 00:04:40.660 "buf_cache_size": 4294967295, 00:04:40.660 "dif_insert_or_strip": false, 00:04:40.660 "zcopy": false, 00:04:40.660 "c2h_success": true, 00:04:40.660 "sock_priority": 0, 00:04:40.660 "abort_timeout_sec": 1, 00:04:40.660 "ack_timeout": 0, 00:04:40.660 "data_wr_pool_size": 0 00:04:40.660 } 00:04:40.660 } 00:04:40.660 ] 00:04:40.660 }, 00:04:40.660 { 00:04:40.660 "subsystem": "iscsi", 00:04:40.661 "config": [ 00:04:40.661 { 00:04:40.661 "method": "iscsi_set_options", 00:04:40.661 "params": { 00:04:40.661 "node_base": 
"iqn.2016-06.io.spdk", 00:04:40.661 "max_sessions": 128, 00:04:40.661 "max_connections_per_session": 2, 00:04:40.661 "max_queue_depth": 64, 00:04:40.661 "default_time2wait": 2, 00:04:40.661 "default_time2retain": 20, 00:04:40.661 "first_burst_length": 8192, 00:04:40.661 "immediate_data": true, 00:04:40.661 "allow_duplicated_isid": false, 00:04:40.661 "error_recovery_level": 0, 00:04:40.661 "nop_timeout": 60, 00:04:40.661 "nop_in_interval": 30, 00:04:40.661 "disable_chap": false, 00:04:40.661 "require_chap": false, 00:04:40.661 "mutual_chap": false, 00:04:40.661 "chap_group": 0, 00:04:40.661 "max_large_datain_per_connection": 64, 00:04:40.661 "max_r2t_per_connection": 4, 00:04:40.661 "pdu_pool_size": 36864, 00:04:40.661 "immediate_data_pool_size": 16384, 00:04:40.661 "data_out_pool_size": 2048 00:04:40.661 } 00:04:40.661 } 00:04:40.661 ] 00:04:40.661 } 00:04:40.661 ] 00:04:40.661 } 00:04:40.661 05:20:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:40.661 05:20:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 105813 00:04:40.661 05:20:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 105813 ']' 00:04:40.661 05:20:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 105813 00:04:40.661 05:20:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:40.661 05:20:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:40.661 05:20:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 105813 00:04:40.661 05:20:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:40.661 05:20:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:40.661 05:20:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 105813' 00:04:40.661 killing process with pid 105813 00:04:40.661 05:20:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 105813 00:04:40.661 05:20:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 105813 00:04:40.919 05:20:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=105921 00:04:40.919 05:20:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:40.919 05:20:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:46.191 05:20:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 105921 00:04:46.191 05:20:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 105921 ']' 00:04:46.191 05:20:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 105921 00:04:46.191 05:20:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:46.191 05:20:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:46.191 05:20:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 105921 00:04:46.191 05:20:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:46.191 05:20:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:46.191 05:20:45 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 105921' 00:04:46.191 killing process with pid 105921 00:04:46.191 05:20:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 105921 00:04:46.191 05:20:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 105921 00:04:46.450 05:20:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:46.450 05:20:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:46.450 00:04:46.450 real 0m6.246s 00:04:46.450 user 0m5.930s 00:04:46.450 sys 0m0.620s 00:04:46.450 05:20:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:46.451 05:20:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:46.451 ************************************ 00:04:46.451 END TEST skip_rpc_with_json 00:04:46.451 ************************************ 00:04:46.451 05:20:46 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:46.451 05:20:46 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:46.451 05:20:46 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:46.451 05:20:46 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:46.451 ************************************ 00:04:46.451 START TEST skip_rpc_with_delay 00:04:46.451 ************************************ 00:04:46.451 05:20:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:04:46.451 05:20:46 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:46.451 05:20:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:04:46.451 05:20:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:46.451 05:20:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:46.451 05:20:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:46.451 05:20:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:46.451 05:20:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:46.451 05:20:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:46.451 05:20:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:46.451 05:20:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:46.451 05:20:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:46.451 05:20:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:46.451 [2024-12-13 05:20:46.399747] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:04:46.451 05:20:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:04:46.451 05:20:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:46.451 05:20:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:46.451 05:20:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:46.451 00:04:46.451 real 0m0.067s 00:04:46.451 user 0m0.042s 00:04:46.451 sys 0m0.024s 00:04:46.451 05:20:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:46.451 05:20:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:46.451 ************************************ 00:04:46.451 END TEST skip_rpc_with_delay 00:04:46.451 ************************************ 00:04:46.451 05:20:46 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:46.451 05:20:46 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:46.451 05:20:46 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:46.451 05:20:46 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:46.451 05:20:46 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:46.451 05:20:46 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:46.710 ************************************ 00:04:46.710 START TEST exit_on_failed_rpc_init 00:04:46.710 ************************************ 00:04:46.710 05:20:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:04:46.710 05:20:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=106960 00:04:46.710 05:20:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 106960 00:04:46.710 05:20:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:46.710 05:20:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 106960 ']' 00:04:46.710 05:20:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:46.710 05:20:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:46.710 05:20:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:46.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:46.710 05:20:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:46.710 05:20:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:46.710 [2024-12-13 05:20:46.541603] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
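The skip_rpc_with_delay failure logged above is the point of that test: --wait-for-rpc makes no sense when no RPC server will be started, so spdk_tgt must refuse to run. A hedged sketch of the same assertion (binary path assumed as before):

  # The contradictory flag pair must be rejected with a non-zero exit.
  if "$SPDK_BIN/spdk_tgt" --no-rpc-server -m 0x1 --wait-for-rpc; then
      echo "FAIL: contradictory flags were accepted" >&2
      exit 1
  fi
  echo "OK: --wait-for-rpc rejected without an RPC server"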
00:04:46.710 [2024-12-13 05:20:46.541646] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106960 ] 00:04:46.710 [2024-12-13 05:20:46.616341] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:46.710 [2024-12-13 05:20:46.639324] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.969 05:20:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:46.969 05:20:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:04:46.969 05:20:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:46.969 05:20:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:46.969 05:20:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:04:46.969 05:20:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:46.969 05:20:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:46.969 05:20:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:46.969 05:20:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:46.969 05:20:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:46.969 05:20:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:46.969 05:20:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:46.969 05:20:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:46.969 05:20:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:46.969 05:20:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:46.969 [2024-12-13 05:20:46.892561] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:04:46.969 [2024-12-13 05:20:46.892606] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106994 ] 00:04:46.970 [2024-12-13 05:20:46.964007] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:47.229 [2024-12-13 05:20:46.986268] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:04:47.229 [2024-12-13 05:20:46.986320] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:04:47.229 [2024-12-13 05:20:46.986329] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:47.229 [2024-12-13 05:20:46.986335] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:47.229 05:20:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:04:47.229 05:20:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:47.229 05:20:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:04:47.229 05:20:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:04:47.229 05:20:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:04:47.229 05:20:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:47.229 05:20:47 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:47.229 05:20:47 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 106960 00:04:47.229 05:20:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 106960 ']' 00:04:47.229 05:20:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 106960 00:04:47.229 05:20:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:04:47.229 05:20:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:47.229 05:20:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 106960 00:04:47.229 05:20:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:47.229 05:20:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:47.229 05:20:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 106960' 00:04:47.229 killing process with pid 106960 00:04:47.229 05:20:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 106960 00:04:47.229 05:20:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 106960 00:04:47.488 00:04:47.488 real 0m0.877s 00:04:47.488 user 0m0.908s 00:04:47.488 sys 0m0.391s 00:04:47.488 05:20:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:47.488 05:20:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:47.488 ************************************ 00:04:47.488 END TEST exit_on_failed_rpc_init 00:04:47.488 ************************************ 00:04:47.488 05:20:47 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:47.488 00:04:47.488 real 0m13.012s 00:04:47.488 user 0m12.215s 00:04:47.488 sys 0m1.590s 00:04:47.488 05:20:47 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:47.488 05:20:47 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:47.488 ************************************ 00:04:47.488 END TEST skip_rpc 00:04:47.488 ************************************ 00:04:47.489 05:20:47 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:47.489 05:20:47 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:47.489 05:20:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:47.489 05:20:47 -- 
common/autotest_common.sh@10 -- # set +x 00:04:47.489 ************************************ 00:04:47.489 START TEST rpc_client 00:04:47.489 ************************************ 00:04:47.489 05:20:47 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:47.748 * Looking for test storage... 00:04:47.748 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:47.748 05:20:47 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:47.748 05:20:47 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:04:47.749 05:20:47 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:47.749 05:20:47 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:47.749 05:20:47 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:47.749 05:20:47 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:47.749 05:20:47 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:47.749 05:20:47 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:47.749 05:20:47 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:47.749 05:20:47 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:47.749 05:20:47 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:47.749 05:20:47 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:47.749 05:20:47 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:47.749 05:20:47 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:47.749 05:20:47 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:47.749 05:20:47 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:47.749 05:20:47 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:47.749 05:20:47 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:47.749 05:20:47 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:47.749 05:20:47 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:47.749 05:20:47 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:47.749 05:20:47 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:47.749 05:20:47 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:47.749 05:20:47 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:47.749 05:20:47 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:47.749 05:20:47 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:47.749 05:20:47 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:47.749 05:20:47 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:47.749 05:20:47 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:47.749 05:20:47 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:47.749 05:20:47 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:47.749 05:20:47 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:47.749 05:20:47 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:47.749 05:20:47 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:47.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.749 --rc genhtml_branch_coverage=1 00:04:47.749 --rc genhtml_function_coverage=1 00:04:47.749 --rc genhtml_legend=1 00:04:47.749 --rc geninfo_all_blocks=1 00:04:47.749 --rc geninfo_unexecuted_blocks=1 00:04:47.749 00:04:47.749 ' 00:04:47.749 05:20:47 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:47.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.749 --rc genhtml_branch_coverage=1 00:04:47.749 --rc genhtml_function_coverage=1 00:04:47.749 --rc genhtml_legend=1 00:04:47.749 --rc geninfo_all_blocks=1 00:04:47.749 --rc geninfo_unexecuted_blocks=1 00:04:47.749 00:04:47.749 ' 00:04:47.749 05:20:47 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:47.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.749 --rc genhtml_branch_coverage=1 00:04:47.749 --rc genhtml_function_coverage=1 00:04:47.749 --rc genhtml_legend=1 00:04:47.749 --rc geninfo_all_blocks=1 00:04:47.749 --rc geninfo_unexecuted_blocks=1 00:04:47.749 00:04:47.749 ' 00:04:47.749 05:20:47 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:47.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.749 --rc genhtml_branch_coverage=1 00:04:47.749 --rc genhtml_function_coverage=1 00:04:47.749 --rc genhtml_legend=1 00:04:47.749 --rc geninfo_all_blocks=1 00:04:47.749 --rc geninfo_unexecuted_blocks=1 00:04:47.749 00:04:47.749 ' 00:04:47.749 05:20:47 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:47.749 OK 00:04:47.749 05:20:47 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:47.749 00:04:47.749 real 0m0.200s 00:04:47.749 user 0m0.120s 00:04:47.749 sys 0m0.093s 00:04:47.749 05:20:47 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:47.749 05:20:47 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:47.749 ************************************ 00:04:47.749 END TEST rpc_client 00:04:47.749 ************************************ 00:04:47.749 05:20:47 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 
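The lcov check above leans on scripts/common.sh's component-wise version comparison (lt calling cmp_versions). A rough standalone equivalent of that idea, simplified in that it splits only on '.', whereas the real helper also splits on '-' and ':':

  # Return 0 when version $1 sorts strictly before version $2.
  version_lt() {
      local IFS=.
      local -a a=($1) b=($2)
      local i x y
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          x=${a[i]:-0}; y=${b[i]:-0}
          (( x < y )) && return 0
          (( x > y )) && return 1
      done
      return 1   # equal is not less-than
  }
  version_lt 1.15 2 && echo "1.15 < 2"   # matches the log's 'lt 1.15 2'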
00:04:47.749 05:20:47 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:47.749 05:20:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:47.749 05:20:47 -- common/autotest_common.sh@10 -- # set +x 00:04:47.749 ************************************ 00:04:47.749 START TEST json_config 00:04:47.749 ************************************ 00:04:47.749 05:20:47 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:48.009 05:20:47 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:48.009 05:20:47 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:04:48.009 05:20:47 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:48.009 05:20:47 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:48.009 05:20:47 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:48.009 05:20:47 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:48.009 05:20:47 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:48.009 05:20:47 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:48.009 05:20:47 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:48.009 05:20:47 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:48.009 05:20:47 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:48.009 05:20:47 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:48.009 05:20:47 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:48.009 05:20:47 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:48.009 05:20:47 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:48.009 05:20:47 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:48.009 05:20:47 json_config -- scripts/common.sh@345 -- # : 1 00:04:48.009 05:20:47 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:48.009 05:20:47 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:48.009 05:20:47 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:48.009 05:20:47 json_config -- scripts/common.sh@353 -- # local d=1 00:04:48.009 05:20:47 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:48.009 05:20:47 json_config -- scripts/common.sh@355 -- # echo 1 00:04:48.009 05:20:47 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:48.009 05:20:47 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:48.009 05:20:47 json_config -- scripts/common.sh@353 -- # local d=2 00:04:48.009 05:20:47 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:48.009 05:20:47 json_config -- scripts/common.sh@355 -- # echo 2 00:04:48.009 05:20:47 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:48.009 05:20:47 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:48.009 05:20:47 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:48.009 05:20:47 json_config -- scripts/common.sh@368 -- # return 0 00:04:48.009 05:20:47 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:48.009 05:20:47 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:48.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.009 --rc genhtml_branch_coverage=1 00:04:48.009 --rc genhtml_function_coverage=1 00:04:48.009 --rc genhtml_legend=1 00:04:48.009 --rc geninfo_all_blocks=1 00:04:48.009 --rc geninfo_unexecuted_blocks=1 00:04:48.009 00:04:48.009 ' 00:04:48.009 05:20:47 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:48.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.009 --rc genhtml_branch_coverage=1 00:04:48.009 --rc genhtml_function_coverage=1 00:04:48.009 --rc genhtml_legend=1 00:04:48.009 --rc geninfo_all_blocks=1 00:04:48.009 --rc geninfo_unexecuted_blocks=1 00:04:48.009 00:04:48.009 ' 00:04:48.009 05:20:47 json_config -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:48.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.009 --rc genhtml_branch_coverage=1 00:04:48.009 --rc genhtml_function_coverage=1 00:04:48.009 --rc genhtml_legend=1 00:04:48.009 --rc geninfo_all_blocks=1 00:04:48.009 --rc geninfo_unexecuted_blocks=1 00:04:48.009 00:04:48.009 ' 00:04:48.009 05:20:47 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:48.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.009 --rc genhtml_branch_coverage=1 00:04:48.009 --rc genhtml_function_coverage=1 00:04:48.009 --rc genhtml_legend=1 00:04:48.009 --rc geninfo_all_blocks=1 00:04:48.009 --rc geninfo_unexecuted_blocks=1 00:04:48.009 00:04:48.009 ' 00:04:48.009 05:20:47 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:48.009 05:20:47 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:48.009 05:20:47 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:48.009 05:20:47 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:48.009 05:20:47 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:48.009 05:20:47 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:48.009 05:20:47 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:48.009 05:20:47 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:48.009 05:20:47 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:04:48.009 05:20:47 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:48.009 05:20:47 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:48.009 05:20:47 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:48.009 05:20:47 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:04:48.009 05:20:47 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:04:48.009 05:20:47 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:48.009 05:20:47 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:48.009 05:20:47 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:48.009 05:20:47 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:48.009 05:20:47 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:48.009 05:20:47 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:48.009 05:20:47 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:48.009 05:20:47 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:48.009 05:20:47 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:48.009 05:20:47 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:48.009 05:20:47 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:48.009 05:20:47 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:48.010 05:20:47 json_config -- paths/export.sh@5 -- # export PATH 00:04:48.010 05:20:47 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:48.010 05:20:47 json_config -- nvmf/common.sh@51 -- # : 0 00:04:48.010 05:20:47 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:48.010 05:20:47 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 
00:04:48.010 05:20:47 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:48.010 05:20:47 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:48.010 05:20:47 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:48.010 05:20:47 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:48.010 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:48.010 05:20:47 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:48.010 05:20:47 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:48.010 05:20:47 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:48.010 05:20:47 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:48.010 05:20:47 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:48.010 05:20:47 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:48.010 05:20:47 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:48.010 05:20:47 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:48.010 05:20:47 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:48.010 05:20:47 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:48.010 05:20:47 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:48.010 05:20:47 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:48.010 05:20:47 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:48.010 05:20:47 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:48.010 05:20:47 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:48.010 05:20:47 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:48.010 05:20:47 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:48.010 05:20:47 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:48.010 05:20:47 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:04:48.010 INFO: JSON configuration test init 00:04:48.010 05:20:47 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:04:48.010 05:20:47 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:04:48.010 05:20:47 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:48.010 05:20:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:48.010 05:20:47 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:04:48.010 05:20:47 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:48.010 05:20:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:48.010 05:20:47 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:04:48.010 05:20:47 json_config -- 
json_config/common.sh@9 -- # local app=target 00:04:48.010 05:20:47 json_config -- json_config/common.sh@10 -- # shift 00:04:48.010 05:20:47 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:48.010 05:20:47 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:48.010 05:20:47 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:48.010 05:20:47 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:48.010 05:20:47 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:48.010 05:20:47 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=107342 00:04:48.010 05:20:47 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:48.010 Waiting for target to run... 00:04:48.010 05:20:47 json_config -- json_config/common.sh@25 -- # waitforlisten 107342 /var/tmp/spdk_tgt.sock 00:04:48.010 05:20:47 json_config -- common/autotest_common.sh@835 -- # '[' -z 107342 ']' 00:04:48.010 05:20:47 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:48.010 05:20:47 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:48.010 05:20:47 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:48.010 05:20:47 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:48.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:48.010 05:20:47 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:48.010 05:20:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:48.010 [2024-12-13 05:20:47.977184] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
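The json_config harness above keys each app by name, gives it a dedicated RPC socket, and waits for that socket to answer before driving it. A simplified version of the start/wait sequence (socket path and flags as logged; the polling loop stands in for the harness's waitforlisten helper):

  declare -A app_socket=([target]=/var/tmp/spdk_tgt.sock)

  "$SPDK_BIN/spdk_tgt" -m 0x1 -s 1024 -r "${app_socket[target]}" --wait-for-rpc &
  app_pid=$!

  # Poll the socket instead of sleeping a fixed time; rpc_get_methods
  # is answerable even while the target is parked in --wait-for-rpc.
  for _ in $(seq 1 100); do
      scripts/rpc.py -s "${app_socket[target]}" rpc_get_methods >/dev/null 2>&1 && break
      sleep 0.1
  done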
00:04:48.010 [2024-12-13 05:20:47.977229] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107342 ] 00:04:48.578 [2024-12-13 05:20:48.426409] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:48.578 [2024-12-13 05:20:48.447992] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.837 05:20:48 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:48.837 05:20:48 json_config -- common/autotest_common.sh@868 -- # return 0 00:04:48.837 05:20:48 json_config -- json_config/common.sh@26 -- # echo '' 00:04:48.837 00:04:48.837 05:20:48 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:04:48.837 05:20:48 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:04:48.837 05:20:48 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:48.837 05:20:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:48.837 05:20:48 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:04:48.837 05:20:48 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:04:48.837 05:20:48 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:48.837 05:20:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:48.837 05:20:48 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:48.837 05:20:48 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:04:48.837 05:20:48 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:52.126 05:20:51 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:04:52.126 05:20:51 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:52.126 05:20:51 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:52.126 05:20:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:52.126 05:20:51 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:52.126 05:20:51 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:52.126 05:20:51 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:52.126 05:20:51 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:04:52.126 05:20:51 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:04:52.126 05:20:51 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:04:52.126 05:20:51 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:04:52.126 05:20:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:52.126 05:20:52 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:04:52.126 05:20:52 json_config -- json_config/json_config.sh@51 -- # local get_types 00:04:52.126 05:20:52 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:04:52.126 05:20:52 json_config -- 
json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:04:52.126 05:20:52 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:04:52.126 05:20:52 json_config -- json_config/json_config.sh@54 -- # sort 00:04:52.126 05:20:52 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:04:52.126 05:20:52 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:04:52.126 05:20:52 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:04:52.126 05:20:52 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:04:52.126 05:20:52 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:52.127 05:20:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:52.385 05:20:52 json_config -- json_config/json_config.sh@62 -- # return 0 00:04:52.385 05:20:52 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:04:52.385 05:20:52 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:04:52.385 05:20:52 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:04:52.385 05:20:52 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:04:52.385 05:20:52 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:04:52.385 05:20:52 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:04:52.385 05:20:52 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:52.385 05:20:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:52.385 05:20:52 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:52.386 05:20:52 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:04:52.386 05:20:52 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:04:52.386 05:20:52 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:52.386 05:20:52 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:52.386 MallocForNvmf0 00:04:52.386 05:20:52 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:52.386 05:20:52 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:52.645 MallocForNvmf1 00:04:52.645 05:20:52 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:52.645 05:20:52 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:52.904 [2024-12-13 05:20:52.717430] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:52.904 05:20:52 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:52.904 05:20:52 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:53.163 05:20:52 json_config -- 
json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:53.163 05:20:52 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:53.163 05:20:53 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:53.163 05:20:53 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:53.423 05:20:53 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:53.423 05:20:53 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:53.682 [2024-12-13 05:20:53.519839] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:53.682 05:20:53 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:04:53.682 05:20:53 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:53.682 05:20:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:53.682 05:20:53 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:04:53.682 05:20:53 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:53.682 05:20:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:53.682 05:20:53 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:04:53.682 05:20:53 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:53.682 05:20:53 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:53.941 MallocBdevForConfigChangeCheck 00:04:53.941 05:20:53 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:04:53.941 05:20:53 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:53.941 05:20:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:53.941 05:20:53 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:04:53.941 05:20:53 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:54.200 05:20:54 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:04:54.200 INFO: shutting down applications... 
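The shutdown phase that follows clears every subsystem and then keeps filtering save_config output until nothing is left. Roughly, using the helper scripts exactly as they appear in the log (the retry loop and sleep interval are illustrative):

  # Clear the live configuration, then confirm it is actually empty.
  test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config
  for _ in $(seq 1 100); do
      if scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
           | test/json_config/config_filter.py -method delete_global_parameters \
           | test/json_config/config_filter.py -method check_empty; then
          break                 # config is empty; shutdown can proceed
      fi
      sleep 0.1
  done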
00:04:54.200 05:20:54 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:04:54.200 05:20:54 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:04:54.200 05:20:54 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:04:54.200 05:20:54 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:56.106 Calling clear_iscsi_subsystem 00:04:56.106 Calling clear_nvmf_subsystem 00:04:56.106 Calling clear_nbd_subsystem 00:04:56.106 Calling clear_ublk_subsystem 00:04:56.106 Calling clear_vhost_blk_subsystem 00:04:56.106 Calling clear_vhost_scsi_subsystem 00:04:56.106 Calling clear_bdev_subsystem 00:04:56.106 05:20:55 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:04:56.106 05:20:55 json_config -- json_config/json_config.sh@350 -- # count=100 00:04:56.106 05:20:55 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:04:56.106 05:20:55 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:56.106 05:20:55 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:56.106 05:20:55 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:56.365 05:20:56 json_config -- json_config/json_config.sh@352 -- # break 00:04:56.365 05:20:56 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:04:56.365 05:20:56 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:04:56.365 05:20:56 json_config -- json_config/common.sh@31 -- # local app=target 00:04:56.366 05:20:56 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:56.366 05:20:56 json_config -- json_config/common.sh@35 -- # [[ -n 107342 ]] 00:04:56.366 05:20:56 json_config -- json_config/common.sh@38 -- # kill -SIGINT 107342 00:04:56.366 05:20:56 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:56.366 05:20:56 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:56.366 05:20:56 json_config -- json_config/common.sh@41 -- # kill -0 107342 00:04:56.366 05:20:56 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:56.934 05:20:56 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:56.934 05:20:56 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:56.934 05:20:56 json_config -- json_config/common.sh@41 -- # kill -0 107342 00:04:56.934 05:20:56 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:56.934 05:20:56 json_config -- json_config/common.sh@43 -- # break 00:04:56.934 05:20:56 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:56.934 05:20:56 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:56.934 SPDK target shutdown done 00:04:56.934 05:20:56 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:04:56.934 INFO: relaunching applications... 
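The shutdown path traced above pairs a SIGINT with a bounded poll: kill -0 only tests whether the PID still exists, so the harness sleeps and retries up to 30 times before declaring the target gone. A minimal sketch of that pattern, using the PID from this run:

    pid=107342
    kill -SIGINT "$pid"                     # ask spdk_tgt to shut down cleanly
    for ((i = 0; i < 30; i++)); do
        if ! kill -0 "$pid" 2>/dev/null; then
            echo 'SPDK target shutdown done'
            break
        fi
        sleep 0.5                           # still alive; poll again
    done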
00:04:56.934 05:20:56 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:56.934 05:20:56 json_config -- json_config/common.sh@9 -- # local app=target 00:04:56.934 05:20:56 json_config -- json_config/common.sh@10 -- # shift 00:04:56.934 05:20:56 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:56.934 05:20:56 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:56.934 05:20:56 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:56.934 05:20:56 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:56.934 05:20:56 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:56.934 05:20:56 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=108829 00:04:56.934 05:20:56 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:56.934 Waiting for target to run... 00:04:56.934 05:20:56 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:56.934 05:20:56 json_config -- json_config/common.sh@25 -- # waitforlisten 108829 /var/tmp/spdk_tgt.sock 00:04:56.934 05:20:56 json_config -- common/autotest_common.sh@835 -- # '[' -z 108829 ']' 00:04:56.934 05:20:56 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:56.934 05:20:56 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:56.934 05:20:56 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:56.934 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:56.934 05:20:56 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:56.934 05:20:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:56.934 [2024-12-13 05:20:56.743692] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:04:56.934 [2024-12-13 05:20:56.743750] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108829 ] 00:04:57.194 [2024-12-13 05:20:57.035585] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:57.194 [2024-12-13 05:20:57.048997] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.484 [2024-12-13 05:21:00.059354] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:00.484 [2024-12-13 05:21:00.091619] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:00.484 05:21:00 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:00.484 05:21:00 json_config -- common/autotest_common.sh@868 -- # return 0 00:05:00.484 05:21:00 json_config -- json_config/common.sh@26 -- # echo '' 00:05:00.484 00:05:00.484 05:21:00 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:05:00.484 05:21:00 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:00.484 INFO: Checking if target configuration is the same... 
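The relaunch just traced reuses the configuration captured earlier with save_config: spdk_tgt --json <file> replays the stored RPCs at startup, so the target comes back with the same bdevs, transport, and subsystem. A condensed sketch, with the binary, flags, and config path taken from the command line above; the final wait loop is one plausible way to block until the RPC socket answers, not the harness's exact helper:

    TGT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
    CFG=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
    $TGT -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json "$CFG" &
    app_pid=$!
    # Poll until the app is up and serving RPCs on its UNIX-domain socket
    until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
            -t 1 -s /var/tmp/spdk_tgt.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done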
00:05:00.484 05:21:00 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:00.484 05:21:00 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:05:00.484 05:21:00 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:00.484 + '[' 2 -ne 2 ']' 00:05:00.484 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:00.484 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:00.484 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:00.484 +++ basename /dev/fd/62 00:05:00.484 ++ mktemp /tmp/62.XXX 00:05:00.484 + tmp_file_1=/tmp/62.fCy 00:05:00.484 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:00.484 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:00.484 + tmp_file_2=/tmp/spdk_tgt_config.json.3vT 00:05:00.484 + ret=0 00:05:00.484 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:00.484 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:00.743 + diff -u /tmp/62.fCy /tmp/spdk_tgt_config.json.3vT 00:05:00.743 + echo 'INFO: JSON config files are the same' 00:05:00.743 INFO: JSON config files are the same 00:05:00.743 + rm /tmp/62.fCy /tmp/spdk_tgt_config.json.3vT 00:05:00.743 + exit 0 00:05:00.743 05:21:00 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:05:00.743 05:21:00 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:00.743 INFO: changing configuration and checking if this can be detected... 00:05:00.743 05:21:00 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:00.743 05:21:00 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:00.743 05:21:00 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:05:00.743 05:21:00 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:00.743 05:21:00 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:00.743 + '[' 2 -ne 2 ']' 00:05:00.743 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:00.743 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:05:00.743 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:00.743 +++ basename /dev/fd/62 00:05:00.743 ++ mktemp /tmp/62.XXX 00:05:00.743 + tmp_file_1=/tmp/62.4T4 00:05:00.743 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:00.743 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:00.743 + tmp_file_2=/tmp/spdk_tgt_config.json.iTQ 00:05:00.743 + ret=0 00:05:00.743 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:01.312 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:01.312 + diff -u /tmp/62.4T4 /tmp/spdk_tgt_config.json.iTQ 00:05:01.312 + ret=1 00:05:01.312 + echo '=== Start of file: /tmp/62.4T4 ===' 00:05:01.312 + cat /tmp/62.4T4 00:05:01.312 + echo '=== End of file: /tmp/62.4T4 ===' 00:05:01.312 + echo '' 00:05:01.312 + echo '=== Start of file: /tmp/spdk_tgt_config.json.iTQ ===' 00:05:01.312 + cat /tmp/spdk_tgt_config.json.iTQ 00:05:01.312 + echo '=== End of file: /tmp/spdk_tgt_config.json.iTQ ===' 00:05:01.312 + echo '' 00:05:01.312 + rm /tmp/62.4T4 /tmp/spdk_tgt_config.json.iTQ 00:05:01.312 + exit 1 00:05:01.312 05:21:01 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:05:01.312 INFO: configuration change detected. 00:05:01.312 05:21:01 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:05:01.312 05:21:01 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:05:01.312 05:21:01 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:01.312 05:21:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:01.312 05:21:01 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:05:01.312 05:21:01 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:05:01.312 05:21:01 json_config -- json_config/json_config.sh@324 -- # [[ -n 108829 ]] 00:05:01.312 05:21:01 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:05:01.312 05:21:01 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:05:01.312 05:21:01 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:01.312 05:21:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:01.312 05:21:01 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:05:01.312 05:21:01 json_config -- json_config/json_config.sh@200 -- # uname -s 00:05:01.312 05:21:01 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:05:01.312 05:21:01 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:05:01.312 05:21:01 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:05:01.312 05:21:01 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:05:01.312 05:21:01 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:01.312 05:21:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:01.312 05:21:01 json_config -- json_config/json_config.sh@330 -- # killprocess 108829 00:05:01.312 05:21:01 json_config -- common/autotest_common.sh@954 -- # '[' -z 108829 ']' 00:05:01.312 05:21:01 json_config -- common/autotest_common.sh@958 -- # kill -0 108829 00:05:01.312 05:21:01 json_config -- common/autotest_common.sh@959 -- # uname 00:05:01.312 05:21:01 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:01.312 05:21:01 
json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 108829 00:05:01.312 05:21:01 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:01.312 05:21:01 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:01.312 05:21:01 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 108829' 00:05:01.312 killing process with pid 108829 00:05:01.312 05:21:01 json_config -- common/autotest_common.sh@973 -- # kill 108829 00:05:01.312 05:21:01 json_config -- common/autotest_common.sh@978 -- # wait 108829 00:05:03.221 05:21:02 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:03.221 05:21:02 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:05:03.221 05:21:02 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:03.221 05:21:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:03.221 05:21:02 json_config -- json_config/json_config.sh@335 -- # return 0 00:05:03.221 05:21:02 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:05:03.221 INFO: Success 00:05:03.221 00:05:03.221 real 0m15.034s 00:05:03.221 user 0m16.120s 00:05:03.221 sys 0m1.931s 00:05:03.221 05:21:02 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:03.221 05:21:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:03.221 ************************************ 00:05:03.221 END TEST json_config 00:05:03.221 ************************************ 00:05:03.221 05:21:02 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:03.221 05:21:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:03.221 05:21:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:03.221 05:21:02 -- common/autotest_common.sh@10 -- # set +x 00:05:03.221 ************************************ 00:05:03.221 START TEST json_config_extra_key 00:05:03.221 ************************************ 00:05:03.221 05:21:02 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:03.221 05:21:02 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:03.221 05:21:02 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:05:03.221 05:21:02 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:03.221 05:21:02 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:03.221 05:21:02 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:03.221 05:21:02 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:03.221 05:21:02 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:03.221 05:21:02 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:03.221 05:21:02 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:03.221 05:21:02 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:03.221 05:21:02 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:03.221 05:21:02 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:03.221 05:21:02 json_config_extra_key -- 
scripts/common.sh@340 -- # ver1_l=2 00:05:03.221 05:21:02 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:03.221 05:21:02 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:03.221 05:21:02 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:03.221 05:21:02 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:03.221 05:21:02 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:03.221 05:21:02 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:03.221 05:21:02 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:03.221 05:21:02 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:03.221 05:21:02 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:03.221 05:21:02 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:03.221 05:21:02 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:03.221 05:21:02 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:03.221 05:21:02 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:03.221 05:21:02 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:03.221 05:21:02 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:03.221 05:21:02 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:03.221 05:21:02 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:03.221 05:21:02 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:03.221 05:21:02 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:03.221 05:21:02 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:03.221 05:21:02 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:03.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.221 --rc genhtml_branch_coverage=1 00:05:03.221 --rc genhtml_function_coverage=1 00:05:03.221 --rc genhtml_legend=1 00:05:03.221 --rc geninfo_all_blocks=1 00:05:03.221 --rc geninfo_unexecuted_blocks=1 00:05:03.221 00:05:03.221 ' 00:05:03.221 05:21:02 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:03.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.221 --rc genhtml_branch_coverage=1 00:05:03.221 --rc genhtml_function_coverage=1 00:05:03.221 --rc genhtml_legend=1 00:05:03.221 --rc geninfo_all_blocks=1 00:05:03.221 --rc geninfo_unexecuted_blocks=1 00:05:03.221 00:05:03.221 ' 00:05:03.221 05:21:02 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:03.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.221 --rc genhtml_branch_coverage=1 00:05:03.221 --rc genhtml_function_coverage=1 00:05:03.221 --rc genhtml_legend=1 00:05:03.221 --rc geninfo_all_blocks=1 00:05:03.221 --rc geninfo_unexecuted_blocks=1 00:05:03.221 00:05:03.221 ' 00:05:03.221 05:21:02 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:03.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.221 --rc genhtml_branch_coverage=1 00:05:03.221 --rc genhtml_function_coverage=1 00:05:03.221 --rc genhtml_legend=1 00:05:03.221 --rc geninfo_all_blocks=1 00:05:03.221 --rc geninfo_unexecuted_blocks=1 00:05:03.221 00:05:03.221 ' 00:05:03.221 05:21:02 json_config_extra_key -- 
json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:03.221 05:21:02 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:03.221 05:21:02 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:03.221 05:21:02 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:03.221 05:21:02 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:03.221 05:21:02 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:03.221 05:21:02 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:03.221 05:21:02 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:03.221 05:21:02 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:03.221 05:21:02 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:03.221 05:21:02 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:03.221 05:21:02 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:03.221 05:21:03 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:05:03.221 05:21:03 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:05:03.221 05:21:03 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:03.221 05:21:03 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:03.221 05:21:03 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:03.221 05:21:03 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:03.221 05:21:03 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:03.221 05:21:03 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:03.221 05:21:03 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:03.221 05:21:03 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:03.221 05:21:03 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:03.221 05:21:03 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:03.221 05:21:03 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:03.221 05:21:03 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:03.221 05:21:03 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:03.221 05:21:03 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:03.221 05:21:03 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:03.221 05:21:03 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:03.221 05:21:03 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:03.221 05:21:03 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:03.221 05:21:03 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:03.221 05:21:03 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:03.221 05:21:03 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:03.222 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:03.222 05:21:03 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:03.222 05:21:03 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:03.222 05:21:03 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:03.222 05:21:03 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:03.222 05:21:03 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:03.222 05:21:03 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:03.222 05:21:03 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:03.222 05:21:03 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:03.222 05:21:03 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:03.222 05:21:03 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:03.222 05:21:03 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:03.222 05:21:03 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:03.222 05:21:03 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:03.222 05:21:03 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:03.222 INFO: launching applications... 
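Two details in this stretch are worth noting. The "integer expression expected" message comes from '[' '' -eq 1 ']': test's -eq demands integer operands, and the variable under test expands to an empty string on this run. Separately, json_config_extra_key.sh installs an ERR trap so that any failing command reports where it happened before the suite aborts; a minimal version of that pattern (the handler body here is illustrative, not the suite's on_error_exit):

    on_error_exit() {
        echo "error in ${1:-?} at line ${2:-?}" >&2
        exit 1
    }
    trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR
    # From here on, any command that fails invokes the handler, e.g.:
    false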
00:05:03.222 05:21:03 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:03.222 05:21:03 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:03.222 05:21:03 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:03.222 05:21:03 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:03.222 05:21:03 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:03.222 05:21:03 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:03.222 05:21:03 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:03.222 05:21:03 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:03.222 05:21:03 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=110192 00:05:03.222 05:21:03 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:03.222 Waiting for target to run... 00:05:03.222 05:21:03 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 110192 /var/tmp/spdk_tgt.sock 00:05:03.222 05:21:03 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 110192 ']' 00:05:03.222 05:21:03 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:03.222 05:21:03 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:03.222 05:21:03 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:03.222 05:21:03 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:03.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:03.222 05:21:03 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:03.222 05:21:03 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:03.222 [2024-12-13 05:21:03.080308] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:05:03.222 [2024-12-13 05:21:03.080356] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110192 ] 00:05:03.482 [2024-12-13 05:21:03.363715] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:03.482 [2024-12-13 05:21:03.376536] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.050 05:21:03 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:04.050 05:21:03 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:05:04.050 05:21:03 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:04.050 00:05:04.050 05:21:03 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:04.050 INFO: shutting down applications... 
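waitforlisten, seen above between the app launch and the first RPC, is the inverse of the shutdown poll: it retries an RPC against the new target's UNIX-domain socket until the process answers, bailing out early if the PID dies, for up to max_retries attempts (100 in this run). A rough sketch of the idea; the real helper lives in common/autotest_common.sh and this simplification omits its option handling, and $rootdir is assumed to point at the SPDK checkout:

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk_tgt.sock} i
        local max_retries=100
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1            # app died during startup
            if "$rootdir/scripts/rpc.py" -t 1 -s "$rpc_addr" rpc_get_methods &>/dev/null; then
                return 0                                      # socket is live
            fi
            sleep 0.1
        done
        return 1
    }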
00:05:04.050 05:21:03 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:04.050 05:21:03 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:04.050 05:21:03 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:04.050 05:21:03 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 110192 ]] 00:05:04.050 05:21:03 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 110192 00:05:04.050 05:21:03 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:04.050 05:21:03 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:04.050 05:21:03 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 110192 00:05:04.050 05:21:03 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:04.620 05:21:04 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:04.620 05:21:04 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:04.620 05:21:04 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 110192 00:05:04.620 05:21:04 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:04.620 05:21:04 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:04.620 05:21:04 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:04.620 05:21:04 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:04.620 SPDK target shutdown done 00:05:04.620 05:21:04 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:04.620 Success 00:05:04.620 00:05:04.620 real 0m1.565s 00:05:04.620 user 0m1.330s 00:05:04.620 sys 0m0.403s 00:05:04.620 05:21:04 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:04.620 05:21:04 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:04.620 ************************************ 00:05:04.620 END TEST json_config_extra_key 00:05:04.620 ************************************ 00:05:04.620 05:21:04 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:04.620 05:21:04 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:04.620 05:21:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:04.620 05:21:04 -- common/autotest_common.sh@10 -- # set +x 00:05:04.620 ************************************ 00:05:04.620 START TEST alias_rpc 00:05:04.620 ************************************ 00:05:04.620 05:21:04 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:04.620 * Looking for test storage... 
00:05:04.620 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:04.620 05:21:04 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:04.620 05:21:04 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:05:04.620 05:21:04 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:04.620 05:21:04 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:04.880 05:21:04 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:04.880 05:21:04 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:04.880 05:21:04 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:04.880 05:21:04 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:04.880 05:21:04 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:04.880 05:21:04 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:04.880 05:21:04 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:04.880 05:21:04 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:04.880 05:21:04 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:04.880 05:21:04 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:04.880 05:21:04 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:04.880 05:21:04 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:04.880 05:21:04 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:04.880 05:21:04 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:04.880 05:21:04 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:04.880 05:21:04 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:04.880 05:21:04 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:04.880 05:21:04 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:04.880 05:21:04 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:04.880 05:21:04 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:04.880 05:21:04 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:04.880 05:21:04 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:04.880 05:21:04 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:04.880 05:21:04 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:04.880 05:21:04 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:04.880 05:21:04 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:04.880 05:21:04 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:04.880 05:21:04 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:04.880 05:21:04 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:04.880 05:21:04 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:04.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.880 --rc genhtml_branch_coverage=1 00:05:04.880 --rc genhtml_function_coverage=1 00:05:04.880 --rc genhtml_legend=1 00:05:04.880 --rc geninfo_all_blocks=1 00:05:04.880 --rc geninfo_unexecuted_blocks=1 00:05:04.880 00:05:04.880 ' 00:05:04.880 05:21:04 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:04.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.880 --rc genhtml_branch_coverage=1 00:05:04.880 --rc genhtml_function_coverage=1 00:05:04.880 --rc genhtml_legend=1 00:05:04.880 --rc geninfo_all_blocks=1 00:05:04.880 --rc geninfo_unexecuted_blocks=1 00:05:04.880 00:05:04.880 ' 00:05:04.880 05:21:04 
alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:04.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.880 --rc genhtml_branch_coverage=1 00:05:04.880 --rc genhtml_function_coverage=1 00:05:04.880 --rc genhtml_legend=1 00:05:04.880 --rc geninfo_all_blocks=1 00:05:04.880 --rc geninfo_unexecuted_blocks=1 00:05:04.880 00:05:04.880 ' 00:05:04.880 05:21:04 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:04.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.880 --rc genhtml_branch_coverage=1 00:05:04.880 --rc genhtml_function_coverage=1 00:05:04.880 --rc genhtml_legend=1 00:05:04.880 --rc geninfo_all_blocks=1 00:05:04.880 --rc geninfo_unexecuted_blocks=1 00:05:04.880 00:05:04.880 ' 00:05:04.880 05:21:04 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:04.880 05:21:04 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=110478 00:05:04.880 05:21:04 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 110478 00:05:04.880 05:21:04 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:04.880 05:21:04 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 110478 ']' 00:05:04.880 05:21:04 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:04.880 05:21:04 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:04.880 05:21:04 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:04.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:04.880 05:21:04 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:04.880 05:21:04 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:04.880 [2024-12-13 05:21:04.705028] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:05:04.880 [2024-12-13 05:21:04.705075] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110478 ] 00:05:04.880 [2024-12-13 05:21:04.778322] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:04.880 [2024-12-13 05:21:04.800486] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.140 05:21:05 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:05.140 05:21:05 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:05.140 05:21:05 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:05.399 05:21:05 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 110478 00:05:05.399 05:21:05 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 110478 ']' 00:05:05.399 05:21:05 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 110478 00:05:05.399 05:21:05 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:05:05.399 05:21:05 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:05.399 05:21:05 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 110478 00:05:05.399 05:21:05 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:05.399 05:21:05 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:05.399 05:21:05 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 110478' 00:05:05.399 killing process with pid 110478 00:05:05.399 05:21:05 alias_rpc -- common/autotest_common.sh@973 -- # kill 110478 00:05:05.399 05:21:05 alias_rpc -- common/autotest_common.sh@978 -- # wait 110478 00:05:05.658 00:05:05.658 real 0m1.104s 00:05:05.658 user 0m1.119s 00:05:05.658 sys 0m0.423s 00:05:05.658 05:21:05 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:05.658 05:21:05 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:05.658 ************************************ 00:05:05.658 END TEST alias_rpc 00:05:05.658 ************************************ 00:05:05.658 05:21:05 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:05.658 05:21:05 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:05.658 05:21:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:05.658 05:21:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:05.658 05:21:05 -- common/autotest_common.sh@10 -- # set +x 00:05:05.658 ************************************ 00:05:05.658 START TEST spdkcli_tcp 00:05:05.658 ************************************ 00:05:05.658 05:21:05 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:05.918 * Looking for test storage... 
00:05:05.918 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:05.918 05:21:05 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:05.918 05:21:05 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:05:05.918 05:21:05 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:05.918 05:21:05 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:05.918 05:21:05 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:05.918 05:21:05 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:05.918 05:21:05 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:05.918 05:21:05 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:05.918 05:21:05 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:05.918 05:21:05 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:05.918 05:21:05 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:05.918 05:21:05 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:05.918 05:21:05 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:05.918 05:21:05 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:05.918 05:21:05 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:05.918 05:21:05 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:05.918 05:21:05 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:05.918 05:21:05 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:05.918 05:21:05 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:05.918 05:21:05 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:05.918 05:21:05 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:05.918 05:21:05 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:05.918 05:21:05 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:05.918 05:21:05 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:05.918 05:21:05 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:05.918 05:21:05 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:05.918 05:21:05 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:05.918 05:21:05 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:05.918 05:21:05 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:05.918 05:21:05 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:05.918 05:21:05 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:05.918 05:21:05 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:05.918 05:21:05 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:05.918 05:21:05 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:05.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.918 --rc genhtml_branch_coverage=1 00:05:05.918 --rc genhtml_function_coverage=1 00:05:05.918 --rc genhtml_legend=1 00:05:05.918 --rc geninfo_all_blocks=1 00:05:05.918 --rc geninfo_unexecuted_blocks=1 00:05:05.918 00:05:05.918 ' 00:05:05.918 05:21:05 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:05.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.918 --rc genhtml_branch_coverage=1 00:05:05.918 --rc genhtml_function_coverage=1 00:05:05.918 --rc genhtml_legend=1 00:05:05.918 --rc geninfo_all_blocks=1 00:05:05.918 --rc 
geninfo_unexecuted_blocks=1 00:05:05.918 00:05:05.918 ' 00:05:05.918 05:21:05 spdkcli_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:05.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.918 --rc genhtml_branch_coverage=1 00:05:05.918 --rc genhtml_function_coverage=1 00:05:05.918 --rc genhtml_legend=1 00:05:05.918 --rc geninfo_all_blocks=1 00:05:05.918 --rc geninfo_unexecuted_blocks=1 00:05:05.918 00:05:05.918 ' 00:05:05.918 05:21:05 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:05.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.918 --rc genhtml_branch_coverage=1 00:05:05.918 --rc genhtml_function_coverage=1 00:05:05.918 --rc genhtml_legend=1 00:05:05.918 --rc geninfo_all_blocks=1 00:05:05.918 --rc geninfo_unexecuted_blocks=1 00:05:05.918 00:05:05.918 ' 00:05:05.918 05:21:05 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:05.918 05:21:05 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:05.918 05:21:05 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:05.918 05:21:05 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:05.918 05:21:05 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:05.918 05:21:05 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:05.918 05:21:05 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:05.918 05:21:05 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:05.918 05:21:05 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:05.918 05:21:05 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=110761 00:05:05.918 05:21:05 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 110761 00:05:05.918 05:21:05 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:05.918 05:21:05 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 110761 ']' 00:05:05.918 05:21:05 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:05.918 05:21:05 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:05.918 05:21:05 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:05.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:05.918 05:21:05 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:05.918 05:21:05 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:05.918 [2024-12-13 05:21:05.883247] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:05:05.918 [2024-12-13 05:21:05.883292] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110761 ] 00:05:06.178 [2024-12-13 05:21:05.958311] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:06.178 [2024-12-13 05:21:05.982556] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:06.178 [2024-12-13 05:21:05.982558] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.178 05:21:06 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:06.178 05:21:06 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:05:06.179 05:21:06 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=110765 00:05:06.179 05:21:06 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:06.179 05:21:06 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:06.439 [ 00:05:06.439 "bdev_malloc_delete", 00:05:06.439 "bdev_malloc_create", 00:05:06.439 "bdev_null_resize", 00:05:06.439 "bdev_null_delete", 00:05:06.439 "bdev_null_create", 00:05:06.439 "bdev_nvme_cuse_unregister", 00:05:06.439 "bdev_nvme_cuse_register", 00:05:06.439 "bdev_opal_new_user", 00:05:06.439 "bdev_opal_set_lock_state", 00:05:06.439 "bdev_opal_delete", 00:05:06.439 "bdev_opal_get_info", 00:05:06.439 "bdev_opal_create", 00:05:06.439 "bdev_nvme_opal_revert", 00:05:06.439 "bdev_nvme_opal_init", 00:05:06.439 "bdev_nvme_send_cmd", 00:05:06.439 "bdev_nvme_set_keys", 00:05:06.439 "bdev_nvme_get_path_iostat", 00:05:06.439 "bdev_nvme_get_mdns_discovery_info", 00:05:06.439 "bdev_nvme_stop_mdns_discovery", 00:05:06.439 "bdev_nvme_start_mdns_discovery", 00:05:06.439 "bdev_nvme_set_multipath_policy", 00:05:06.439 "bdev_nvme_set_preferred_path", 00:05:06.439 "bdev_nvme_get_io_paths", 00:05:06.439 "bdev_nvme_remove_error_injection", 00:05:06.439 "bdev_nvme_add_error_injection", 00:05:06.439 "bdev_nvme_get_discovery_info", 00:05:06.439 "bdev_nvme_stop_discovery", 00:05:06.439 "bdev_nvme_start_discovery", 00:05:06.439 "bdev_nvme_get_controller_health_info", 00:05:06.439 "bdev_nvme_disable_controller", 00:05:06.439 "bdev_nvme_enable_controller", 00:05:06.439 "bdev_nvme_reset_controller", 00:05:06.439 "bdev_nvme_get_transport_statistics", 00:05:06.439 "bdev_nvme_apply_firmware", 00:05:06.439 "bdev_nvme_detach_controller", 00:05:06.439 "bdev_nvme_get_controllers", 00:05:06.439 "bdev_nvme_attach_controller", 00:05:06.439 "bdev_nvme_set_hotplug", 00:05:06.439 "bdev_nvme_set_options", 00:05:06.439 "bdev_passthru_delete", 00:05:06.439 "bdev_passthru_create", 00:05:06.439 "bdev_lvol_set_parent_bdev", 00:05:06.439 "bdev_lvol_set_parent", 00:05:06.439 "bdev_lvol_check_shallow_copy", 00:05:06.439 "bdev_lvol_start_shallow_copy", 00:05:06.439 "bdev_lvol_grow_lvstore", 00:05:06.439 "bdev_lvol_get_lvols", 00:05:06.439 "bdev_lvol_get_lvstores", 00:05:06.439 "bdev_lvol_delete", 00:05:06.439 "bdev_lvol_set_read_only", 00:05:06.439 "bdev_lvol_resize", 00:05:06.439 "bdev_lvol_decouple_parent", 00:05:06.439 "bdev_lvol_inflate", 00:05:06.439 "bdev_lvol_rename", 00:05:06.439 "bdev_lvol_clone_bdev", 00:05:06.439 "bdev_lvol_clone", 00:05:06.439 "bdev_lvol_snapshot", 00:05:06.439 "bdev_lvol_create", 00:05:06.439 "bdev_lvol_delete_lvstore", 00:05:06.439 "bdev_lvol_rename_lvstore", 
00:05:06.439 "bdev_lvol_create_lvstore", 00:05:06.439 "bdev_raid_set_options", 00:05:06.439 "bdev_raid_remove_base_bdev", 00:05:06.439 "bdev_raid_add_base_bdev", 00:05:06.439 "bdev_raid_delete", 00:05:06.439 "bdev_raid_create", 00:05:06.439 "bdev_raid_get_bdevs", 00:05:06.439 "bdev_error_inject_error", 00:05:06.439 "bdev_error_delete", 00:05:06.439 "bdev_error_create", 00:05:06.439 "bdev_split_delete", 00:05:06.439 "bdev_split_create", 00:05:06.439 "bdev_delay_delete", 00:05:06.439 "bdev_delay_create", 00:05:06.439 "bdev_delay_update_latency", 00:05:06.439 "bdev_zone_block_delete", 00:05:06.439 "bdev_zone_block_create", 00:05:06.439 "blobfs_create", 00:05:06.439 "blobfs_detect", 00:05:06.439 "blobfs_set_cache_size", 00:05:06.439 "bdev_aio_delete", 00:05:06.439 "bdev_aio_rescan", 00:05:06.439 "bdev_aio_create", 00:05:06.439 "bdev_ftl_set_property", 00:05:06.439 "bdev_ftl_get_properties", 00:05:06.439 "bdev_ftl_get_stats", 00:05:06.439 "bdev_ftl_unmap", 00:05:06.439 "bdev_ftl_unload", 00:05:06.439 "bdev_ftl_delete", 00:05:06.439 "bdev_ftl_load", 00:05:06.439 "bdev_ftl_create", 00:05:06.439 "bdev_virtio_attach_controller", 00:05:06.439 "bdev_virtio_scsi_get_devices", 00:05:06.439 "bdev_virtio_detach_controller", 00:05:06.439 "bdev_virtio_blk_set_hotplug", 00:05:06.439 "bdev_iscsi_delete", 00:05:06.439 "bdev_iscsi_create", 00:05:06.439 "bdev_iscsi_set_options", 00:05:06.439 "accel_error_inject_error", 00:05:06.439 "ioat_scan_accel_module", 00:05:06.439 "dsa_scan_accel_module", 00:05:06.439 "iaa_scan_accel_module", 00:05:06.440 "vfu_virtio_create_fs_endpoint", 00:05:06.440 "vfu_virtio_create_scsi_endpoint", 00:05:06.440 "vfu_virtio_scsi_remove_target", 00:05:06.440 "vfu_virtio_scsi_add_target", 00:05:06.440 "vfu_virtio_create_blk_endpoint", 00:05:06.440 "vfu_virtio_delete_endpoint", 00:05:06.440 "keyring_file_remove_key", 00:05:06.440 "keyring_file_add_key", 00:05:06.440 "keyring_linux_set_options", 00:05:06.440 "fsdev_aio_delete", 00:05:06.440 "fsdev_aio_create", 00:05:06.440 "iscsi_get_histogram", 00:05:06.440 "iscsi_enable_histogram", 00:05:06.440 "iscsi_set_options", 00:05:06.440 "iscsi_get_auth_groups", 00:05:06.440 "iscsi_auth_group_remove_secret", 00:05:06.440 "iscsi_auth_group_add_secret", 00:05:06.440 "iscsi_delete_auth_group", 00:05:06.440 "iscsi_create_auth_group", 00:05:06.440 "iscsi_set_discovery_auth", 00:05:06.440 "iscsi_get_options", 00:05:06.440 "iscsi_target_node_request_logout", 00:05:06.440 "iscsi_target_node_set_redirect", 00:05:06.440 "iscsi_target_node_set_auth", 00:05:06.440 "iscsi_target_node_add_lun", 00:05:06.440 "iscsi_get_stats", 00:05:06.440 "iscsi_get_connections", 00:05:06.440 "iscsi_portal_group_set_auth", 00:05:06.440 "iscsi_start_portal_group", 00:05:06.440 "iscsi_delete_portal_group", 00:05:06.440 "iscsi_create_portal_group", 00:05:06.440 "iscsi_get_portal_groups", 00:05:06.440 "iscsi_delete_target_node", 00:05:06.440 "iscsi_target_node_remove_pg_ig_maps", 00:05:06.440 "iscsi_target_node_add_pg_ig_maps", 00:05:06.440 "iscsi_create_target_node", 00:05:06.440 "iscsi_get_target_nodes", 00:05:06.440 "iscsi_delete_initiator_group", 00:05:06.440 "iscsi_initiator_group_remove_initiators", 00:05:06.440 "iscsi_initiator_group_add_initiators", 00:05:06.440 "iscsi_create_initiator_group", 00:05:06.440 "iscsi_get_initiator_groups", 00:05:06.440 "nvmf_set_crdt", 00:05:06.440 "nvmf_set_config", 00:05:06.440 "nvmf_set_max_subsystems", 00:05:06.440 "nvmf_stop_mdns_prr", 00:05:06.440 "nvmf_publish_mdns_prr", 00:05:06.440 "nvmf_subsystem_get_listeners", 00:05:06.440 
"nvmf_subsystem_get_qpairs", 00:05:06.440 "nvmf_subsystem_get_controllers", 00:05:06.440 "nvmf_get_stats", 00:05:06.440 "nvmf_get_transports", 00:05:06.440 "nvmf_create_transport", 00:05:06.440 "nvmf_get_targets", 00:05:06.440 "nvmf_delete_target", 00:05:06.440 "nvmf_create_target", 00:05:06.440 "nvmf_subsystem_allow_any_host", 00:05:06.440 "nvmf_subsystem_set_keys", 00:05:06.440 "nvmf_subsystem_remove_host", 00:05:06.440 "nvmf_subsystem_add_host", 00:05:06.440 "nvmf_ns_remove_host", 00:05:06.440 "nvmf_ns_add_host", 00:05:06.440 "nvmf_subsystem_remove_ns", 00:05:06.440 "nvmf_subsystem_set_ns_ana_group", 00:05:06.440 "nvmf_subsystem_add_ns", 00:05:06.440 "nvmf_subsystem_listener_set_ana_state", 00:05:06.440 "nvmf_discovery_get_referrals", 00:05:06.440 "nvmf_discovery_remove_referral", 00:05:06.440 "nvmf_discovery_add_referral", 00:05:06.440 "nvmf_subsystem_remove_listener", 00:05:06.440 "nvmf_subsystem_add_listener", 00:05:06.440 "nvmf_delete_subsystem", 00:05:06.440 "nvmf_create_subsystem", 00:05:06.440 "nvmf_get_subsystems", 00:05:06.440 "env_dpdk_get_mem_stats", 00:05:06.440 "nbd_get_disks", 00:05:06.440 "nbd_stop_disk", 00:05:06.440 "nbd_start_disk", 00:05:06.440 "ublk_recover_disk", 00:05:06.440 "ublk_get_disks", 00:05:06.440 "ublk_stop_disk", 00:05:06.440 "ublk_start_disk", 00:05:06.440 "ublk_destroy_target", 00:05:06.440 "ublk_create_target", 00:05:06.440 "virtio_blk_create_transport", 00:05:06.440 "virtio_blk_get_transports", 00:05:06.440 "vhost_controller_set_coalescing", 00:05:06.440 "vhost_get_controllers", 00:05:06.440 "vhost_delete_controller", 00:05:06.440 "vhost_create_blk_controller", 00:05:06.440 "vhost_scsi_controller_remove_target", 00:05:06.440 "vhost_scsi_controller_add_target", 00:05:06.440 "vhost_start_scsi_controller", 00:05:06.440 "vhost_create_scsi_controller", 00:05:06.440 "thread_set_cpumask", 00:05:06.440 "scheduler_set_options", 00:05:06.440 "framework_get_governor", 00:05:06.440 "framework_get_scheduler", 00:05:06.440 "framework_set_scheduler", 00:05:06.440 "framework_get_reactors", 00:05:06.440 "thread_get_io_channels", 00:05:06.440 "thread_get_pollers", 00:05:06.440 "thread_get_stats", 00:05:06.440 "framework_monitor_context_switch", 00:05:06.440 "spdk_kill_instance", 00:05:06.440 "log_enable_timestamps", 00:05:06.440 "log_get_flags", 00:05:06.440 "log_clear_flag", 00:05:06.440 "log_set_flag", 00:05:06.440 "log_get_level", 00:05:06.440 "log_set_level", 00:05:06.440 "log_get_print_level", 00:05:06.440 "log_set_print_level", 00:05:06.440 "framework_enable_cpumask_locks", 00:05:06.440 "framework_disable_cpumask_locks", 00:05:06.440 "framework_wait_init", 00:05:06.440 "framework_start_init", 00:05:06.440 "scsi_get_devices", 00:05:06.440 "bdev_get_histogram", 00:05:06.440 "bdev_enable_histogram", 00:05:06.440 "bdev_set_qos_limit", 00:05:06.440 "bdev_set_qd_sampling_period", 00:05:06.440 "bdev_get_bdevs", 00:05:06.440 "bdev_reset_iostat", 00:05:06.440 "bdev_get_iostat", 00:05:06.440 "bdev_examine", 00:05:06.440 "bdev_wait_for_examine", 00:05:06.440 "bdev_set_options", 00:05:06.440 "accel_get_stats", 00:05:06.440 "accel_set_options", 00:05:06.440 "accel_set_driver", 00:05:06.440 "accel_crypto_key_destroy", 00:05:06.440 "accel_crypto_keys_get", 00:05:06.440 "accel_crypto_key_create", 00:05:06.440 "accel_assign_opc", 00:05:06.440 "accel_get_module_info", 00:05:06.440 "accel_get_opc_assignments", 00:05:06.440 "vmd_rescan", 00:05:06.440 "vmd_remove_device", 00:05:06.440 "vmd_enable", 00:05:06.440 "sock_get_default_impl", 00:05:06.440 "sock_set_default_impl", 
00:05:06.440 "sock_impl_set_options", 00:05:06.440 "sock_impl_get_options", 00:05:06.440 "iobuf_get_stats", 00:05:06.440 "iobuf_set_options", 00:05:06.440 "keyring_get_keys", 00:05:06.440 "vfu_tgt_set_base_path", 00:05:06.440 "framework_get_pci_devices", 00:05:06.440 "framework_get_config", 00:05:06.440 "framework_get_subsystems", 00:05:06.440 "fsdev_set_opts", 00:05:06.440 "fsdev_get_opts", 00:05:06.440 "trace_get_info", 00:05:06.440 "trace_get_tpoint_group_mask", 00:05:06.440 "trace_disable_tpoint_group", 00:05:06.440 "trace_enable_tpoint_group", 00:05:06.440 "trace_clear_tpoint_mask", 00:05:06.440 "trace_set_tpoint_mask", 00:05:06.440 "notify_get_notifications", 00:05:06.440 "notify_get_types", 00:05:06.440 "spdk_get_version", 00:05:06.440 "rpc_get_methods" 00:05:06.440 ] 00:05:06.440 05:21:06 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:06.440 05:21:06 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:06.440 05:21:06 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:06.440 05:21:06 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:06.440 05:21:06 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 110761 00:05:06.440 05:21:06 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 110761 ']' 00:05:06.440 05:21:06 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 110761 00:05:06.440 05:21:06 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:05:06.440 05:21:06 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:06.440 05:21:06 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 110761 00:05:06.700 05:21:06 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:06.700 05:21:06 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:06.700 05:21:06 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 110761' 00:05:06.700 killing process with pid 110761 00:05:06.700 05:21:06 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 110761 00:05:06.700 05:21:06 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 110761 00:05:06.960 00:05:06.960 real 0m1.105s 00:05:06.960 user 0m1.853s 00:05:06.960 sys 0m0.462s 00:05:06.960 05:21:06 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:06.960 05:21:06 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:06.960 ************************************ 00:05:06.960 END TEST spdkcli_tcp 00:05:06.960 ************************************ 00:05:06.960 05:21:06 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:06.960 05:21:06 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:06.961 05:21:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:06.961 05:21:06 -- common/autotest_common.sh@10 -- # set +x 00:05:06.961 ************************************ 00:05:06.961 START TEST dpdk_mem_utility 00:05:06.961 ************************************ 00:05:06.961 05:21:06 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:06.961 * Looking for test storage... 
00:05:06.961 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:06.961 05:21:06 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:06.961 05:21:06 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:05:06.961 05:21:06 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:07.221 05:21:06 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:07.221 05:21:06 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:07.221 05:21:06 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:07.221 05:21:06 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:07.221 05:21:06 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:07.221 05:21:06 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:07.221 05:21:06 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:07.221 05:21:06 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:07.221 05:21:06 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:07.221 05:21:06 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:07.221 05:21:06 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:07.221 05:21:06 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:07.221 05:21:06 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:07.221 05:21:06 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:07.221 05:21:06 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:07.221 05:21:06 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:07.221 05:21:06 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:07.221 05:21:06 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:07.221 05:21:06 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:07.221 05:21:06 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:07.221 05:21:06 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:07.221 05:21:06 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:07.221 05:21:06 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:07.221 05:21:06 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:07.221 05:21:06 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:07.221 05:21:06 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:07.221 05:21:06 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:07.221 05:21:06 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:07.221 05:21:06 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:07.221 05:21:06 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:07.221 05:21:06 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:07.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.221 --rc genhtml_branch_coverage=1 00:05:07.221 --rc genhtml_function_coverage=1 00:05:07.221 --rc genhtml_legend=1 00:05:07.221 --rc geninfo_all_blocks=1 00:05:07.221 --rc geninfo_unexecuted_blocks=1 00:05:07.221 00:05:07.221 ' 00:05:07.221 05:21:06 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:07.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.221 --rc 
genhtml_branch_coverage=1 00:05:07.221 --rc genhtml_function_coverage=1 00:05:07.221 --rc genhtml_legend=1 00:05:07.221 --rc geninfo_all_blocks=1 00:05:07.221 --rc geninfo_unexecuted_blocks=1 00:05:07.221 00:05:07.221 ' 00:05:07.221 05:21:06 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:07.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.221 --rc genhtml_branch_coverage=1 00:05:07.221 --rc genhtml_function_coverage=1 00:05:07.221 --rc genhtml_legend=1 00:05:07.221 --rc geninfo_all_blocks=1 00:05:07.221 --rc geninfo_unexecuted_blocks=1 00:05:07.221 00:05:07.221 ' 00:05:07.221 05:21:06 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:07.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.221 --rc genhtml_branch_coverage=1 00:05:07.221 --rc genhtml_function_coverage=1 00:05:07.221 --rc genhtml_legend=1 00:05:07.221 --rc geninfo_all_blocks=1 00:05:07.221 --rc geninfo_unexecuted_blocks=1 00:05:07.221 00:05:07.221 ' 00:05:07.221 05:21:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:07.221 05:21:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=111057 00:05:07.221 05:21:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 111057 00:05:07.221 05:21:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:07.221 05:21:06 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 111057 ']' 00:05:07.221 05:21:06 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:07.221 05:21:06 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:07.221 05:21:06 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:07.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:07.221 05:21:06 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:07.221 05:21:06 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:07.221 [2024-12-13 05:21:07.046417] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:05:07.221 [2024-12-13 05:21:07.046467] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111057 ] 00:05:07.221 [2024-12-13 05:21:07.119810] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.221 [2024-12-13 05:21:07.142815] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.482 05:21:07 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:07.482 05:21:07 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:05:07.482 05:21:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:07.482 05:21:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:07.482 05:21:07 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:07.482 05:21:07 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:07.482 { 00:05:07.482 "filename": "/tmp/spdk_mem_dump.txt" 00:05:07.482 } 00:05:07.482 05:21:07 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:07.482 05:21:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:07.482 DPDK memory size 818.000000 MiB in 1 heap(s) 00:05:07.482 1 heaps totaling size 818.000000 MiB 00:05:07.482 size: 818.000000 MiB heap id: 0 00:05:07.482 end heaps---------- 00:05:07.482 9 mempools totaling size 603.782043 MiB 00:05:07.482 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:07.482 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:07.482 size: 100.555481 MiB name: bdev_io_111057 00:05:07.482 size: 50.003479 MiB name: msgpool_111057 00:05:07.482 size: 36.509338 MiB name: fsdev_io_111057 00:05:07.482 size: 21.763794 MiB name: PDU_Pool 00:05:07.482 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:07.482 size: 4.133484 MiB name: evtpool_111057 00:05:07.482 size: 0.026123 MiB name: Session_Pool 00:05:07.482 end mempools------- 00:05:07.482 6 memzones totaling size 4.142822 MiB 00:05:07.482 size: 1.000366 MiB name: RG_ring_0_111057 00:05:07.482 size: 1.000366 MiB name: RG_ring_1_111057 00:05:07.482 size: 1.000366 MiB name: RG_ring_4_111057 00:05:07.482 size: 1.000366 MiB name: RG_ring_5_111057 00:05:07.482 size: 0.125366 MiB name: RG_ring_2_111057 00:05:07.482 size: 0.015991 MiB name: RG_ring_3_111057 00:05:07.482 end memzones------- 00:05:07.482 05:21:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:07.482 heap id: 0 total size: 818.000000 MiB number of busy elements: 44 number of free elements: 15 00:05:07.482 list of free elements. 
size: 10.852478 MiB 00:05:07.482 element at address: 0x200019200000 with size: 0.999878 MiB 00:05:07.482 element at address: 0x200019400000 with size: 0.999878 MiB 00:05:07.482 element at address: 0x200000400000 with size: 0.998535 MiB 00:05:07.482 element at address: 0x200032000000 with size: 0.994446 MiB 00:05:07.482 element at address: 0x200006400000 with size: 0.959839 MiB 00:05:07.482 element at address: 0x200012c00000 with size: 0.944275 MiB 00:05:07.482 element at address: 0x200019600000 with size: 0.936584 MiB 00:05:07.482 element at address: 0x200000200000 with size: 0.717346 MiB 00:05:07.482 element at address: 0x20001ae00000 with size: 0.582886 MiB 00:05:07.482 element at address: 0x200000c00000 with size: 0.495422 MiB 00:05:07.482 element at address: 0x20000a600000 with size: 0.490723 MiB 00:05:07.482 element at address: 0x200019800000 with size: 0.485657 MiB 00:05:07.482 element at address: 0x200003e00000 with size: 0.481934 MiB 00:05:07.482 element at address: 0x200028200000 with size: 0.410034 MiB 00:05:07.482 element at address: 0x200000800000 with size: 0.355042 MiB 00:05:07.482 list of standard malloc elements. size: 199.218628 MiB 00:05:07.482 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:05:07.482 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:05:07.482 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:07.482 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:05:07.482 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:05:07.482 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:07.482 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:05:07.482 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:07.482 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:05:07.482 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:07.482 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:07.482 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:05:07.482 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:05:07.482 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:05:07.482 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:05:07.482 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:05:07.482 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:05:07.482 element at address: 0x20000085b040 with size: 0.000183 MiB 00:05:07.482 element at address: 0x20000085f300 with size: 0.000183 MiB 00:05:07.482 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:05:07.482 element at address: 0x20000087f680 with size: 0.000183 MiB 00:05:07.482 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:05:07.482 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:05:07.482 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:05:07.483 element at address: 0x200000cff000 with size: 0.000183 MiB 00:05:07.483 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:05:07.483 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:05:07.483 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:05:07.483 element at address: 0x200003efb980 with size: 0.000183 MiB 00:05:07.483 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:05:07.483 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:05:07.483 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:05:07.483 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 
00:05:07.483 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:05:07.483 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:05:07.483 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:05:07.483 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:05:07.483 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:05:07.483 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:05:07.483 element at address: 0x200028268f80 with size: 0.000183 MiB 00:05:07.483 element at address: 0x200028269040 with size: 0.000183 MiB 00:05:07.483 element at address: 0x20002826fc40 with size: 0.000183 MiB 00:05:07.483 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:05:07.483 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:05:07.483 list of memzone associated elements. size: 607.928894 MiB 00:05:07.483 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:05:07.483 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:07.483 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:05:07.483 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:07.483 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:05:07.483 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_111057_0 00:05:07.483 element at address: 0x200000dff380 with size: 48.003052 MiB 00:05:07.483 associated memzone info: size: 48.002930 MiB name: MP_msgpool_111057_0 00:05:07.483 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:05:07.483 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_111057_0 00:05:07.483 element at address: 0x2000199be940 with size: 20.255554 MiB 00:05:07.483 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:07.483 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:05:07.483 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:07.483 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:05:07.483 associated memzone info: size: 3.000122 MiB name: MP_evtpool_111057_0 00:05:07.483 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:05:07.483 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_111057 00:05:07.483 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:07.483 associated memzone info: size: 1.007996 MiB name: MP_evtpool_111057 00:05:07.483 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:05:07.483 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:07.483 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:05:07.483 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:07.483 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:05:07.483 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:07.483 element at address: 0x200003efba40 with size: 1.008118 MiB 00:05:07.483 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:07.483 element at address: 0x200000cff180 with size: 1.000488 MiB 00:05:07.483 associated memzone info: size: 1.000366 MiB name: RG_ring_0_111057 00:05:07.483 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:05:07.483 associated memzone info: size: 1.000366 MiB name: RG_ring_1_111057 00:05:07.483 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:05:07.483 associated memzone info: size: 1.000366 MiB name: RG_ring_4_111057 00:05:07.483 element at address: 
0x2000320fe940 with size: 1.000488 MiB 00:05:07.483 associated memzone info: size: 1.000366 MiB name: RG_ring_5_111057 00:05:07.483 element at address: 0x20000087f740 with size: 0.500488 MiB 00:05:07.483 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_111057 00:05:07.483 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:05:07.483 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_111057 00:05:07.483 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:05:07.483 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:07.483 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:05:07.483 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:07.483 element at address: 0x20001987c540 with size: 0.250488 MiB 00:05:07.483 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:07.483 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:05:07.483 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_111057 00:05:07.483 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:05:07.483 associated memzone info: size: 0.125366 MiB name: RG_ring_2_111057 00:05:07.483 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:05:07.483 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:07.483 element at address: 0x200028269100 with size: 0.023743 MiB 00:05:07.483 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:07.483 element at address: 0x20000085b100 with size: 0.016113 MiB 00:05:07.483 associated memzone info: size: 0.015991 MiB name: RG_ring_3_111057 00:05:07.483 element at address: 0x20002826f240 with size: 0.002441 MiB 00:05:07.483 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:07.483 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:05:07.483 associated memzone info: size: 0.000183 MiB name: MP_msgpool_111057 00:05:07.483 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:05:07.483 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_111057 00:05:07.483 element at address: 0x20000085af00 with size: 0.000305 MiB 00:05:07.483 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_111057 00:05:07.483 element at address: 0x20002826fd00 with size: 0.000305 MiB 00:05:07.483 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:07.483 05:21:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:07.483 05:21:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 111057 00:05:07.483 05:21:07 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 111057 ']' 00:05:07.483 05:21:07 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 111057 00:05:07.483 05:21:07 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:05:07.483 05:21:07 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:07.483 05:21:07 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 111057 00:05:07.743 05:21:07 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:07.743 05:21:07 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:07.743 05:21:07 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 111057' 00:05:07.743 killing process with pid 111057 00:05:07.743 05:21:07 dpdk_mem_utility -- 
common/autotest_common.sh@973 -- # kill 111057 00:05:07.743 05:21:07 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 111057 00:05:08.002 00:05:08.002 real 0m0.981s 00:05:08.002 user 0m0.919s 00:05:08.002 sys 0m0.408s 00:05:08.002 05:21:07 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:08.002 05:21:07 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:08.002 ************************************ 00:05:08.002 END TEST dpdk_mem_utility 00:05:08.002 ************************************ 00:05:08.002 05:21:07 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:08.002 05:21:07 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:08.002 05:21:07 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:08.002 05:21:07 -- common/autotest_common.sh@10 -- # set +x 00:05:08.002 ************************************ 00:05:08.002 START TEST event 00:05:08.002 ************************************ 00:05:08.002 05:21:07 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:08.002 * Looking for test storage... 00:05:08.002 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:08.002 05:21:07 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:08.002 05:21:07 event -- common/autotest_common.sh@1711 -- # lcov --version 00:05:08.002 05:21:07 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:08.262 05:21:08 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:08.262 05:21:08 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:08.262 05:21:08 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:08.262 05:21:08 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:08.262 05:21:08 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:08.262 05:21:08 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:08.262 05:21:08 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:08.262 05:21:08 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:08.262 05:21:08 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:08.262 05:21:08 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:08.262 05:21:08 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:08.262 05:21:08 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:08.262 05:21:08 event -- scripts/common.sh@344 -- # case "$op" in 00:05:08.262 05:21:08 event -- scripts/common.sh@345 -- # : 1 00:05:08.262 05:21:08 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:08.262 05:21:08 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:08.262 05:21:08 event -- scripts/common.sh@365 -- # decimal 1 00:05:08.262 05:21:08 event -- scripts/common.sh@353 -- # local d=1 00:05:08.262 05:21:08 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:08.262 05:21:08 event -- scripts/common.sh@355 -- # echo 1 00:05:08.262 05:21:08 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:08.262 05:21:08 event -- scripts/common.sh@366 -- # decimal 2 00:05:08.262 05:21:08 event -- scripts/common.sh@353 -- # local d=2 00:05:08.262 05:21:08 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:08.262 05:21:08 event -- scripts/common.sh@355 -- # echo 2 00:05:08.262 05:21:08 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:08.262 05:21:08 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:08.262 05:21:08 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:08.262 05:21:08 event -- scripts/common.sh@368 -- # return 0 00:05:08.262 05:21:08 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:08.262 05:21:08 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:08.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.262 --rc genhtml_branch_coverage=1 00:05:08.262 --rc genhtml_function_coverage=1 00:05:08.262 --rc genhtml_legend=1 00:05:08.262 --rc geninfo_all_blocks=1 00:05:08.262 --rc geninfo_unexecuted_blocks=1 00:05:08.262 00:05:08.262 ' 00:05:08.262 05:21:08 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:08.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.262 --rc genhtml_branch_coverage=1 00:05:08.262 --rc genhtml_function_coverage=1 00:05:08.262 --rc genhtml_legend=1 00:05:08.262 --rc geninfo_all_blocks=1 00:05:08.262 --rc geninfo_unexecuted_blocks=1 00:05:08.262 00:05:08.262 ' 00:05:08.262 05:21:08 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:08.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.262 --rc genhtml_branch_coverage=1 00:05:08.262 --rc genhtml_function_coverage=1 00:05:08.262 --rc genhtml_legend=1 00:05:08.262 --rc geninfo_all_blocks=1 00:05:08.262 --rc geninfo_unexecuted_blocks=1 00:05:08.262 00:05:08.262 ' 00:05:08.262 05:21:08 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:08.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.262 --rc genhtml_branch_coverage=1 00:05:08.262 --rc genhtml_function_coverage=1 00:05:08.262 --rc genhtml_legend=1 00:05:08.262 --rc geninfo_all_blocks=1 00:05:08.262 --rc geninfo_unexecuted_blocks=1 00:05:08.262 00:05:08.262 ' 00:05:08.262 05:21:08 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:08.262 05:21:08 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:08.262 05:21:08 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:08.262 05:21:08 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:05:08.262 05:21:08 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:08.262 05:21:08 event -- common/autotest_common.sh@10 -- # set +x 00:05:08.262 ************************************ 00:05:08.262 START TEST event_perf 00:05:08.262 ************************************ 00:05:08.262 05:21:08 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF 
-t 1 00:05:08.262 Running I/O for 1 seconds...[2024-12-13 05:21:08.102649] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:05:08.262 [2024-12-13 05:21:08.102717] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111271 ] 00:05:08.262 [2024-12-13 05:21:08.182003] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:08.262 [2024-12-13 05:21:08.208638] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:08.262 [2024-12-13 05:21:08.208745] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:05:08.262 [2024-12-13 05:21:08.208784] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.262 [2024-12-13 05:21:08.208785] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:05:09.641 Running I/O for 1 seconds... 00:05:09.641 lcore 0: 205192 00:05:09.641 lcore 1: 205192 00:05:09.641 lcore 2: 205192 00:05:09.641 lcore 3: 205192 00:05:09.641 done. 00:05:09.641 00:05:09.641 real 0m1.164s 00:05:09.641 user 0m4.087s 00:05:09.641 sys 0m0.073s 00:05:09.641 05:21:09 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:09.641 05:21:09 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:09.641 ************************************ 00:05:09.641 END TEST event_perf 00:05:09.641 ************************************ 00:05:09.641 05:21:09 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:09.641 05:21:09 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:09.641 05:21:09 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:09.641 05:21:09 event -- common/autotest_common.sh@10 -- # set +x 00:05:09.641 ************************************ 00:05:09.641 START TEST event_reactor 00:05:09.641 ************************************ 00:05:09.641 05:21:09 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:09.641 [2024-12-13 05:21:09.332279] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:05:09.642 [2024-12-13 05:21:09.332349] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111685 ] 00:05:09.642 [2024-12-13 05:21:09.413131] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:09.642 [2024-12-13 05:21:09.434667] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.578 test_start 00:05:10.578 oneshot 00:05:10.578 tick 100 00:05:10.578 tick 100 00:05:10.578 tick 250 00:05:10.578 tick 100 00:05:10.578 tick 100 00:05:10.578 tick 100 00:05:10.578 tick 250 00:05:10.578 tick 500 00:05:10.578 tick 100 00:05:10.578 tick 100 00:05:10.578 tick 250 00:05:10.578 tick 100 00:05:10.578 tick 100 00:05:10.578 test_end 00:05:10.578 00:05:10.578 real 0m1.157s 00:05:10.578 user 0m1.072s 00:05:10.578 sys 0m0.079s 00:05:10.578 05:21:10 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:10.578 05:21:10 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:10.578 ************************************ 00:05:10.578 END TEST event_reactor 00:05:10.578 ************************************ 00:05:10.578 05:21:10 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:10.578 05:21:10 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:10.578 05:21:10 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:10.578 05:21:10 event -- common/autotest_common.sh@10 -- # set +x 00:05:10.578 ************************************ 00:05:10.578 START TEST event_reactor_perf 00:05:10.578 ************************************ 00:05:10.578 05:21:10 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:10.578 [2024-12-13 05:21:10.562668] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:05:10.578 [2024-12-13 05:21:10.562735] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112009 ] 00:05:10.837 [2024-12-13 05:21:10.645236] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:10.837 [2024-12-13 05:21:10.667185] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.774 test_start 00:05:11.774 test_end 00:05:11.774 Performance: 513020 events per second 00:05:11.774 00:05:11.774 real 0m1.158s 00:05:11.774 user 0m1.069s 00:05:11.774 sys 0m0.085s 00:05:11.774 05:21:11 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:11.775 05:21:11 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:11.775 ************************************ 00:05:11.775 END TEST event_reactor_perf 00:05:11.775 ************************************ 00:05:11.775 05:21:11 event -- event/event.sh@49 -- # uname -s 00:05:11.775 05:21:11 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:11.775 05:21:11 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:11.775 05:21:11 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:11.775 05:21:11 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:11.775 05:21:11 event -- common/autotest_common.sh@10 -- # set +x 00:05:11.775 ************************************ 00:05:11.775 START TEST event_scheduler 00:05:11.775 ************************************ 00:05:11.775 05:21:11 event.event_scheduler -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:12.034 * Looking for test storage... 
00:05:12.034 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:12.034 05:21:11 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:12.034 05:21:11 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:05:12.034 05:21:11 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:12.034 05:21:11 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:12.034 05:21:11 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:12.034 05:21:11 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:12.034 05:21:11 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:12.034 05:21:11 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:12.034 05:21:11 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:12.034 05:21:11 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:12.034 05:21:11 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:12.034 05:21:11 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:12.034 05:21:11 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:12.034 05:21:11 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:12.034 05:21:11 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:12.034 05:21:11 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:12.034 05:21:11 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:12.034 05:21:11 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:12.034 05:21:11 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:12.034 05:21:11 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:12.034 05:21:11 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:12.034 05:21:11 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:12.034 05:21:11 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:12.034 05:21:11 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:12.034 05:21:11 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:12.034 05:21:11 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:12.034 05:21:11 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:12.034 05:21:11 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:12.034 05:21:11 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:12.034 05:21:11 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:12.034 05:21:11 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:12.034 05:21:11 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:12.034 05:21:11 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:12.034 05:21:11 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:12.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.034 --rc genhtml_branch_coverage=1 00:05:12.034 --rc genhtml_function_coverage=1 00:05:12.034 --rc genhtml_legend=1 00:05:12.034 --rc geninfo_all_blocks=1 00:05:12.034 --rc geninfo_unexecuted_blocks=1 00:05:12.034 00:05:12.034 ' 00:05:12.034 05:21:11 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:12.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.034 --rc genhtml_branch_coverage=1 00:05:12.034 --rc genhtml_function_coverage=1 00:05:12.034 --rc genhtml_legend=1 00:05:12.034 --rc geninfo_all_blocks=1 00:05:12.034 --rc geninfo_unexecuted_blocks=1 00:05:12.034 00:05:12.034 ' 00:05:12.034 05:21:11 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:12.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.034 --rc genhtml_branch_coverage=1 00:05:12.034 --rc genhtml_function_coverage=1 00:05:12.034 --rc genhtml_legend=1 00:05:12.034 --rc geninfo_all_blocks=1 00:05:12.034 --rc geninfo_unexecuted_blocks=1 00:05:12.034 00:05:12.034 ' 00:05:12.034 05:21:11 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:12.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.034 --rc genhtml_branch_coverage=1 00:05:12.034 --rc genhtml_function_coverage=1 00:05:12.034 --rc genhtml_legend=1 00:05:12.035 --rc geninfo_all_blocks=1 00:05:12.035 --rc geninfo_unexecuted_blocks=1 00:05:12.035 00:05:12.035 ' 00:05:12.035 05:21:11 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:12.035 05:21:11 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=112307 00:05:12.035 05:21:11 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:12.035 05:21:11 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:12.035 05:21:11 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 112307 
00:05:12.035 05:21:11 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 112307 ']' 00:05:12.035 05:21:11 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:12.035 05:21:11 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:12.035 05:21:11 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:12.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:12.035 05:21:11 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:12.035 05:21:11 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:12.035 [2024-12-13 05:21:11.996196] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:05:12.035 [2024-12-13 05:21:11.996246] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112307 ] 00:05:12.294 [2024-12-13 05:21:12.072126] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:12.294 [2024-12-13 05:21:12.097850] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.294 [2024-12-13 05:21:12.097955] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:12.294 [2024-12-13 05:21:12.098044] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:05:12.294 [2024-12-13 05:21:12.098043] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:05:12.294 05:21:12 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:12.294 05:21:12 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:05:12.294 05:21:12 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:12.294 05:21:12 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:12.294 05:21:12 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:12.294 [2024-12-13 05:21:12.178768] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:12.294 [2024-12-13 05:21:12.178785] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:12.294 [2024-12-13 05:21:12.178794] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:12.294 [2024-12-13 05:21:12.178799] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:12.294 [2024-12-13 05:21:12.178805] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:12.294 05:21:12 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:12.294 05:21:12 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:12.294 05:21:12 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:12.294 05:21:12 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:12.294 [2024-12-13 05:21:12.247680] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
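The startup above reduces to three RPCs once the xtrace noise is stripped away: the test app comes up paused, the dynamic scheduler is selected, and framework_start_init releases the reactors. A minimal sketch of the same flow against a plain spdk_tgt, assuming the repository-relative paths used throughout this run:

    # start the target paused so the scheduler can still be chosen
    ./build/bin/spdk_tgt -m 0xF --wait-for-rpc &

    # select the dynamic scheduler, then let initialization finish
    ./scripts/rpc.py framework_set_scheduler dynamic
    ./scripts/rpc.py framework_start_init

    # verify which scheduler ended up active
    ./scripts/rpc.py framework_get_scheduler

The *ERROR* from dpdk_governor.c is expected on this host: the 0xF core mask covers only part of an SMT sibling set, so the dynamic scheduler skips the DPDK governor and applies its default limits, which are the values the trace prints (load 20, core 80, busy 95).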
00:05:12.294 05:21:12 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:12.294 05:21:12 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:12.294 05:21:12 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:12.294 05:21:12 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:12.294 05:21:12 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:12.294 ************************************ 00:05:12.294 START TEST scheduler_create_thread 00:05:12.294 ************************************ 00:05:12.294 05:21:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:05:12.294 05:21:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:12.294 05:21:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:12.294 05:21:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:12.294 2 00:05:12.294 05:21:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:12.294 05:21:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:12.294 05:21:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:12.294 05:21:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:12.294 3 00:05:12.294 05:21:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:12.294 05:21:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:12.295 05:21:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:12.554 05:21:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:12.554 4 00:05:12.554 05:21:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:12.554 05:21:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:12.554 05:21:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:12.554 05:21:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:12.554 5 00:05:12.554 05:21:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:12.554 05:21:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:12.554 05:21:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:12.554 05:21:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:12.554 6 00:05:12.554 05:21:12 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:12.554 05:21:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:12.554 05:21:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:12.554 05:21:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:12.554 7 00:05:12.554 05:21:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:12.554 05:21:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:12.554 05:21:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:12.554 05:21:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:12.554 8 00:05:12.554 05:21:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:12.554 05:21:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:12.554 05:21:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:12.554 05:21:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:12.554 9 00:05:12.554 05:21:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:12.554 05:21:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:12.554 05:21:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:12.554 05:21:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:12.554 10 00:05:12.554 05:21:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:12.554 05:21:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:12.554 05:21:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:12.554 05:21:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:12.554 05:21:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:12.554 05:21:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:12.554 05:21:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:12.554 05:21:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:12.554 05:21:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:13.122 05:21:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:13.122 05:21:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:13.122 05:21:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:13.122 05:21:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:14.499 05:21:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:14.499 05:21:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:14.499 05:21:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:14.499 05:21:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:14.499 05:21:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:15.530 05:21:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:15.530 00:05:15.530 real 0m3.102s 00:05:15.530 user 0m0.023s 00:05:15.530 sys 0m0.007s 00:05:15.530 05:21:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:15.530 05:21:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:15.530 ************************************ 00:05:15.530 END TEST scheduler_create_thread 00:05:15.530 ************************************ 00:05:15.530 05:21:15 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:15.530 05:21:15 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 112307 00:05:15.530 05:21:15 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 112307 ']' 00:05:15.530 05:21:15 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 112307 00:05:15.530 05:21:15 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:15.530 05:21:15 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:15.530 05:21:15 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 112307 00:05:15.530 05:21:15 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:15.530 05:21:15 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:15.530 05:21:15 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 112307' 00:05:15.530 killing process with pid 112307 00:05:15.530 05:21:15 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 112307 00:05:15.530 05:21:15 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 112307 00:05:15.789 [2024-12-13 05:21:15.762783] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
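For reference, the thread lifecycle that scheduler_create_thread just exercised is driven entirely through the test app's RPC plugin rather than core SPDK RPCs. The calls below mirror the trace, assuming scheduler_plugin is on rpc.py's plugin path the way scheduler.sh arranges it; the numeric thread ids (11 and 12 above) are simply whatever each create call returned in this run:

    # create a thread pinned to core 0 reporting 100% busy; capture its id
    ID=$(./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create \
            -n active_pinned -m 0x1 -a 100)

    # rebalance it to 50% busy, then remove it
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active "$ID" 50
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete "$ID"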
00:05:16.049 00:05:16.049 real 0m4.173s 00:05:16.049 user 0m6.781s 00:05:16.049 sys 0m0.380s 00:05:16.049 05:21:15 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:16.049 05:21:15 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:16.049 ************************************ 00:05:16.049 END TEST event_scheduler 00:05:16.049 ************************************ 00:05:16.049 05:21:15 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:16.049 05:21:15 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:16.049 05:21:15 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:16.049 05:21:15 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:16.049 05:21:15 event -- common/autotest_common.sh@10 -- # set +x 00:05:16.049 ************************************ 00:05:16.049 START TEST app_repeat 00:05:16.049 ************************************ 00:05:16.049 05:21:16 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:16.049 05:21:16 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:16.049 05:21:16 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:16.049 05:21:16 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:16.049 05:21:16 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:16.049 05:21:16 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:16.049 05:21:16 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:16.049 05:21:16 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:16.049 05:21:16 event.app_repeat -- event/event.sh@19 -- # repeat_pid=113051 00:05:16.049 05:21:16 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:16.049 05:21:16 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:16.049 05:21:16 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 113051' 00:05:16.049 Process app_repeat pid: 113051 00:05:16.049 05:21:16 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:16.049 05:21:16 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:16.049 spdk_app_start Round 0 00:05:16.049 05:21:16 event.app_repeat -- event/event.sh@25 -- # waitforlisten 113051 /var/tmp/spdk-nbd.sock 00:05:16.049 05:21:16 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 113051 ']' 00:05:16.049 05:21:16 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:16.049 05:21:16 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:16.049 05:21:16 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:16.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:16.049 05:21:16 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:16.049 05:21:16 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:16.049 [2024-12-13 05:21:16.061788] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:05:16.049 [2024-12-13 05:21:16.061841] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113051 ] 00:05:16.308 [2024-12-13 05:21:16.136045] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:16.308 [2024-12-13 05:21:16.161376] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:16.308 [2024-12-13 05:21:16.161378] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.308 05:21:16 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:16.308 05:21:16 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:16.308 05:21:16 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:16.567 Malloc0 00:05:16.567 05:21:16 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:16.826 Malloc1 00:05:16.826 05:21:16 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:16.826 05:21:16 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:16.826 05:21:16 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:16.826 05:21:16 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:16.826 05:21:16 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:16.826 05:21:16 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:16.826 05:21:16 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:16.826 05:21:16 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:16.826 05:21:16 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:16.826 05:21:16 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:16.826 05:21:16 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:16.826 05:21:16 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:16.826 05:21:16 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:16.826 05:21:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:16.826 05:21:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:16.826 05:21:16 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:17.085 /dev/nbd0 00:05:17.085 05:21:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:17.085 05:21:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:17.085 05:21:16 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:17.085 05:21:16 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:17.085 05:21:16 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:17.085 05:21:16 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:17.085 05:21:16 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 
/proc/partitions 00:05:17.085 05:21:16 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:17.085 05:21:16 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:17.085 05:21:16 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:17.085 05:21:16 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:17.085 1+0 records in 00:05:17.085 1+0 records out 00:05:17.085 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000182527 s, 22.4 MB/s 00:05:17.085 05:21:16 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:17.085 05:21:16 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:17.085 05:21:16 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:17.085 05:21:16 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:17.085 05:21:16 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:17.085 05:21:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:17.085 05:21:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:17.085 05:21:16 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:17.343 /dev/nbd1 00:05:17.343 05:21:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:17.343 05:21:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:17.343 05:21:17 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:17.343 05:21:17 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:17.343 05:21:17 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:17.343 05:21:17 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:17.343 05:21:17 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:17.343 05:21:17 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:17.343 05:21:17 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:17.343 05:21:17 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:17.343 05:21:17 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:17.343 1+0 records in 00:05:17.343 1+0 records out 00:05:17.343 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000201872 s, 20.3 MB/s 00:05:17.343 05:21:17 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:17.343 05:21:17 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:17.343 05:21:17 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:17.343 05:21:17 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:17.343 05:21:17 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:17.344 05:21:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:17.344 05:21:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:17.344 
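Both nbd_start_disk calls above end in waitfornbd: poll /proc/partitions until the kernel exposes the device, then prove it works with one direct-I/O read. Condensed from the trace (the retry limit of 20 and the dd/stat/rm sequence are as shown; the poll interval and temp-file path are assumptions):

    waitfornbd() {
        local nbd_name=$1 i size
        for (( i = 1; i <= 20; i++ )); do
            grep -q -w "$nbd_name" /proc/partitions && break   # device visible yet?
            sleep 0.1                                          # assumed interval
        done
        for (( i = 1; i <= 20; i++ )); do
            # one 4 KiB direct read through the device as a smoke test
            dd if=/dev/$nbd_name of=/tmp/nbdtest bs=4096 count=1 iflag=direct || continue
            size=$(stat -c %s /tmp/nbdtest)
            rm -f /tmp/nbdtest
            [ "$size" != 0 ] && return 0     # read something back: device is usable
        done
        return 1
    }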
05:21:17 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:17.344 05:21:17 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:17.344 05:21:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:17.344 05:21:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:17.344 { 00:05:17.344 "nbd_device": "/dev/nbd0", 00:05:17.344 "bdev_name": "Malloc0" 00:05:17.344 }, 00:05:17.344 { 00:05:17.344 "nbd_device": "/dev/nbd1", 00:05:17.344 "bdev_name": "Malloc1" 00:05:17.344 } 00:05:17.344 ]' 00:05:17.602 05:21:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:17.602 { 00:05:17.602 "nbd_device": "/dev/nbd0", 00:05:17.602 "bdev_name": "Malloc0" 00:05:17.602 }, 00:05:17.602 { 00:05:17.602 "nbd_device": "/dev/nbd1", 00:05:17.602 "bdev_name": "Malloc1" 00:05:17.602 } 00:05:17.602 ]' 00:05:17.602 05:21:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:17.602 05:21:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:17.602 /dev/nbd1' 00:05:17.602 05:21:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:17.602 /dev/nbd1' 00:05:17.602 05:21:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:17.602 05:21:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:17.602 05:21:17 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:17.602 05:21:17 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:17.602 05:21:17 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:17.602 05:21:17 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:17.602 05:21:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:17.602 05:21:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:17.602 05:21:17 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:17.602 05:21:17 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:17.602 05:21:17 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:17.602 05:21:17 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:17.602 256+0 records in 00:05:17.602 256+0 records out 00:05:17.602 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0100563 s, 104 MB/s 00:05:17.602 05:21:17 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:17.602 05:21:17 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:17.602 256+0 records in 00:05:17.602 256+0 records out 00:05:17.602 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0140318 s, 74.7 MB/s 00:05:17.602 05:21:17 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:17.602 05:21:17 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:17.602 256+0 records in 00:05:17.602 256+0 records out 00:05:17.602 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0151404 s, 69.3 MB/s 00:05:17.602 05:21:17 
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:17.602 05:21:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:17.602 05:21:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:17.602 05:21:17 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:17.603 05:21:17 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:17.603 05:21:17 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:17.603 05:21:17 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:17.603 05:21:17 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:17.603 05:21:17 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:17.603 05:21:17 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:17.603 05:21:17 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:17.603 05:21:17 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:17.603 05:21:17 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:17.603 05:21:17 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:17.603 05:21:17 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:17.603 05:21:17 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:17.603 05:21:17 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:17.603 05:21:17 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:17.603 05:21:17 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:17.861 05:21:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:17.861 05:21:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:17.861 05:21:17 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:17.861 05:21:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:17.861 05:21:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:17.861 05:21:17 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:17.861 05:21:17 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:17.861 05:21:17 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:17.861 05:21:17 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:17.861 05:21:17 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:18.120 05:21:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:18.120 05:21:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:18.120 05:21:17 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:18.120 05:21:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:18.120 05:21:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i 
<= 20 )) 00:05:18.120 05:21:17 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:18.120 05:21:17 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:18.120 05:21:17 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:18.120 05:21:17 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:18.120 05:21:17 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:18.120 05:21:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:18.120 05:21:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:18.120 05:21:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:18.120 05:21:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:18.380 05:21:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:18.380 05:21:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:18.380 05:21:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:18.380 05:21:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:18.380 05:21:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:18.380 05:21:18 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:18.380 05:21:18 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:18.380 05:21:18 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:18.380 05:21:18 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:18.380 05:21:18 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:18.380 05:21:18 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:18.639 [2024-12-13 05:21:18.490598] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:18.639 [2024-12-13 05:21:18.510406] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:18.639 [2024-12-13 05:21:18.510409] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.639 [2024-12-13 05:21:18.550658] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:18.639 [2024-12-13 05:21:18.550695] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:21.930 05:21:21 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:21.930 05:21:21 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:21.930 spdk_app_start Round 1 00:05:21.930 05:21:21 event.app_repeat -- event/event.sh@25 -- # waitforlisten 113051 /var/tmp/spdk-nbd.sock 00:05:21.930 05:21:21 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 113051 ']' 00:05:21.930 05:21:21 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:21.930 05:21:21 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:21.930 05:21:21 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:21.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
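The write/verify pass traced in the round above is nbd_dd_data_verify: seed one 1 MiB random file, push it through every nbd device with direct I/O, then cmp it back byte for byte. Condensed to its essentials (SPDK_DIR stands in for the long workspace path in the log):

    tmp_file=$SPDK_DIR/test/event/nbdrandtest
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256             # 1 MiB of random data
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if="$tmp_file" of="$nbd" bs=4096 count=256 oflag=direct  # write phase
    done
    for nbd in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M "$tmp_file" "$nbd"                             # verify phase: any mismatch fails the test
    done
    rm "$tmp_file"

Because the nbd devices are backed by the two Malloc bdevs, a clean cmp proves the whole path: RPC, nbd kernel driver, and the SPDK bdev layer underneath.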
00:05:21.930 05:21:21 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:21.930 05:21:21 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:21.930 05:21:21 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:21.930 05:21:21 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:21.930 05:21:21 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:21.930 Malloc0 00:05:21.930 05:21:21 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:22.189 Malloc1 00:05:22.189 05:21:21 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:22.189 05:21:22 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:22.189 05:21:22 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:22.189 05:21:22 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:22.189 05:21:22 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:22.189 05:21:22 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:22.189 05:21:22 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:22.189 05:21:22 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:22.189 05:21:22 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:22.189 05:21:22 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:22.189 05:21:22 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:22.189 05:21:22 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:22.189 05:21:22 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:22.189 05:21:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:22.189 05:21:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:22.189 05:21:22 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:22.189 /dev/nbd0 00:05:22.448 05:21:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:22.448 05:21:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:22.448 05:21:22 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:22.448 05:21:22 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:22.448 05:21:22 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:22.448 05:21:22 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:22.448 05:21:22 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:22.448 05:21:22 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:22.448 05:21:22 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:22.448 05:21:22 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:22.448 05:21:22 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:22.448 1+0 records in 00:05:22.448 1+0 records out 00:05:22.448 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000183599 s, 22.3 MB/s 00:05:22.448 05:21:22 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:22.448 05:21:22 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:22.449 05:21:22 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:22.449 05:21:22 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:22.449 05:21:22 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:22.449 05:21:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:22.449 05:21:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:22.449 05:21:22 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:22.449 /dev/nbd1 00:05:22.449 05:21:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:22.449 05:21:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:22.449 05:21:22 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:22.449 05:21:22 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:22.449 05:21:22 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:22.449 05:21:22 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:22.449 05:21:22 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:22.449 05:21:22 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:22.449 05:21:22 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:22.449 05:21:22 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:22.449 05:21:22 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:22.449 1+0 records in 00:05:22.449 1+0 records out 00:05:22.449 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000197228 s, 20.8 MB/s 00:05:22.708 05:21:22 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:22.708 05:21:22 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:22.708 05:21:22 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:22.708 05:21:22 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:22.708 05:21:22 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:22.708 05:21:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:22.708 05:21:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:22.708 05:21:22 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:22.708 05:21:22 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:22.708 05:21:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:22.708 05:21:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:22.708 { 00:05:22.708 "nbd_device": "/dev/nbd0", 00:05:22.708 "bdev_name": "Malloc0" 00:05:22.708 }, 00:05:22.708 { 00:05:22.708 "nbd_device": "/dev/nbd1", 00:05:22.708 "bdev_name": "Malloc1" 00:05:22.708 } 00:05:22.708 ]' 00:05:22.708 05:21:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:22.708 { 00:05:22.708 "nbd_device": "/dev/nbd0", 00:05:22.708 "bdev_name": "Malloc0" 00:05:22.708 }, 00:05:22.708 { 00:05:22.708 "nbd_device": "/dev/nbd1", 00:05:22.708 "bdev_name": "Malloc1" 00:05:22.708 } 00:05:22.708 ]' 00:05:22.708 05:21:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:22.708 05:21:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:22.708 /dev/nbd1' 00:05:22.708 05:21:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:22.708 /dev/nbd1' 00:05:22.708 05:21:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:22.708 05:21:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:22.708 05:21:22 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:22.708 05:21:22 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:22.708 05:21:22 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:22.708 05:21:22 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:22.708 05:21:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:22.708 05:21:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:22.708 05:21:22 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:22.708 05:21:22 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:22.708 05:21:22 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:22.708 05:21:22 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:22.967 256+0 records in 00:05:22.967 256+0 records out 00:05:22.967 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0101458 s, 103 MB/s 00:05:22.967 05:21:22 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:22.967 05:21:22 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:22.967 256+0 records in 00:05:22.967 256+0 records out 00:05:22.967 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.013648 s, 76.8 MB/s 00:05:22.967 05:21:22 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:22.967 05:21:22 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:22.967 256+0 records in 00:05:22.967 256+0 records out 00:05:22.967 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0150687 s, 69.6 MB/s 00:05:22.967 05:21:22 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:22.967 05:21:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:22.967 05:21:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:22.967 05:21:22 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:22.967 05:21:22 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:22.967 05:21:22 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:22.967 05:21:22 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:22.967 05:21:22 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:22.967 05:21:22 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:22.968 05:21:22 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:22.968 05:21:22 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:22.968 05:21:22 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:22.968 05:21:22 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:22.968 05:21:22 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:22.968 05:21:22 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:22.968 05:21:22 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:22.968 05:21:22 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:22.968 05:21:22 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:22.968 05:21:22 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:23.227 05:21:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:23.227 05:21:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:23.227 05:21:22 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:23.227 05:21:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:23.227 05:21:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:23.227 05:21:22 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:23.227 05:21:23 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:23.227 05:21:23 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:23.227 05:21:23 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:23.227 05:21:23 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:23.227 05:21:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:23.227 05:21:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:23.227 05:21:23 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:23.227 05:21:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:23.227 05:21:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:23.227 05:21:23 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:23.227 05:21:23 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:23.227 05:21:23 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:23.227 05:21:23 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:23.227 05:21:23 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:23.227 05:21:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:23.486 05:21:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:23.486 05:21:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:23.486 05:21:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:23.486 05:21:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:23.486 05:21:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:23.486 05:21:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:23.486 05:21:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:23.486 05:21:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:23.486 05:21:23 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:23.486 05:21:23 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:23.486 05:21:23 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:23.486 05:21:23 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:23.486 05:21:23 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:23.745 05:21:23 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:24.004 [2024-12-13 05:21:23.823642] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:24.004 [2024-12-13 05:21:23.843353] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:24.004 [2024-12-13 05:21:23.843354] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.004 [2024-12-13 05:21:23.884461] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:24.004 [2024-12-13 05:21:23.884515] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:27.291 05:21:26 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:27.291 05:21:26 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:27.291 spdk_app_start Round 2 00:05:27.291 05:21:26 event.app_repeat -- event/event.sh@25 -- # waitforlisten 113051 /var/tmp/spdk-nbd.sock 00:05:27.291 05:21:26 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 113051 ']' 00:05:27.291 05:21:26 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:27.291 05:21:26 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:27.291 05:21:26 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:27.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
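Before and after each round the harness asserts the attached nbd count: 2 while the disks are up, 0 once they are stopped. The trace shows the count being derived from the nbd_get_disks RPC; roughly:

    nbd_get_count() {
        local rpc_server=$1 disks
        disks=$(scripts/rpc.py -s "$rpc_server" nbd_get_disks | jq -r '.[] | .nbd_device')
        # grep -c exits non-zero when nothing matches, so force success for the empty list
        echo "$disks" | grep -c /dev/nbd || true
    }

That is why the post-stop trace shows nbd_disks_json='[]' followed by a bare true: grep prints 0 and fails, and the || true keeps the helper's exit status clean.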
00:05:27.291 05:21:26 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:27.291 05:21:26 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:27.291 05:21:26 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:27.291 05:21:26 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:27.291 05:21:26 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:27.291 Malloc0 00:05:27.291 05:21:27 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:27.291 Malloc1 00:05:27.291 05:21:27 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:27.291 05:21:27 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:27.291 05:21:27 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:27.292 05:21:27 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:27.292 05:21:27 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:27.292 05:21:27 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:27.292 05:21:27 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:27.292 05:21:27 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:27.292 05:21:27 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:27.292 05:21:27 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:27.292 05:21:27 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:27.292 05:21:27 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:27.292 05:21:27 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:27.292 05:21:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:27.292 05:21:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:27.292 05:21:27 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:27.550 /dev/nbd0 00:05:27.550 05:21:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:27.550 05:21:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:27.550 05:21:27 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:27.550 05:21:27 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:27.550 05:21:27 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:27.551 05:21:27 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:27.551 05:21:27 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:27.551 05:21:27 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:27.551 05:21:27 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:27.551 05:21:27 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:27.551 05:21:27 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:27.551 1+0 records in 00:05:27.551 1+0 records out 00:05:27.551 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000295041 s, 13.9 MB/s 00:05:27.551 05:21:27 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:27.551 05:21:27 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:27.551 05:21:27 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:27.551 05:21:27 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:27.551 05:21:27 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:27.551 05:21:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:27.551 05:21:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:27.551 05:21:27 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:27.810 /dev/nbd1 00:05:27.810 05:21:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:27.810 05:21:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:27.810 05:21:27 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:27.810 05:21:27 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:27.810 05:21:27 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:27.810 05:21:27 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:27.810 05:21:27 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:27.810 05:21:27 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:27.810 05:21:27 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:27.810 05:21:27 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:27.810 05:21:27 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:27.810 1+0 records in 00:05:27.810 1+0 records out 00:05:27.810 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000227173 s, 18.0 MB/s 00:05:27.810 05:21:27 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:27.810 05:21:27 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:27.810 05:21:27 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:27.810 05:21:27 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:27.810 05:21:27 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:27.810 05:21:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:27.810 05:21:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:27.810 05:21:27 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:27.810 05:21:27 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:27.810 05:21:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:28.069 05:21:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:28.069 { 00:05:28.069 "nbd_device": "/dev/nbd0", 00:05:28.069 "bdev_name": "Malloc0" 00:05:28.069 }, 00:05:28.069 { 00:05:28.069 "nbd_device": "/dev/nbd1", 00:05:28.069 "bdev_name": "Malloc1" 00:05:28.069 } 00:05:28.069 ]' 00:05:28.069 05:21:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:28.069 { 00:05:28.069 "nbd_device": "/dev/nbd0", 00:05:28.069 "bdev_name": "Malloc0" 00:05:28.069 }, 00:05:28.069 { 00:05:28.069 "nbd_device": "/dev/nbd1", 00:05:28.069 "bdev_name": "Malloc1" 00:05:28.069 } 00:05:28.069 ]' 00:05:28.069 05:21:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:28.069 05:21:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:28.069 /dev/nbd1' 00:05:28.069 05:21:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:28.069 /dev/nbd1' 00:05:28.069 05:21:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:28.069 05:21:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:28.069 05:21:28 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:28.069 05:21:28 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:28.069 05:21:28 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:28.069 05:21:28 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:28.069 05:21:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:28.069 05:21:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:28.069 05:21:28 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:28.069 05:21:28 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:28.069 05:21:28 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:28.069 05:21:28 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:28.069 256+0 records in 00:05:28.069 256+0 records out 00:05:28.069 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0107982 s, 97.1 MB/s 00:05:28.069 05:21:28 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:28.069 05:21:28 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:28.069 256+0 records in 00:05:28.069 256+0 records out 00:05:28.069 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0139587 s, 75.1 MB/s 00:05:28.069 05:21:28 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:28.069 05:21:28 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:28.069 256+0 records in 00:05:28.069 256+0 records out 00:05:28.069 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0144672 s, 72.5 MB/s 00:05:28.069 05:21:28 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:28.069 05:21:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:28.069 05:21:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:28.069 05:21:28 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:28.069 05:21:28 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:28.069 05:21:28 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:28.069 05:21:28 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:28.069 05:21:28 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:28.069 05:21:28 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:28.069 05:21:28 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:28.069 05:21:28 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:28.069 05:21:28 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:28.069 05:21:28 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:28.069 05:21:28 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:28.069 05:21:28 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:28.328 05:21:28 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:28.328 05:21:28 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:28.328 05:21:28 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:28.328 05:21:28 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:28.328 05:21:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:28.328 05:21:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:28.328 05:21:28 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:28.328 05:21:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:28.328 05:21:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:28.328 05:21:28 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:28.328 05:21:28 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:28.328 05:21:28 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:28.328 05:21:28 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:28.328 05:21:28 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:28.587 05:21:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:28.587 05:21:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:28.587 05:21:28 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:28.587 05:21:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:28.587 05:21:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:28.587 05:21:28 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:28.587 05:21:28 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:28.587 05:21:28 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:28.587 05:21:28 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:28.587 05:21:28 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:28.587 05:21:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:28.846 05:21:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:28.846 05:21:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:28.846 05:21:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:28.846 05:21:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:28.846 05:21:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:28.846 05:21:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:28.846 05:21:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:28.846 05:21:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:28.846 05:21:28 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:28.846 05:21:28 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:28.846 05:21:28 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:28.846 05:21:28 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:28.846 05:21:28 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:29.105 05:21:28 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:29.105 [2024-12-13 05:21:29.119387] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:29.364 [2024-12-13 05:21:29.139855] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:29.364 [2024-12-13 05:21:29.139856] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.364 [2024-12-13 05:21:29.180319] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:29.364 [2024-12-13 05:21:29.180356] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:32.651 05:21:31 event.app_repeat -- event/event.sh@38 -- # waitforlisten 113051 /var/tmp/spdk-nbd.sock 00:05:32.651 05:21:31 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 113051 ']' 00:05:32.651 05:21:31 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:32.651 05:21:31 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:32.651 05:21:31 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:32.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
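Each round ends the same way: the harness asks the app to kill its current instance over RPC, sleeps, and the -t 4 flag makes app_repeat bring a fresh instance up for the next round. The driver loop, condensed from the trace (bdev creation and the nbd verify elided):

    for i in {0..2}; do
        echo "spdk_app_start Round $i"
        waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock
        # ... create Malloc0/Malloc1, start nbd disks, write + verify, stop nbd disks ...
        scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
        sleep 3      # give the app time to cycle into the next round
    done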
00:05:32.651 05:21:31 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:32.651 05:21:31 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:32.651 05:21:32 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:32.651 05:21:32 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:32.651 05:21:32 event.app_repeat -- event/event.sh@39 -- # killprocess 113051 00:05:32.651 05:21:32 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 113051 ']' 00:05:32.651 05:21:32 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 113051 00:05:32.651 05:21:32 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:05:32.651 05:21:32 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:32.651 05:21:32 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 113051 00:05:32.651 05:21:32 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:32.651 05:21:32 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:32.651 05:21:32 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 113051' 00:05:32.651 killing process with pid 113051 00:05:32.651 05:21:32 event.app_repeat -- common/autotest_common.sh@973 -- # kill 113051 00:05:32.651 05:21:32 event.app_repeat -- common/autotest_common.sh@978 -- # wait 113051 00:05:32.651 spdk_app_start is called in Round 0. 00:05:32.651 Shutdown signal received, stop current app iteration 00:05:32.651 Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 reinitialization... 00:05:32.651 spdk_app_start is called in Round 1. 00:05:32.651 Shutdown signal received, stop current app iteration 00:05:32.651 Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 reinitialization... 00:05:32.651 spdk_app_start is called in Round 2. 00:05:32.651 Shutdown signal received, stop current app iteration 00:05:32.651 Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 reinitialization... 00:05:32.651 spdk_app_start is called in Round 3. 
00:05:32.651 Shutdown signal received, stop current app iteration 00:05:32.651 05:21:32 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:32.651 05:21:32 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:32.651 00:05:32.651 real 0m16.348s 00:05:32.651 user 0m36.009s 00:05:32.651 sys 0m2.583s 00:05:32.651 05:21:32 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:32.651 05:21:32 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:32.651 ************************************ 00:05:32.651 END TEST app_repeat 00:05:32.651 ************************************ 00:05:32.651 05:21:32 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:32.651 05:21:32 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:32.651 05:21:32 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:32.651 05:21:32 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:32.652 05:21:32 event -- common/autotest_common.sh@10 -- # set +x 00:05:32.652 ************************************ 00:05:32.652 START TEST cpu_locks 00:05:32.652 ************************************ 00:05:32.652 05:21:32 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:32.652 * Looking for test storage... 00:05:32.652 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:32.652 05:21:32 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:32.652 05:21:32 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:05:32.652 05:21:32 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:32.652 05:21:32 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:32.652 05:21:32 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:32.652 05:21:32 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:32.652 05:21:32 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:32.652 05:21:32 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:32.652 05:21:32 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:32.652 05:21:32 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:32.652 05:21:32 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:32.652 05:21:32 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:32.652 05:21:32 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:32.652 05:21:32 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:32.652 05:21:32 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:32.652 05:21:32 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:32.652 05:21:32 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:32.652 05:21:32 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:32.652 05:21:32 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:32.652 05:21:32 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:32.652 05:21:32 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:32.652 05:21:32 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:32.652 05:21:32 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:32.652 05:21:32 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:32.652 05:21:32 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:32.652 05:21:32 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:32.652 05:21:32 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:32.652 05:21:32 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:32.652 05:21:32 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:32.652 05:21:32 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:32.652 05:21:32 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:32.652 05:21:32 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:32.652 05:21:32 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:32.652 05:21:32 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:32.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.652 --rc genhtml_branch_coverage=1 00:05:32.652 --rc genhtml_function_coverage=1 00:05:32.652 --rc genhtml_legend=1 00:05:32.652 --rc geninfo_all_blocks=1 00:05:32.652 --rc geninfo_unexecuted_blocks=1 00:05:32.652 00:05:32.652 ' 00:05:32.652 05:21:32 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:32.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.652 --rc genhtml_branch_coverage=1 00:05:32.652 --rc genhtml_function_coverage=1 00:05:32.652 --rc genhtml_legend=1 00:05:32.652 --rc geninfo_all_blocks=1 00:05:32.652 --rc geninfo_unexecuted_blocks=1 00:05:32.652 00:05:32.652 ' 00:05:32.652 05:21:32 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:32.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.652 --rc genhtml_branch_coverage=1 00:05:32.652 --rc genhtml_function_coverage=1 00:05:32.652 --rc genhtml_legend=1 00:05:32.652 --rc geninfo_all_blocks=1 00:05:32.652 --rc geninfo_unexecuted_blocks=1 00:05:32.652 00:05:32.652 ' 00:05:32.652 05:21:32 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:32.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.652 --rc genhtml_branch_coverage=1 00:05:32.652 --rc genhtml_function_coverage=1 00:05:32.652 --rc genhtml_legend=1 00:05:32.652 --rc geninfo_all_blocks=1 00:05:32.652 --rc geninfo_unexecuted_blocks=1 00:05:32.652 00:05:32.652 ' 00:05:32.652 05:21:32 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:32.652 05:21:32 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:32.652 05:21:32 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:32.652 05:21:32 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:32.652 05:21:32 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:32.652 05:21:32 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:32.652 05:21:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:32.652 ************************************ 
00:05:32.652 START TEST default_locks 00:05:32.652 ************************************ 00:05:32.652 05:21:32 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:05:32.652 05:21:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=116150 00:05:32.652 05:21:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 116150 00:05:32.652 05:21:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:32.652 05:21:32 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 116150 ']' 00:05:32.652 05:21:32 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:32.652 05:21:32 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:32.652 05:21:32 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:32.652 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:32.652 05:21:32 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:32.652 05:21:32 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:32.911 [2024-12-13 05:21:32.714029] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:05:32.911 [2024-12-13 05:21:32.714070] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116150 ] 00:05:32.911 [2024-12-13 05:21:32.786072] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.911 [2024-12-13 05:21:32.808627] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.171 05:21:33 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:33.171 05:21:33 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:05:33.171 05:21:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 116150 00:05:33.171 05:21:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 116150 00:05:33.171 05:21:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:33.429 lslocks: write error 00:05:33.429 05:21:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 116150 00:05:33.429 05:21:33 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 116150 ']' 00:05:33.429 05:21:33 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 116150 00:05:33.429 05:21:33 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:05:33.429 05:21:33 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:33.429 05:21:33 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 116150 00:05:33.429 05:21:33 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:33.429 05:21:33 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:33.429 05:21:33 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 116150' 
00:05:33.429 killing process with pid 116150 00:05:33.429 05:21:33 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 116150 00:05:33.429 05:21:33 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 116150 00:05:33.687 05:21:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 116150 00:05:33.687 05:21:33 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:05:33.687 05:21:33 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 116150 00:05:33.687 05:21:33 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:33.687 05:21:33 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:33.687 05:21:33 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:33.687 05:21:33 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:33.687 05:21:33 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 116150 00:05:33.687 05:21:33 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 116150 ']' 00:05:33.687 05:21:33 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:33.687 05:21:33 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:33.687 05:21:33 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:33.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:33.687 05:21:33 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:33.687 05:21:33 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:33.687 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (116150) - No such process 00:05:33.687 ERROR: process (pid: 116150) is no longer running 00:05:33.687 05:21:33 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:33.687 05:21:33 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:05:33.687 05:21:33 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:05:33.687 05:21:33 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:33.687 05:21:33 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:33.687 05:21:33 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:33.687 05:21:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:33.687 05:21:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:33.687 05:21:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:33.687 05:21:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:33.687 00:05:33.687 real 0m1.006s 00:05:33.687 user 0m0.938s 00:05:33.687 sys 0m0.499s 00:05:33.687 05:21:33 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:33.687 05:21:33 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:33.687 ************************************ 00:05:33.687 END TEST default_locks 00:05:33.687 ************************************ 00:05:33.946 05:21:33 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:33.946 05:21:33 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:33.946 05:21:33 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:33.946 05:21:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:33.946 ************************************ 00:05:33.946 START TEST default_locks_via_rpc 00:05:33.946 ************************************ 00:05:33.946 05:21:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:05:33.946 05:21:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=116358 00:05:33.946 05:21:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 116358 00:05:33.946 05:21:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:33.946 05:21:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 116358 ']' 00:05:33.946 05:21:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:33.946 05:21:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:33.946 05:21:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:33.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
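The default_locks pass closed out above reduces to a single assertion: once spdk_tgt is up with -m 0x1, a POSIX file lock named spdk_cpu_lock must be visible for its pid, and after killing the target a later waitforlisten on that pid must fail. A minimal sketch of the lock check, assuming util-linux lslocks and reusing the pid from this run:

  pid=116150                                    # pid of the spdk_tgt traced above
  if lslocks -p "$pid" | grep -q spdk_cpu_lock; then
      echo "core lock held by $pid"
  fi

The "lslocks: write error" lines in the trace are a side effect of grep -q closing the pipe early, not a failure.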
00:05:33.946 05:21:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:33.946 05:21:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:33.946 [2024-12-13 05:21:33.787718] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:05:33.946 [2024-12-13 05:21:33.787756] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116358 ] 00:05:33.946 [2024-12-13 05:21:33.862490] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.946 [2024-12-13 05:21:33.885167] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.204 05:21:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:34.204 05:21:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:34.204 05:21:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:34.204 05:21:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:34.204 05:21:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.204 05:21:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:34.204 05:21:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:34.204 05:21:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:34.204 05:21:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:34.204 05:21:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:34.204 05:21:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:34.204 05:21:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:34.204 05:21:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.204 05:21:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:34.205 05:21:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 116358 00:05:34.205 05:21:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 116358 00:05:34.205 05:21:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:34.772 05:21:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 116358 00:05:34.772 05:21:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 116358 ']' 00:05:34.772 05:21:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 116358 00:05:34.772 05:21:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:05:34.772 05:21:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:34.772 05:21:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 116358 00:05:34.772 05:21:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:34.772 05:21:34 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:34.772 05:21:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 116358' 00:05:34.772 killing process with pid 116358 00:05:34.772 05:21:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 116358 00:05:34.772 05:21:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 116358 00:05:35.032 00:05:35.032 real 0m1.171s 00:05:35.032 user 0m1.122s 00:05:35.032 sys 0m0.564s 00:05:35.032 05:21:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:35.032 05:21:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:35.032 ************************************ 00:05:35.032 END TEST default_locks_via_rpc 00:05:35.032 ************************************ 00:05:35.032 05:21:34 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:35.032 05:21:34 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:35.032 05:21:34 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:35.032 05:21:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:35.032 ************************************ 00:05:35.032 START TEST non_locking_app_on_locked_coremask 00:05:35.032 ************************************ 00:05:35.032 05:21:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:05:35.032 05:21:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=116539 00:05:35.032 05:21:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 116539 /var/tmp/spdk.sock 00:05:35.032 05:21:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:35.032 05:21:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 116539 ']' 00:05:35.032 05:21:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:35.032 05:21:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:35.032 05:21:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:35.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:35.032 05:21:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:35.032 05:21:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:35.032 [2024-12-13 05:21:35.032294] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
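default_locks_via_rpc, which ends above, exercises the same lock files but toggles them at runtime over the RPC socket rather than at boot. A sketch of that round-trip, assuming this workspace's scripts/rpc.py and the default /var/tmp/spdk.sock:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  "$rpc" framework_disable_cpumask_locks      # per-core lock files are released
  ls /var/tmp/spdk_cpu_lock_* 2>/dev/null     # expect no matches while disabled
  "$rpc" framework_enable_cpumask_locks       # locks are re-acquired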
00:05:35.032 [2024-12-13 05:21:35.032332] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116539 ] 00:05:35.291 [2024-12-13 05:21:35.107038] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.291 [2024-12-13 05:21:35.129661] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.549 05:21:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:35.549 05:21:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:35.549 05:21:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=116662 00:05:35.549 05:21:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 116662 /var/tmp/spdk2.sock 00:05:35.549 05:21:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:35.549 05:21:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 116662 ']' 00:05:35.549 05:21:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:35.549 05:21:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:35.549 05:21:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:35.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:35.549 05:21:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:35.549 05:21:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:35.549 [2024-12-13 05:21:35.385414] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:05:35.549 [2024-12-13 05:21:35.385472] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116662 ] 00:05:35.549 [2024-12-13 05:21:35.470993] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
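non_locking_app_on_locked_coremask, starting up above, needs two targets sharing one core: the second boots with --disable-cpumask-locks so it never contends for the core-0 lock the first one holds. A sketch of that launch sequence, using the binary and socket paths from this job:

  spdk_tgt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
  "$spdk_tgt" -m 0x1 &                                                # locks core 0
  "$spdk_tgt" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock & # skips the lock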
00:05:35.549 [2024-12-13 05:21:35.471013] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.549 [2024-12-13 05:21:35.513282] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.485 05:21:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:36.485 05:21:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:36.485 05:21:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 116539 00:05:36.485 05:21:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 116539 00:05:36.485 05:21:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:36.744 lslocks: write error 00:05:36.744 05:21:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 116539 00:05:36.744 05:21:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 116539 ']' 00:05:36.744 05:21:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 116539 00:05:36.744 05:21:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:36.744 05:21:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:36.744 05:21:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 116539 00:05:36.744 05:21:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:36.744 05:21:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:36.744 05:21:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 116539' 00:05:36.744 killing process with pid 116539 00:05:36.744 05:21:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 116539 00:05:36.744 05:21:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 116539 00:05:37.312 05:21:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 116662 00:05:37.312 05:21:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 116662 ']' 00:05:37.312 05:21:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 116662 00:05:37.312 05:21:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:37.312 05:21:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:37.312 05:21:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 116662 00:05:37.312 05:21:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:37.312 05:21:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:37.312 05:21:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 116662' 00:05:37.312 killing 
process with pid 116662 00:05:37.312 05:21:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 116662 00:05:37.312 05:21:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 116662 00:05:37.571 00:05:37.571 real 0m2.528s 00:05:37.571 user 0m2.651s 00:05:37.572 sys 0m0.854s 00:05:37.572 05:21:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:37.572 05:21:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:37.572 ************************************ 00:05:37.572 END TEST non_locking_app_on_locked_coremask 00:05:37.572 ************************************ 00:05:37.572 05:21:37 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:37.572 05:21:37 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:37.572 05:21:37 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:37.572 05:21:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:37.572 ************************************ 00:05:37.572 START TEST locking_app_on_unlocked_coremask 00:05:37.572 ************************************ 00:05:37.572 05:21:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:05:37.572 05:21:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=116956 00:05:37.572 05:21:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 116956 /var/tmp/spdk.sock 00:05:37.572 05:21:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:37.572 05:21:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 116956 ']' 00:05:37.572 05:21:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:37.572 05:21:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:37.572 05:21:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:37.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:37.572 05:21:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:37.572 05:21:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:37.831 [2024-12-13 05:21:37.630478] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:05:37.831 [2024-12-13 05:21:37.630518] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116956 ] 00:05:37.831 [2024-12-13 05:21:37.705728] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
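Both targets are torn down above through the killprocess helper; its shape can be read back out of the xtrace lines. A condensed sketch (the sudo and uname branches visible in the trace are dropped):

  killprocess() {
      local pid=$1
      kill -0 "$pid" || return                 # bail out if already gone
      ps --no-headers -o comm= "$pid"          # an SPDK target reports reactor_0
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"                              # reap it (works for shell children)
  }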
00:05:37.831 [2024-12-13 05:21:37.705756] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.831 [2024-12-13 05:21:37.728581] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.090 05:21:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:38.090 05:21:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:38.090 05:21:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=117141 00:05:38.090 05:21:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 117141 /var/tmp/spdk2.sock 00:05:38.090 05:21:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:38.090 05:21:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 117141 ']' 00:05:38.090 05:21:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:38.090 05:21:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:38.090 05:21:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:38.090 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:38.090 05:21:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:38.090 05:21:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:38.090 [2024-12-13 05:21:37.983858] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:05:38.090 [2024-12-13 05:21:37.983905] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117141 ] 00:05:38.090 [2024-12-13 05:21:38.071132] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.349 [2024-12-13 05:21:38.117646] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.917 05:21:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:38.917 05:21:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:38.917 05:21:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 117141 00:05:38.917 05:21:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 117141 00:05:38.917 05:21:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:39.484 lslocks: write error 00:05:39.484 05:21:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 116956 00:05:39.485 05:21:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 116956 ']' 00:05:39.485 05:21:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 116956 00:05:39.485 05:21:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:39.485 05:21:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:39.485 05:21:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 116956 00:05:39.485 05:21:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:39.485 05:21:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:39.485 05:21:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 116956' 00:05:39.485 killing process with pid 116956 00:05:39.485 05:21:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 116956 00:05:39.485 05:21:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 116956 00:05:40.053 05:21:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 117141 00:05:40.053 05:21:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 117141 ']' 00:05:40.053 05:21:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 117141 00:05:40.053 05:21:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:40.053 05:21:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:40.053 05:21:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 117141 00:05:40.053 05:21:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:40.053 05:21:40 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:40.053 05:21:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 117141' 00:05:40.053 killing process with pid 117141 00:05:40.053 05:21:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 117141 00:05:40.053 05:21:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 117141 00:05:40.312 00:05:40.312 real 0m2.735s 00:05:40.312 user 0m2.850s 00:05:40.312 sys 0m0.946s 00:05:40.312 05:21:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:40.312 05:21:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:40.312 ************************************ 00:05:40.312 END TEST locking_app_on_unlocked_coremask 00:05:40.312 ************************************ 00:05:40.571 05:21:40 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:40.571 05:21:40 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:40.571 05:21:40 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:40.571 05:21:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:40.571 ************************************ 00:05:40.571 START TEST locking_app_on_locked_coremask 00:05:40.571 ************************************ 00:05:40.571 05:21:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:05:40.571 05:21:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=117495 00:05:40.571 05:21:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 117495 /var/tmp/spdk.sock 00:05:40.571 05:21:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:40.571 05:21:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 117495 ']' 00:05:40.571 05:21:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:40.571 05:21:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:40.571 05:21:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:40.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:40.571 05:21:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:40.571 05:21:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:40.571 [2024-12-13 05:21:40.439048] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:05:40.571 [2024-12-13 05:21:40.439094] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117495 ] 00:05:40.571 [2024-12-13 05:21:40.516626] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.571 [2024-12-13 05:21:40.538933] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.830 05:21:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:40.830 05:21:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:40.831 05:21:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=117629 00:05:40.831 05:21:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 117629 /var/tmp/spdk2.sock 00:05:40.831 05:21:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:40.831 05:21:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:40.831 05:21:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 117629 /var/tmp/spdk2.sock 00:05:40.831 05:21:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:40.831 05:21:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:40.831 05:21:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:40.831 05:21:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:40.831 05:21:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 117629 /var/tmp/spdk2.sock 00:05:40.831 05:21:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 117629 ']' 00:05:40.831 05:21:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:40.831 05:21:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:40.831 05:21:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:40.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:40.831 05:21:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:40.831 05:21:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:40.831 [2024-12-13 05:21:40.810490] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:05:40.831 [2024-12-13 05:21:40.810549] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117629 ] 00:05:41.089 [2024-12-13 05:21:40.907098] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 117495 has claimed it. 00:05:41.090 [2024-12-13 05:21:40.907134] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:41.657 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (117629) - No such process 00:05:41.657 ERROR: process (pid: 117629) is no longer running 00:05:41.657 05:21:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:41.657 05:21:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:41.657 05:21:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:41.657 05:21:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:41.657 05:21:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:41.657 05:21:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:41.657 05:21:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 117495 00:05:41.657 05:21:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 117495 00:05:41.657 05:21:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:41.916 lslocks: write error 00:05:41.916 05:21:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 117495 00:05:41.916 05:21:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 117495 ']' 00:05:41.916 05:21:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 117495 00:05:41.916 05:21:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:41.916 05:21:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:41.916 05:21:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 117495 00:05:41.916 05:21:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:41.916 05:21:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:41.916 05:21:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 117495' 00:05:41.916 killing process with pid 117495 00:05:41.916 05:21:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 117495 00:05:41.916 05:21:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 117495 00:05:42.484 00:05:42.484 real 0m1.829s 00:05:42.484 user 0m1.949s 00:05:42.484 sys 0m0.645s 00:05:42.484 05:21:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:42.484 
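The "Cannot create lock on core 0" error above is the point of locking_app_on_locked_coremask: pid 117629 must not come up while 117495 holds the core, so the attempt is wrapped in the NOT helper and a nonzero exit counts as a pass. A condensed sketch of that pattern (the real helper in autotest_common.sh also classifies the exit status):

  NOT() { "$@" && return 1 || return 0; }      # succeed only if the command fails
  NOT waitforlisten 117629 /var/tmp/spdk2.sock && echo "rejected as expected"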
05:21:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:42.484 ************************************ 00:05:42.484 END TEST locking_app_on_locked_coremask 00:05:42.484 ************************************ 00:05:42.484 05:21:42 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:42.484 05:21:42 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:42.484 05:21:42 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:42.484 05:21:42 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:42.484 ************************************ 00:05:42.484 START TEST locking_overlapped_coremask 00:05:42.484 ************************************ 00:05:42.484 05:21:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:05:42.484 05:21:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=117886 00:05:42.484 05:21:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 117886 /var/tmp/spdk.sock 00:05:42.484 05:21:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:42.484 05:21:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 117886 ']' 00:05:42.484 05:21:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:42.484 05:21:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:42.484 05:21:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:42.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:42.484 05:21:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:42.484 05:21:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:42.484 [2024-12-13 05:21:42.337872] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:05:42.484 [2024-12-13 05:21:42.337914] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117886 ] 00:05:42.484 [2024-12-13 05:21:42.410143] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:42.484 [2024-12-13 05:21:42.435295] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:42.484 [2024-12-13 05:21:42.435404] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.484 [2024-12-13 05:21:42.435405] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:05:42.743 05:21:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:42.743 05:21:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:42.743 05:21:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:42.743 05:21:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=117900 00:05:42.743 05:21:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 117900 /var/tmp/spdk2.sock 00:05:42.743 05:21:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:42.743 05:21:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 117900 /var/tmp/spdk2.sock 00:05:42.743 05:21:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:42.743 05:21:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:42.743 05:21:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:42.743 05:21:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:42.743 05:21:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 117900 /var/tmp/spdk2.sock 00:05:42.743 05:21:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 117900 ']' 00:05:42.743 05:21:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:42.743 05:21:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:42.743 05:21:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:42.743 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:42.743 05:21:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:42.743 05:21:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:42.743 [2024-12-13 05:21:42.671992] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:05:42.743 [2024-12-13 05:21:42.672032] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117900 ] 00:05:43.003 [2024-12-13 05:21:42.762161] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 117886 has claimed it. 00:05:43.003 [2024-12-13 05:21:42.762198] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:43.571 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (117900) - No such process 00:05:43.571 ERROR: process (pid: 117900) is no longer running 00:05:43.571 05:21:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:43.571 05:21:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:43.571 05:21:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:43.571 05:21:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:43.571 05:21:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:43.571 05:21:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:43.571 05:21:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:43.571 05:21:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:43.571 05:21:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:43.571 05:21:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:43.571 05:21:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 117886 00:05:43.571 05:21:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 117886 ']' 00:05:43.571 05:21:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 117886 00:05:43.571 05:21:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:05:43.571 05:21:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:43.571 05:21:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 117886 00:05:43.571 05:21:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:43.571 05:21:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:43.571 05:21:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 117886' 00:05:43.571 killing process with pid 117886 00:05:43.571 05:21:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 117886 00:05:43.571 05:21:43 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 117886 00:05:43.830 00:05:43.830 real 0m1.379s 00:05:43.830 user 0m3.833s 00:05:43.830 sys 0m0.377s 00:05:43.830 05:21:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:43.830 05:21:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:43.830 ************************************ 00:05:43.830 END TEST locking_overlapped_coremask 00:05:43.830 ************************************ 00:05:43.830 05:21:43 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:43.830 05:21:43 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:43.830 05:21:43 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:43.830 05:21:43 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:43.830 ************************************ 00:05:43.830 START TEST locking_overlapped_coremask_via_rpc 00:05:43.830 ************************************ 00:05:43.830 05:21:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:05:43.830 05:21:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=118150 00:05:43.830 05:21:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 118150 /var/tmp/spdk.sock 00:05:43.830 05:21:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:43.830 05:21:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 118150 ']' 00:05:43.830 05:21:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:43.830 05:21:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:43.830 05:21:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:43.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:43.830 05:21:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:43.830 05:21:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:43.830 [2024-12-13 05:21:43.789344] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:05:43.830 [2024-12-13 05:21:43.789387] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118150 ] 00:05:44.089 [2024-12-13 05:21:43.862114] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
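The check_remaining_locks step traced above asserts that a target started with -m 0x7 holds exactly the lock files for cores 0-2, no more and no fewer. A sketch of the comparison, using the same globs as the trace:

  locks=(/var/tmp/spdk_cpu_lock_*)
  locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
  [[ "${locks[*]}" == "${locks_expected[*]}" ]] && echo "lock files match mask 0x7"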
00:05:44.089 [2024-12-13 05:21:43.862138] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:44.089 [2024-12-13 05:21:43.884504] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:44.089 [2024-12-13 05:21:43.884609] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.089 [2024-12-13 05:21:43.884610] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:05:44.089 05:21:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:44.089 05:21:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:44.089 05:21:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=118155 00:05:44.089 05:21:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 118155 /var/tmp/spdk2.sock 00:05:44.089 05:21:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:44.089 05:21:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 118155 ']' 00:05:44.089 05:21:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:44.089 05:21:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:44.089 05:21:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:44.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:44.089 05:21:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:44.089 05:21:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:44.348 [2024-12-13 05:21:44.137749] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:05:44.348 [2024-12-13 05:21:44.137792] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118155 ] 00:05:44.348 [2024-12-13 05:21:44.228413] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:44.348 [2024-12-13 05:21:44.228440] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:44.348 [2024-12-13 05:21:44.277283] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:05:44.348 [2024-12-13 05:21:44.277396] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:05:44.348 [2024-12-13 05:21:44.277397] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:05:45.290 05:21:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:45.290 05:21:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:45.290 05:21:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:45.290 05:21:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:45.290 05:21:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.290 05:21:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:45.290 05:21:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:45.290 05:21:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:45.290 05:21:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:45.290 05:21:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:45.290 05:21:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:45.290 05:21:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:45.290 05:21:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:45.290 05:21:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:45.290 05:21:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:45.290 05:21:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.290 [2024-12-13 05:21:44.983520] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 118150 has claimed it. 
00:05:45.290 request: 00:05:45.290 { 00:05:45.290 "method": "framework_enable_cpumask_locks", 00:05:45.290 "req_id": 1 00:05:45.290 } 00:05:45.290 Got JSON-RPC error response 00:05:45.290 response: 00:05:45.290 { 00:05:45.290 "code": -32603, 00:05:45.290 "message": "Failed to claim CPU core: 2" 00:05:45.290 } 00:05:45.291 05:21:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:45.291 05:21:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:45.291 05:21:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:45.291 05:21:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:45.291 05:21:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:45.291 05:21:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 118150 /var/tmp/spdk.sock 00:05:45.291 05:21:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 118150 ']' 00:05:45.291 05:21:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:45.291 05:21:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:45.291 05:21:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:45.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:45.291 05:21:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:45.291 05:21:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.291 05:21:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:45.291 05:21:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:45.291 05:21:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 118155 /var/tmp/spdk2.sock 00:05:45.291 05:21:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 118155 ']' 00:05:45.291 05:21:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:45.291 05:21:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:45.291 05:21:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:45.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
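[annotation] The -32603 "Failed to claim CPU core: 2" response above is the expected outcome of this test: both targets start with --disable-cpumask-locks on overlapping masks (0x7 is cores 0-2, 0x1c is cores 2-4, so core 2 overlaps), then locks are enabled via RPC. A hedged reconstruction of the scenario, with paths shortened relative to the log:

```bash
# Two targets on overlapping coremasks, neither taking core locks up front.
./build/bin/spdk_tgt -m 0x7  --disable-cpumask-locks &
./build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &
sleep 2   # crude wait; the real test polls the RPC sockets instead

# First target claims cores 0-2 successfully...
./scripts/rpc.py framework_enable_cpumask_locks
# ...so the second cannot claim core 2 and gets the -32603 error shown above.
./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
```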
00:05:45.291 05:21:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:45.291 05:21:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.554 05:21:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:45.554 05:21:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:45.554 05:21:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:45.554 05:21:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:45.554 05:21:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:45.554 05:21:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:45.554 00:05:45.554 real 0m1.663s 00:05:45.554 user 0m0.818s 00:05:45.554 sys 0m0.123s 00:05:45.554 05:21:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:45.554 05:21:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.554 ************************************ 00:05:45.554 END TEST locking_overlapped_coremask_via_rpc 00:05:45.554 ************************************ 00:05:45.554 05:21:45 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:45.554 05:21:45 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 118150 ]] 00:05:45.554 05:21:45 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 118150 00:05:45.554 05:21:45 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 118150 ']' 00:05:45.554 05:21:45 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 118150 00:05:45.554 05:21:45 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:45.554 05:21:45 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:45.554 05:21:45 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 118150 00:05:45.554 05:21:45 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:45.554 05:21:45 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:45.554 05:21:45 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 118150' 00:05:45.554 killing process with pid 118150 00:05:45.554 05:21:45 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 118150 00:05:45.554 05:21:45 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 118150 00:05:45.813 05:21:45 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 118155 ]] 00:05:45.813 05:21:45 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 118155 00:05:45.813 05:21:45 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 118155 ']' 00:05:45.813 05:21:45 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 118155 00:05:45.813 05:21:45 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:45.813 05:21:45 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
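[annotation] The killprocess helper traced throughout this log follows one pattern: confirm the pid still exists (kill -0), sanity-check the process name, then kill and reap. A simplified sketch reconstructed from the xtrace above; the real autotest_common.sh version has more retries and guards:

```bash
killprocess() {
    local pid=$1
    [[ -n $pid ]] || return 1
    kill -0 "$pid" || return 1                   # does the process still exist?
    if [[ $(uname) == Linux ]]; then
        local name
        name=$(ps --no-headers -o comm= "$pid")  # same lookup as in the log
        [[ $name == sudo ]] && return 1          # never kill a bare sudo
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"    # reap it (works here because the pid is a child of the test)
}
```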
00:05:45.813 05:21:45 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 118155 00:05:46.071 05:21:45 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:46.071 05:21:45 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:46.071 05:21:45 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 118155' 00:05:46.071 killing process with pid 118155 00:05:46.071 05:21:45 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 118155 00:05:46.071 05:21:45 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 118155 00:05:46.331 05:21:46 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:46.331 05:21:46 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:46.331 05:21:46 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 118150 ]] 00:05:46.331 05:21:46 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 118150 00:05:46.331 05:21:46 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 118150 ']' 00:05:46.331 05:21:46 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 118150 00:05:46.331 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (118150) - No such process 00:05:46.331 05:21:46 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 118150 is not found' 00:05:46.331 Process with pid 118150 is not found 00:05:46.331 05:21:46 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 118155 ]] 00:05:46.331 05:21:46 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 118155 00:05:46.331 05:21:46 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 118155 ']' 00:05:46.331 05:21:46 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 118155 00:05:46.331 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (118155) - No such process 00:05:46.331 05:21:46 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 118155 is not found' 00:05:46.331 Process with pid 118155 is not found 00:05:46.331 05:21:46 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:46.331 00:05:46.331 real 0m13.703s 00:05:46.331 user 0m23.852s 00:05:46.331 sys 0m4.958s 00:05:46.331 05:21:46 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:46.331 05:21:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:46.331 ************************************ 00:05:46.331 END TEST cpu_locks 00:05:46.331 ************************************ 00:05:46.331 00:05:46.331 real 0m38.316s 00:05:46.331 user 1m13.145s 00:05:46.331 sys 0m8.535s 00:05:46.331 05:21:46 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:46.331 05:21:46 event -- common/autotest_common.sh@10 -- # set +x 00:05:46.331 ************************************ 00:05:46.331 END TEST event 00:05:46.331 ************************************ 00:05:46.331 05:21:46 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:46.331 05:21:46 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:46.331 05:21:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:46.331 05:21:46 -- common/autotest_common.sh@10 -- # set +x 00:05:46.331 ************************************ 00:05:46.331 START TEST thread 00:05:46.331 ************************************ 00:05:46.331 05:21:46 thread -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:46.331 * Looking for test storage... 00:05:46.591 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:05:46.591 05:21:46 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:46.591 05:21:46 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:05:46.591 05:21:46 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:46.591 05:21:46 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:46.591 05:21:46 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:46.591 05:21:46 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:46.591 05:21:46 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:46.591 05:21:46 thread -- scripts/common.sh@336 -- # IFS=.-: 00:05:46.591 05:21:46 thread -- scripts/common.sh@336 -- # read -ra ver1 00:05:46.591 05:21:46 thread -- scripts/common.sh@337 -- # IFS=.-: 00:05:46.591 05:21:46 thread -- scripts/common.sh@337 -- # read -ra ver2 00:05:46.591 05:21:46 thread -- scripts/common.sh@338 -- # local 'op=<' 00:05:46.591 05:21:46 thread -- scripts/common.sh@340 -- # ver1_l=2 00:05:46.591 05:21:46 thread -- scripts/common.sh@341 -- # ver2_l=1 00:05:46.591 05:21:46 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:46.591 05:21:46 thread -- scripts/common.sh@344 -- # case "$op" in 00:05:46.591 05:21:46 thread -- scripts/common.sh@345 -- # : 1 00:05:46.591 05:21:46 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:46.591 05:21:46 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:46.591 05:21:46 thread -- scripts/common.sh@365 -- # decimal 1 00:05:46.591 05:21:46 thread -- scripts/common.sh@353 -- # local d=1 00:05:46.591 05:21:46 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:46.591 05:21:46 thread -- scripts/common.sh@355 -- # echo 1 00:05:46.591 05:21:46 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:05:46.591 05:21:46 thread -- scripts/common.sh@366 -- # decimal 2 00:05:46.591 05:21:46 thread -- scripts/common.sh@353 -- # local d=2 00:05:46.591 05:21:46 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:46.591 05:21:46 thread -- scripts/common.sh@355 -- # echo 2 00:05:46.591 05:21:46 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:05:46.591 05:21:46 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:46.591 05:21:46 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:46.591 05:21:46 thread -- scripts/common.sh@368 -- # return 0 00:05:46.591 05:21:46 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:46.591 05:21:46 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:46.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.591 --rc genhtml_branch_coverage=1 00:05:46.591 --rc genhtml_function_coverage=1 00:05:46.591 --rc genhtml_legend=1 00:05:46.591 --rc geninfo_all_blocks=1 00:05:46.591 --rc geninfo_unexecuted_blocks=1 00:05:46.591 00:05:46.591 ' 00:05:46.591 05:21:46 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:46.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.591 --rc genhtml_branch_coverage=1 00:05:46.591 --rc genhtml_function_coverage=1 00:05:46.591 --rc genhtml_legend=1 00:05:46.591 --rc geninfo_all_blocks=1 00:05:46.591 --rc geninfo_unexecuted_blocks=1 00:05:46.591 00:05:46.591 ' 00:05:46.591 05:21:46 thread 
-- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:46.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.591 --rc genhtml_branch_coverage=1 00:05:46.591 --rc genhtml_function_coverage=1 00:05:46.591 --rc genhtml_legend=1 00:05:46.591 --rc geninfo_all_blocks=1 00:05:46.591 --rc geninfo_unexecuted_blocks=1 00:05:46.591 00:05:46.591 ' 00:05:46.591 05:21:46 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:46.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.591 --rc genhtml_branch_coverage=1 00:05:46.591 --rc genhtml_function_coverage=1 00:05:46.591 --rc genhtml_legend=1 00:05:46.591 --rc geninfo_all_blocks=1 00:05:46.591 --rc geninfo_unexecuted_blocks=1 00:05:46.591 00:05:46.591 ' 00:05:46.591 05:21:46 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:46.591 05:21:46 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:46.591 05:21:46 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:46.591 05:21:46 thread -- common/autotest_common.sh@10 -- # set +x 00:05:46.591 ************************************ 00:05:46.591 START TEST thread_poller_perf 00:05:46.591 ************************************ 00:05:46.591 05:21:46 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:46.591 [2024-12-13 05:21:46.480960] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:05:46.591 [2024-12-13 05:21:46.481032] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118702 ] 00:05:46.591 [2024-12-13 05:21:46.558256] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.591 [2024-12-13 05:21:46.580666] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.591 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:05:47.969 [2024-12-13T04:21:47.984Z] ====================================== 00:05:47.969 [2024-12-13T04:21:47.984Z] busy:2105542688 (cyc) 00:05:47.969 [2024-12-13T04:21:47.984Z] total_run_count: 425000 00:05:47.969 [2024-12-13T04:21:47.984Z] tsc_hz: 2100000000 (cyc) 00:05:47.969 [2024-12-13T04:21:47.984Z] ====================================== 00:05:47.969 [2024-12-13T04:21:47.984Z] poller_cost: 4954 (cyc), 2359 (nsec) 00:05:47.969 00:05:47.969 real 0m1.155s 00:05:47.969 user 0m1.073s 00:05:47.969 sys 0m0.077s 00:05:47.969 05:21:47 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:47.969 05:21:47 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:47.969 ************************************ 00:05:47.969 END TEST thread_poller_perf 00:05:47.969 ************************************ 00:05:47.969 05:21:47 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:47.969 05:21:47 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:47.969 05:21:47 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:47.969 05:21:47 thread -- common/autotest_common.sh@10 -- # set +x 00:05:47.969 ************************************ 00:05:47.969 START TEST thread_poller_perf 00:05:47.969 ************************************ 00:05:47.969 05:21:47 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:47.969 [2024-12-13 05:21:47.709776] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:05:47.969 [2024-12-13 05:21:47.709845] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118951 ] 00:05:47.969 [2024-12-13 05:21:47.789854] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.969 [2024-12-13 05:21:47.811027] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.969 Running 1000 pollers for 1 seconds with 0 microseconds period. 
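[annotation] The poller_cost figures in the table above are derived directly from the other counters: cycles per poller invocation is busy / total_run_count, and nanoseconds follow from tsc_hz. Reproducing the first run's numbers (busy 2105542688 cyc, 425000 runs, 2.1 GHz TSC):

```bash
awk 'BEGIN {
    busy   = 2105542688      # busy cycles reported above
    runs   = 425000          # total_run_count
    tsc_hz = 2100000000      # 2.1 GHz
    cyc  = busy / runs
    nsec = cyc * 1e9 / tsc_hz
    printf "poller_cost: %d (cyc), %d (nsec)\n", cyc, nsec
}'
# prints: poller_cost: 4954 (cyc), 2359 (nsec)
```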
00:05:48.907 [2024-12-13T04:21:48.922Z] ====================================== 00:05:48.907 [2024-12-13T04:21:48.922Z] busy:2101386612 (cyc) 00:05:48.907 [2024-12-13T04:21:48.922Z] total_run_count: 5127000 00:05:48.907 [2024-12-13T04:21:48.922Z] tsc_hz: 2100000000 (cyc) 00:05:48.907 [2024-12-13T04:21:48.923Z] ====================================== 00:05:48.908 [2024-12-13T04:21:48.923Z] poller_cost: 409 (cyc), 194 (nsec) 00:05:48.908 00:05:48.908 real 0m1.156s 00:05:48.908 user 0m1.074s 00:05:48.908 sys 0m0.078s 00:05:48.908 05:21:48 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:48.908 05:21:48 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:48.908 ************************************ 00:05:48.908 END TEST thread_poller_perf 00:05:48.908 ************************************ 00:05:48.908 05:21:48 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:48.908 00:05:48.908 real 0m2.623s 00:05:48.908 user 0m2.298s 00:05:48.908 sys 0m0.338s 00:05:48.908 05:21:48 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:48.908 05:21:48 thread -- common/autotest_common.sh@10 -- # set +x 00:05:48.908 ************************************ 00:05:48.908 END TEST thread 00:05:48.908 ************************************ 00:05:48.908 05:21:48 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:05:48.908 05:21:48 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:48.908 05:21:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:48.908 05:21:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:48.908 05:21:48 -- common/autotest_common.sh@10 -- # set +x 00:05:49.167 ************************************ 00:05:49.167 START TEST app_cmdline 00:05:49.167 ************************************ 00:05:49.167 05:21:48 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:49.167 * Looking for test storage... 
00:05:49.167 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:49.167 05:21:49 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:49.167 05:21:49 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:05:49.167 05:21:49 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:49.167 05:21:49 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:49.167 05:21:49 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:49.167 05:21:49 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:49.167 05:21:49 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:49.167 05:21:49 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:05:49.167 05:21:49 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:05:49.167 05:21:49 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:05:49.167 05:21:49 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:05:49.167 05:21:49 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:05:49.167 05:21:49 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:05:49.167 05:21:49 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:05:49.167 05:21:49 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:49.167 05:21:49 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:05:49.167 05:21:49 app_cmdline -- scripts/common.sh@345 -- # : 1 00:05:49.167 05:21:49 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:49.167 05:21:49 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:49.167 05:21:49 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:05:49.167 05:21:49 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:05:49.167 05:21:49 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:49.167 05:21:49 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:05:49.167 05:21:49 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:05:49.167 05:21:49 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:05:49.167 05:21:49 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:05:49.167 05:21:49 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:49.167 05:21:49 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:05:49.167 05:21:49 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:05:49.167 05:21:49 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:49.167 05:21:49 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:49.167 05:21:49 app_cmdline -- scripts/common.sh@368 -- # return 0 00:05:49.167 05:21:49 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:49.167 05:21:49 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:49.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.168 --rc genhtml_branch_coverage=1 00:05:49.168 --rc genhtml_function_coverage=1 00:05:49.168 --rc genhtml_legend=1 00:05:49.168 --rc geninfo_all_blocks=1 00:05:49.168 --rc geninfo_unexecuted_blocks=1 00:05:49.168 00:05:49.168 ' 00:05:49.168 05:21:49 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:49.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.168 --rc genhtml_branch_coverage=1 00:05:49.168 --rc genhtml_function_coverage=1 00:05:49.168 --rc genhtml_legend=1 00:05:49.168 --rc geninfo_all_blocks=1 00:05:49.168 --rc geninfo_unexecuted_blocks=1 
00:05:49.168 00:05:49.168 ' 00:05:49.168 05:21:49 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:49.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.168 --rc genhtml_branch_coverage=1 00:05:49.168 --rc genhtml_function_coverage=1 00:05:49.168 --rc genhtml_legend=1 00:05:49.168 --rc geninfo_all_blocks=1 00:05:49.168 --rc geninfo_unexecuted_blocks=1 00:05:49.168 00:05:49.168 ' 00:05:49.168 05:21:49 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:49.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.168 --rc genhtml_branch_coverage=1 00:05:49.168 --rc genhtml_function_coverage=1 00:05:49.168 --rc genhtml_legend=1 00:05:49.168 --rc geninfo_all_blocks=1 00:05:49.168 --rc geninfo_unexecuted_blocks=1 00:05:49.168 00:05:49.168 ' 00:05:49.168 05:21:49 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:49.168 05:21:49 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=119246 00:05:49.168 05:21:49 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 119246 00:05:49.168 05:21:49 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:49.168 05:21:49 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 119246 ']' 00:05:49.168 05:21:49 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:49.168 05:21:49 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:49.168 05:21:49 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:49.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:49.168 05:21:49 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:49.168 05:21:49 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:49.168 [2024-12-13 05:21:49.176746] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:05:49.168 [2024-12-13 05:21:49.176793] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119246 ] 00:05:49.427 [2024-12-13 05:21:49.247955] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.427 [2024-12-13 05:21:49.270286] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.686 05:21:49 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:49.686 05:21:49 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:05:49.686 05:21:49 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:05:49.686 { 00:05:49.686 "version": "SPDK v25.01-pre git sha1 e01cb43b8", 00:05:49.686 "fields": { 00:05:49.686 "major": 25, 00:05:49.686 "minor": 1, 00:05:49.686 "patch": 0, 00:05:49.686 "suffix": "-pre", 00:05:49.686 "commit": "e01cb43b8" 00:05:49.686 } 00:05:49.686 } 00:05:49.686 05:21:49 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:49.686 05:21:49 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:49.686 05:21:49 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:49.686 05:21:49 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:49.686 05:21:49 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:49.686 05:21:49 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:49.686 05:21:49 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.686 05:21:49 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:49.686 05:21:49 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:49.686 05:21:49 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.945 05:21:49 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:49.945 05:21:49 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:49.945 05:21:49 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:49.945 05:21:49 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:05:49.945 05:21:49 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:49.946 05:21:49 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:49.946 05:21:49 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:49.946 05:21:49 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:49.946 05:21:49 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:49.946 05:21:49 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:49.946 05:21:49 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:49.946 05:21:49 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:49.946 05:21:49 app_cmdline -- common/autotest_common.sh@646 -- 
# [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:05:49.946 05:21:49 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:49.946 request: 00:05:49.946 { 00:05:49.946 "method": "env_dpdk_get_mem_stats", 00:05:49.946 "req_id": 1 00:05:49.946 } 00:05:49.946 Got JSON-RPC error response 00:05:49.946 response: 00:05:49.946 { 00:05:49.946 "code": -32601, 00:05:49.946 "message": "Method not found" 00:05:49.946 } 00:05:49.946 05:21:49 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:05:49.946 05:21:49 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:49.946 05:21:49 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:49.946 05:21:49 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:49.946 05:21:49 app_cmdline -- app/cmdline.sh@1 -- # killprocess 119246 00:05:49.946 05:21:49 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 119246 ']' 00:05:49.946 05:21:49 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 119246 00:05:49.946 05:21:49 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:05:49.946 05:21:49 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:49.946 05:21:49 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 119246 00:05:50.205 05:21:49 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:50.205 05:21:49 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:50.205 05:21:49 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 119246' 00:05:50.205 killing process with pid 119246 00:05:50.205 05:21:49 app_cmdline -- common/autotest_common.sh@973 -- # kill 119246 00:05:50.205 05:21:49 app_cmdline -- common/autotest_common.sh@978 -- # wait 119246 00:05:50.464 00:05:50.464 real 0m1.305s 00:05:50.464 user 0m1.528s 00:05:50.464 sys 0m0.436s 00:05:50.464 05:21:50 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:50.464 05:21:50 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:50.464 ************************************ 00:05:50.464 END TEST app_cmdline 00:05:50.464 ************************************ 00:05:50.464 05:21:50 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:50.464 05:21:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:50.464 05:21:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:50.464 05:21:50 -- common/autotest_common.sh@10 -- # set +x 00:05:50.464 ************************************ 00:05:50.464 START TEST version 00:05:50.464 ************************************ 00:05:50.464 05:21:50 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:50.464 * Looking for test storage... 
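[annotation] The "Method not found" (-32601) response above is the point of the cmdline test: spdk_tgt was started with --rpcs-allowed spdk_get_version,rpc_get_methods, so exactly those two methods are served and everything else is rejected. The same check against a running target, sketched with the socket defaulting as in the log:

```bash
# Allowed: returns the version JSON shown earlier in the log.
./scripts/rpc.py spdk_get_version

# Allowed: should list exactly the two permitted methods.
./scripts/rpc.py rpc_get_methods

# Not on the allowlist: fails with code -32601, "Method not found".
./scripts/rpc.py env_dpdk_get_mem_stats
```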
00:05:50.464 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:50.464 05:21:50 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:50.464 05:21:50 version -- common/autotest_common.sh@1711 -- # lcov --version 00:05:50.464 05:21:50 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:50.464 05:21:50 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:50.464 05:21:50 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:50.464 05:21:50 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:50.464 05:21:50 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:50.464 05:21:50 version -- scripts/common.sh@336 -- # IFS=.-: 00:05:50.464 05:21:50 version -- scripts/common.sh@336 -- # read -ra ver1 00:05:50.724 05:21:50 version -- scripts/common.sh@337 -- # IFS=.-: 00:05:50.724 05:21:50 version -- scripts/common.sh@337 -- # read -ra ver2 00:05:50.724 05:21:50 version -- scripts/common.sh@338 -- # local 'op=<' 00:05:50.724 05:21:50 version -- scripts/common.sh@340 -- # ver1_l=2 00:05:50.724 05:21:50 version -- scripts/common.sh@341 -- # ver2_l=1 00:05:50.724 05:21:50 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:50.724 05:21:50 version -- scripts/common.sh@344 -- # case "$op" in 00:05:50.724 05:21:50 version -- scripts/common.sh@345 -- # : 1 00:05:50.724 05:21:50 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:50.724 05:21:50 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:50.724 05:21:50 version -- scripts/common.sh@365 -- # decimal 1 00:05:50.724 05:21:50 version -- scripts/common.sh@353 -- # local d=1 00:05:50.724 05:21:50 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:50.724 05:21:50 version -- scripts/common.sh@355 -- # echo 1 00:05:50.724 05:21:50 version -- scripts/common.sh@365 -- # ver1[v]=1 00:05:50.724 05:21:50 version -- scripts/common.sh@366 -- # decimal 2 00:05:50.724 05:21:50 version -- scripts/common.sh@353 -- # local d=2 00:05:50.724 05:21:50 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:50.724 05:21:50 version -- scripts/common.sh@355 -- # echo 2 00:05:50.724 05:21:50 version -- scripts/common.sh@366 -- # ver2[v]=2 00:05:50.724 05:21:50 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:50.724 05:21:50 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:50.724 05:21:50 version -- scripts/common.sh@368 -- # return 0 00:05:50.724 05:21:50 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:50.724 05:21:50 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:50.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.724 --rc genhtml_branch_coverage=1 00:05:50.724 --rc genhtml_function_coverage=1 00:05:50.724 --rc genhtml_legend=1 00:05:50.724 --rc geninfo_all_blocks=1 00:05:50.724 --rc geninfo_unexecuted_blocks=1 00:05:50.724 00:05:50.724 ' 00:05:50.724 05:21:50 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:50.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.724 --rc genhtml_branch_coverage=1 00:05:50.724 --rc genhtml_function_coverage=1 00:05:50.724 --rc genhtml_legend=1 00:05:50.724 --rc geninfo_all_blocks=1 00:05:50.724 --rc geninfo_unexecuted_blocks=1 00:05:50.724 00:05:50.724 ' 00:05:50.724 05:21:50 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:50.724 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.724 --rc genhtml_branch_coverage=1 00:05:50.724 --rc genhtml_function_coverage=1 00:05:50.724 --rc genhtml_legend=1 00:05:50.724 --rc geninfo_all_blocks=1 00:05:50.724 --rc geninfo_unexecuted_blocks=1 00:05:50.724 00:05:50.724 ' 00:05:50.724 05:21:50 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:50.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.724 --rc genhtml_branch_coverage=1 00:05:50.724 --rc genhtml_function_coverage=1 00:05:50.724 --rc genhtml_legend=1 00:05:50.724 --rc geninfo_all_blocks=1 00:05:50.724 --rc geninfo_unexecuted_blocks=1 00:05:50.724 00:05:50.724 ' 00:05:50.724 05:21:50 version -- app/version.sh@17 -- # get_header_version major 00:05:50.724 05:21:50 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:50.724 05:21:50 version -- app/version.sh@14 -- # cut -f2 00:05:50.724 05:21:50 version -- app/version.sh@14 -- # tr -d '"' 00:05:50.724 05:21:50 version -- app/version.sh@17 -- # major=25 00:05:50.724 05:21:50 version -- app/version.sh@18 -- # get_header_version minor 00:05:50.724 05:21:50 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:50.724 05:21:50 version -- app/version.sh@14 -- # cut -f2 00:05:50.724 05:21:50 version -- app/version.sh@14 -- # tr -d '"' 00:05:50.724 05:21:50 version -- app/version.sh@18 -- # minor=1 00:05:50.724 05:21:50 version -- app/version.sh@19 -- # get_header_version patch 00:05:50.724 05:21:50 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:50.724 05:21:50 version -- app/version.sh@14 -- # cut -f2 00:05:50.724 05:21:50 version -- app/version.sh@14 -- # tr -d '"' 00:05:50.724 05:21:50 version -- app/version.sh@19 -- # patch=0 00:05:50.724 05:21:50 version -- app/version.sh@20 -- # get_header_version suffix 00:05:50.724 05:21:50 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:50.724 05:21:50 version -- app/version.sh@14 -- # cut -f2 00:05:50.724 05:21:50 version -- app/version.sh@14 -- # tr -d '"' 00:05:50.724 05:21:50 version -- app/version.sh@20 -- # suffix=-pre 00:05:50.724 05:21:50 version -- app/version.sh@22 -- # version=25.1 00:05:50.724 05:21:50 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:50.724 05:21:50 version -- app/version.sh@28 -- # version=25.1rc0 00:05:50.724 05:21:50 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:05:50.724 05:21:50 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:50.724 05:21:50 version -- app/version.sh@30 -- # py_version=25.1rc0 00:05:50.724 05:21:50 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:05:50.724 00:05:50.724 real 0m0.242s 00:05:50.724 user 0m0.158s 00:05:50.724 sys 0m0.127s 00:05:50.724 05:21:50 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:50.724 
05:21:50 version -- common/autotest_common.sh@10 -- # set +x 00:05:50.724 ************************************ 00:05:50.724 END TEST version 00:05:50.724 ************************************ 00:05:50.724 05:21:50 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:05:50.724 05:21:50 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:05:50.724 05:21:50 -- spdk/autotest.sh@194 -- # uname -s 00:05:50.724 05:21:50 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:05:50.724 05:21:50 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:50.724 05:21:50 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:50.724 05:21:50 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:05:50.724 05:21:50 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:05:50.724 05:21:50 -- spdk/autotest.sh@260 -- # timing_exit lib 00:05:50.724 05:21:50 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:50.724 05:21:50 -- common/autotest_common.sh@10 -- # set +x 00:05:50.724 05:21:50 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:05:50.724 05:21:50 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:05:50.725 05:21:50 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:05:50.725 05:21:50 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:05:50.725 05:21:50 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:05:50.725 05:21:50 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:05:50.725 05:21:50 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:50.725 05:21:50 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:50.725 05:21:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:50.725 05:21:50 -- common/autotest_common.sh@10 -- # set +x 00:05:50.725 ************************************ 00:05:50.725 START TEST nvmf_tcp 00:05:50.725 ************************************ 00:05:50.725 05:21:50 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:50.984 * Looking for test storage... 
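[annotation] version.sh assembles its version string by scraping include/spdk/version.h, as the xtrace above shows (grep | cut | tr per field). A condensed sketch of that extraction; the rc0 mapping for a -pre suffix is inferred from the trace, not copied from the script:

```bash
get_header_version() {    # e.g. get_header_version MAJOR  ->  25
    grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" include/spdk/version.h \
        | cut -f2 | tr -d '"'
}

major=$(get_header_version MAJOR)      # 25
minor=$(get_header_version MINOR)      # 1
patch=$(get_header_version PATCH)      # 0
suffix=$(get_header_version SUFFIX)    # -pre
version="${major}.${minor}"
((patch != 0)) && version="${version}.${patch}"
[[ -n $suffix ]] && version="${version}rc0"   # -pre is reported as rc0
echo "$version"                               # 25.1rc0, as in the log
```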
00:05:50.984 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:50.984 05:21:50 nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:50.984 05:21:50 nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:05:50.984 05:21:50 nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:50.984 05:21:50 nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:50.984 05:21:50 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:50.984 05:21:50 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:50.984 05:21:50 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:50.984 05:21:50 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:50.984 05:21:50 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:50.984 05:21:50 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:50.984 05:21:50 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:50.984 05:21:50 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:50.984 05:21:50 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:50.984 05:21:50 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:50.984 05:21:50 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:50.984 05:21:50 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:50.984 05:21:50 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:05:50.984 05:21:50 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:50.984 05:21:50 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:50.984 05:21:50 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:50.984 05:21:50 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:05:50.984 05:21:50 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:50.984 05:21:50 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:05:50.984 05:21:50 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:50.984 05:21:50 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:50.984 05:21:50 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:05:50.984 05:21:50 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:50.984 05:21:50 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:05:50.984 05:21:50 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:50.984 05:21:50 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:50.984 05:21:50 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:50.984 05:21:50 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:05:50.984 05:21:50 nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:50.984 05:21:50 nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:50.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.984 --rc genhtml_branch_coverage=1 00:05:50.984 --rc genhtml_function_coverage=1 00:05:50.984 --rc genhtml_legend=1 00:05:50.984 --rc geninfo_all_blocks=1 00:05:50.984 --rc geninfo_unexecuted_blocks=1 00:05:50.984 00:05:50.984 ' 00:05:50.984 05:21:50 nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:50.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.984 --rc genhtml_branch_coverage=1 00:05:50.984 --rc genhtml_function_coverage=1 00:05:50.984 --rc genhtml_legend=1 00:05:50.984 --rc geninfo_all_blocks=1 00:05:50.984 --rc geninfo_unexecuted_blocks=1 00:05:50.984 00:05:50.984 ' 00:05:50.984 05:21:50 nvmf_tcp -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:05:50.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.984 --rc genhtml_branch_coverage=1 00:05:50.984 --rc genhtml_function_coverage=1 00:05:50.984 --rc genhtml_legend=1 00:05:50.984 --rc geninfo_all_blocks=1 00:05:50.984 --rc geninfo_unexecuted_blocks=1 00:05:50.984 00:05:50.984 ' 00:05:50.984 05:21:50 nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:50.985 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.985 --rc genhtml_branch_coverage=1 00:05:50.985 --rc genhtml_function_coverage=1 00:05:50.985 --rc genhtml_legend=1 00:05:50.985 --rc geninfo_all_blocks=1 00:05:50.985 --rc geninfo_unexecuted_blocks=1 00:05:50.985 00:05:50.985 ' 00:05:50.985 05:21:50 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:05:50.985 05:21:50 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:05:50.985 05:21:50 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:50.985 05:21:50 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:50.985 05:21:50 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:50.985 05:21:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:50.985 ************************************ 00:05:50.985 START TEST nvmf_target_core 00:05:50.985 ************************************ 00:05:50.985 05:21:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:50.985 * Looking for test storage... 00:05:50.985 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:50.985 05:21:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:50.985 05:21:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lcov --version 00:05:50.985 05:21:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:51.244 05:21:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:51.244 05:21:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:51.244 05:21:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:51.244 05:21:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:51.244 05:21:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:05:51.244 05:21:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:05:51.244 05:21:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:05:51.244 05:21:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:05:51.244 05:21:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:05:51.244 05:21:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:05:51.244 05:21:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:05:51.244 05:21:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:51.244 05:21:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:05:51.244 05:21:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:05:51.244 05:21:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:51.244 05:21:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:51.244 05:21:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:05:51.244 05:21:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:05:51.244 05:21:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:51.244 05:21:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:05:51.244 05:21:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:05:51.244 05:21:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:05:51.244 05:21:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:05:51.244 05:21:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:51.244 05:21:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:05:51.244 05:21:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:05:51.244 05:21:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:51.244 05:21:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:51.244 05:21:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:05:51.244 05:21:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:51.244 05:21:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:51.244 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.244 --rc genhtml_branch_coverage=1 00:05:51.244 --rc genhtml_function_coverage=1 00:05:51.244 --rc genhtml_legend=1 00:05:51.244 --rc geninfo_all_blocks=1 00:05:51.244 --rc geninfo_unexecuted_blocks=1 00:05:51.244 00:05:51.244 ' 00:05:51.244 05:21:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:51.244 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.244 --rc genhtml_branch_coverage=1 00:05:51.244 --rc genhtml_function_coverage=1 00:05:51.244 --rc genhtml_legend=1 00:05:51.244 --rc geninfo_all_blocks=1 00:05:51.244 --rc geninfo_unexecuted_blocks=1 00:05:51.244 00:05:51.244 ' 00:05:51.244 05:21:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:51.244 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.244 --rc genhtml_branch_coverage=1 00:05:51.244 --rc genhtml_function_coverage=1 00:05:51.244 --rc genhtml_legend=1 00:05:51.244 --rc geninfo_all_blocks=1 00:05:51.244 --rc geninfo_unexecuted_blocks=1 00:05:51.244 00:05:51.244 ' 00:05:51.244 05:21:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:51.244 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.244 --rc genhtml_branch_coverage=1 00:05:51.244 --rc genhtml_function_coverage=1 00:05:51.244 --rc genhtml_legend=1 00:05:51.244 --rc geninfo_all_blocks=1 00:05:51.244 --rc geninfo_unexecuted_blocks=1 00:05:51.244 00:05:51.244 ' 00:05:51.244 05:21:51 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:05:51.244 05:21:51 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:05:51.244 05:21:51 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:51.244 05:21:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:05:51.244 05:21:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:51.244 05:21:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:51.244 05:21:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:51.244 05:21:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:51.244 05:21:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:51.244 05:21:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:51.244 05:21:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:51.244 05:21:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:51.244 05:21:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:51.244 05:21:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:51.244 05:21:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:05:51.244 05:21:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:05:51.244 05:21:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:51.244 05:21:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:51.244 05:21:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:51.244 05:21:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:51.244 05:21:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:51.244 05:21:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:05:51.244 05:21:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:51.244 05:21:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:51.244 05:21:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:51.244 05:21:51 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:51.244 05:21:51 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:51.244 05:21:51 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:51.244 05:21:51 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:05:51.244 05:21:51 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:51.244 05:21:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:05:51.244 05:21:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:51.244 05:21:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:51.244 05:21:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:51.244 05:21:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:51.244 05:21:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:51.244 05:21:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:51.244 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:51.245 05:21:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:51.245 05:21:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:51.245 05:21:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:51.245 05:21:51 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:05:51.245 05:21:51 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:05:51.245 05:21:51 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:05:51.245 05:21:51 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:51.245 05:21:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:51.245 05:21:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:51.245 05:21:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:51.245 
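For reference: run_test, invoked above for nvmf_target_core.sh and again here for abort.sh, is the harness wrapper that produces the START TEST / END TEST banners and the real/user/sys timing summaries seen throughout this log. A minimal bash sketch of that pattern is below; it is an illustration of the convention, not SPDK's actual autotest_common.sh code:

    run_test() {
        # usage: run_test <name> <command> [args...]
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"          # run the test script; bash prints real/user/sys when it exits
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }

    # e.g.: run_test nvmf_abort ./test/nvmf/target/abort.sh --transport=tcp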
************************************ 00:05:51.245 START TEST nvmf_abort 00:05:51.245 ************************************ 00:05:51.245 05:21:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:51.245 * Looking for test storage... 00:05:51.245 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:51.245 05:21:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:51.245 05:21:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:05:51.245 05:21:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:51.504 05:21:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:51.504 05:21:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:51.504 05:21:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:51.504 05:21:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:51.504 05:21:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:05:51.504 05:21:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:05:51.504 05:21:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:05:51.504 05:21:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:05:51.505 05:21:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:05:51.505 05:21:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:05:51.505 05:21:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:05:51.505 05:21:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:51.505 05:21:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:05:51.505 05:21:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:05:51.505 05:21:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:51.505 05:21:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:51.505 05:21:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:05:51.505 05:21:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:05:51.505 05:21:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:51.505 05:21:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:05:51.505 05:21:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:05:51.505 05:21:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:05:51.505 05:21:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:05:51.505 05:21:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:51.505 05:21:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:05:51.505 05:21:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:05:51.505 05:21:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:51.505 05:21:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:51.505 05:21:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:05:51.505 05:21:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:51.505 05:21:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:51.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.505 --rc genhtml_branch_coverage=1 00:05:51.505 --rc genhtml_function_coverage=1 00:05:51.505 --rc genhtml_legend=1 00:05:51.505 --rc geninfo_all_blocks=1 00:05:51.505 --rc geninfo_unexecuted_blocks=1 00:05:51.505 00:05:51.505 ' 00:05:51.505 05:21:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:51.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.505 --rc genhtml_branch_coverage=1 00:05:51.505 --rc genhtml_function_coverage=1 00:05:51.505 --rc genhtml_legend=1 00:05:51.505 --rc geninfo_all_blocks=1 00:05:51.505 --rc geninfo_unexecuted_blocks=1 00:05:51.505 00:05:51.505 ' 00:05:51.505 05:21:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:51.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.505 --rc genhtml_branch_coverage=1 00:05:51.505 --rc genhtml_function_coverage=1 00:05:51.505 --rc genhtml_legend=1 00:05:51.505 --rc geninfo_all_blocks=1 00:05:51.505 --rc geninfo_unexecuted_blocks=1 00:05:51.505 00:05:51.505 ' 00:05:51.505 05:21:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:51.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.505 --rc genhtml_branch_coverage=1 00:05:51.505 --rc genhtml_function_coverage=1 00:05:51.505 --rc genhtml_legend=1 00:05:51.505 --rc geninfo_all_blocks=1 00:05:51.505 --rc geninfo_unexecuted_blocks=1 00:05:51.505 00:05:51.505 ' 00:05:51.505 05:21:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:51.505 05:21:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:05:51.505 05:21:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:05:51.505 05:21:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:51.505 05:21:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:51.505 05:21:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:51.505 05:21:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:51.505 05:21:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:51.505 05:21:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:51.505 05:21:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:51.505 05:21:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:51.505 05:21:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:51.505 05:21:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:05:51.505 05:21:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:05:51.505 05:21:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:51.505 05:21:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:51.505 05:21:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:51.505 05:21:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:51.505 05:21:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:51.505 05:21:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:05:51.505 05:21:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:51.505 05:21:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:51.505 05:21:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:51.505 05:21:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:51.505 05:21:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:51.505 05:21:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:51.505 05:21:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:05:51.505 05:21:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:51.505 05:21:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:05:51.505 05:21:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:51.505 05:21:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:51.505 05:21:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:51.505 05:21:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:51.505 05:21:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:51.505 05:21:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:51.505 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:51.505 05:21:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:51.505 05:21:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:51.505 05:21:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:51.505 05:21:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:05:51.505 05:21:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:05:51.505 05:21:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 
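The nvmftestinit call above is where the TCP test topology gets built. The trace that follows discovers the two e810 ports (cvl_0_0 and cvl_0_1), moves one of them into a private network namespace to act as the target side, and pings in both directions to verify the link. Condensed from the commands visible in this run, the manual equivalent would be roughly:

    ip netns add cvl_0_0_ns_spdk                      # namespace for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator IP, default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
    ping -c 1 10.0.0.2                                # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target -> initiator

The target application is then launched under ip netns exec cvl_0_0_ns_spdk, so it listens on 10.0.0.2 while the initiator-side tools connect from the default namespace.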
00:05:51.505 05:21:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:51.505 05:21:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:51.505 05:21:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:51.505 05:21:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:51.505 05:21:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:51.505 05:21:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:51.505 05:21:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:51.506 05:21:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:51.506 05:21:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:51.506 05:21:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:51.506 05:21:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:05:51.506 05:21:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:58.080 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:58.080 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:05:58.080 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:58.080 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:58.080 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:58.080 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:58.080 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:58.080 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:05:58.080 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:58.080 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:05:58.080 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:05:58.080 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:05:58.080 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:05:58.080 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:05:58.080 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:05:58.080 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:58.080 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:58.080 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:58.080 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:58.080 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:58.080 05:21:57 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:58.080 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:58.080 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:58.080 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:58.080 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:58.080 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:58.080 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:58.080 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:58.080 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:58.080 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:58.080 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:58.080 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:58.080 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:58.080 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:58.080 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:05:58.080 Found 0000:af:00.0 (0x8086 - 0x159b) 00:05:58.080 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:58.080 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:58.080 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:58.080 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:58.080 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:58.080 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:58.080 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:05:58.080 Found 0000:af:00.1 (0x8086 - 0x159b) 00:05:58.080 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:58.080 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:58.080 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:58.080 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:58.080 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:58.080 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:58.080 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:58.080 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:58.080 05:21:57 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:58.080 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:58.080 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:58.080 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:58.080 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:58.080 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:58.080 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:58.080 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:05:58.080 Found net devices under 0000:af:00.0: cvl_0_0 00:05:58.080 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:58.080 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:58.080 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:58.080 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:58.080 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:58.080 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:58.080 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:58.080 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:58.080 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:05:58.080 Found net devices under 0000:af:00.1: cvl_0_1 00:05:58.080 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:58.080 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:58.080 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:05:58.080 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:58.080 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:58.080 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:58.080 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:58.080 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:58.080 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:58.080 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:58.080 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:58.080 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:58.080 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:58.080 05:21:57 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:58.080 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:58.080 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:58.080 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:58.080 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:58.080 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:58.080 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:58.080 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:58.080 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:58.080 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:58.080 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:58.080 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:58.080 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:58.080 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:58.080 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:58.080 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:58.080 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:58.080 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.348 ms 00:05:58.080 00:05:58.080 --- 10.0.0.2 ping statistics --- 00:05:58.080 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:58.080 rtt min/avg/max/mdev = 0.348/0.348/0.348/0.000 ms 00:05:58.080 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:58.080 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:05:58.080 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.067 ms 00:05:58.080 00:05:58.080 --- 10.0.0.1 ping statistics --- 00:05:58.080 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:58.080 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:05:58.080 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:58.080 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:05:58.081 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:58.081 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:58.081 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:05:58.081 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:05:58.081 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:58.081 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:05:58.081 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:05:58.081 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:05:58.081 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:58.081 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:58.081 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:58.081 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=122866 00:05:58.081 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:58.081 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 122866 00:05:58.081 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 122866 ']' 00:05:58.081 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:58.081 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:58.081 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:58.081 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:58.081 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:58.081 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:58.081 [2024-12-13 05:21:57.480358] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:05:58.081 [2024-12-13 05:21:57.480399] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:58.081 [2024-12-13 05:21:57.539199] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:58.081 [2024-12-13 05:21:57.563550] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:58.081 [2024-12-13 05:21:57.563582] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:58.081 [2024-12-13 05:21:57.563589] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:58.081 [2024-12-13 05:21:57.563595] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:58.081 [2024-12-13 05:21:57.563600] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:05:58.081 [2024-12-13 05:21:57.564943] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:05:58.081 [2024-12-13 05:21:57.565047] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:58.081 [2024-12-13 05:21:57.565049] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:05:58.081 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:58.081 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:05:58.081 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:58.081 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:58.081 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:58.081 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:58.081 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:05:58.081 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:58.081 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:58.081 [2024-12-13 05:21:57.696697] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:58.081 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:58.081 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:05:58.081 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:58.081 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:58.081 Malloc0 00:05:58.081 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:58.081 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:58.081 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:58.081 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:58.081 Delay0 
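Each rpc_cmd in this stretch forwards to scripts/rpc.py against the target's /var/tmp/spdk.sock. Taken together (Malloc0 and Delay0 were just created above; the subsystem plumbing follows below), this run's target configuration amounts to roughly the following sequence. The four 1000000 arguments to bdev_delay_create are latencies in microseconds, so every I/O to Delay0 takes about one second, which keeps requests in flight long enough for the abort test to have something to cancel:

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
    ./scripts/rpc.py bdev_malloc_create 64 4096 -b Malloc0     # 64 MiB bdev, 4096-byte blocks
    ./scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000            # avg/p99 read and write latency, usec
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

The abort example then connects to that listener with queue depth 128 (-q 128) on a single core (-c 0x1) for one second (-t 1); against a one-second delay bdev, nearly every submitted I/O is still queued when its abort arrives, which is exactly the condition this test wants to exercise.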
00:05:58.081 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:58.081 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:05:58.081 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:58.081 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:58.081 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:58.081 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:05:58.081 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:58.081 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:58.081 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:58.081 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:05:58.081 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:58.081 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:58.081 [2024-12-13 05:21:57.781009] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:58.081 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:58.081 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:58.081 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:58.081 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:58.081 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:58.081 05:21:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:05:58.081 [2024-12-13 05:21:57.914333] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:05:59.987 Initializing NVMe Controllers 00:05:59.988 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:05:59.988 controller IO queue size 128 less than required 00:05:59.988 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:05:59.988 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:05:59.988 Initialization complete. Launching workers. 
00:05:59.988 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 38734 00:05:59.988 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 38795, failed to submit 62 00:05:59.988 success 38738, unsuccessful 57, failed 0 00:05:59.988 05:21:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:05:59.988 05:21:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:59.988 05:21:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:59.988 05:21:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:59.988 05:21:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:05:59.988 05:21:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:05:59.988 05:21:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:05:59.988 05:21:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:05:59.988 05:21:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:05:59.988 05:22:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:05:59.988 05:22:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:05:59.988 05:22:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:00.247 rmmod nvme_tcp 00:06:00.247 rmmod nvme_fabrics 00:06:00.247 rmmod nvme_keyring 00:06:00.247 05:22:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:00.247 05:22:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:06:00.247 05:22:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:06:00.247 05:22:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 122866 ']' 00:06:00.247 05:22:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 122866 00:06:00.247 05:22:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 122866 ']' 00:06:00.247 05:22:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 122866 00:06:00.247 05:22:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:06:00.247 05:22:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:00.247 05:22:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 122866 00:06:00.247 05:22:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:00.247 05:22:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:00.247 05:22:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 122866' 00:06:00.247 killing process with pid 122866 00:06:00.247 05:22:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 122866 00:06:00.247 05:22:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 122866 00:06:00.507 05:22:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:00.507 05:22:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:00.507 05:22:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:00.507 05:22:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:06:00.507 05:22:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:06:00.507 05:22:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:00.507 05:22:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:06:00.507 05:22:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:00.507 05:22:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:00.507 05:22:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:00.507 05:22:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:00.507 05:22:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:02.413 05:22:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:02.413 00:06:02.413 real 0m11.212s 00:06:02.413 user 0m11.735s 00:06:02.413 sys 0m5.215s 00:06:02.413 05:22:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:02.413 05:22:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:02.413 ************************************ 00:06:02.413 END TEST nvmf_abort 00:06:02.413 ************************************ 00:06:02.413 05:22:02 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:02.413 05:22:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:02.413 05:22:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:02.413 05:22:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:02.674 ************************************ 00:06:02.674 START TEST nvmf_ns_hotplug_stress 00:06:02.674 ************************************ 00:06:02.674 05:22:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:02.674 * Looking for test storage... 
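One quick consistency check on the nvmf_abort counters reported above, before the ns_hotplug_stress trace continues: the totals reconcile exactly.

    aborts:  38738 success + 57 unsuccessful + 0 failed = 38795 submitted
             38795 submitted + 62 failed to submit      = 38857 attempted
    I/O:     123 completed + 38734 failed (aborted)     = 38857 issued

That is, the example attempted exactly one abort per issued I/O.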
00:06:02.674 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:02.674 05:22:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:02.674 05:22:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:06:02.674 05:22:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:02.674 05:22:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:02.674 05:22:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:02.674 05:22:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:02.674 05:22:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:02.674 05:22:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:06:02.674 05:22:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:06:02.674 05:22:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:06:02.674 05:22:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:06:02.674 05:22:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:06:02.674 05:22:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:06:02.674 05:22:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:06:02.674 05:22:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:02.674 05:22:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:06:02.674 05:22:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:06:02.674 05:22:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:02.674 05:22:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:02.674 05:22:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:06:02.674 05:22:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:06:02.674 05:22:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:02.674 05:22:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:06:02.674 05:22:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:06:02.674 05:22:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:06:02.674 05:22:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:06:02.674 05:22:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:02.674 05:22:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:06:02.674 05:22:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:06:02.674 05:22:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:02.674 05:22:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:02.674 05:22:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:06:02.674 05:22:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:02.674 05:22:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:02.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.674 --rc genhtml_branch_coverage=1 00:06:02.674 --rc genhtml_function_coverage=1 00:06:02.674 --rc genhtml_legend=1 00:06:02.674 --rc geninfo_all_blocks=1 00:06:02.674 --rc geninfo_unexecuted_blocks=1 00:06:02.674 00:06:02.674 ' 00:06:02.674 05:22:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:02.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.674 --rc genhtml_branch_coverage=1 00:06:02.674 --rc genhtml_function_coverage=1 00:06:02.674 --rc genhtml_legend=1 00:06:02.674 --rc geninfo_all_blocks=1 00:06:02.674 --rc geninfo_unexecuted_blocks=1 00:06:02.674 00:06:02.674 ' 00:06:02.674 05:22:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:02.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.674 --rc genhtml_branch_coverage=1 00:06:02.674 --rc genhtml_function_coverage=1 00:06:02.674 --rc genhtml_legend=1 00:06:02.674 --rc geninfo_all_blocks=1 00:06:02.674 --rc geninfo_unexecuted_blocks=1 00:06:02.674 00:06:02.674 ' 00:06:02.674 05:22:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:02.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.674 --rc genhtml_branch_coverage=1 00:06:02.674 --rc genhtml_function_coverage=1 00:06:02.674 --rc genhtml_legend=1 00:06:02.674 --rc geninfo_all_blocks=1 00:06:02.674 --rc geninfo_unexecuted_blocks=1 00:06:02.674 00:06:02.674 ' 00:06:02.674 05:22:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:02.674 05:22:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:06:02.674 05:22:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:02.674 05:22:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:02.674 05:22:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:02.674 05:22:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:02.674 05:22:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:02.674 05:22:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:02.674 05:22:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:02.674 05:22:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:02.674 05:22:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:02.674 05:22:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:02.674 05:22:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:06:02.674 05:22:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:06:02.674 05:22:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:02.674 05:22:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:02.674 05:22:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:02.674 05:22:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:02.674 05:22:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:02.674 05:22:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:06:02.674 05:22:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:02.674 05:22:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:02.674 05:22:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:02.674 05:22:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:02.674 05:22:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:02.674 05:22:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:02.674 05:22:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:06:02.674 05:22:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:02.674 05:22:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:06:02.674 05:22:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:02.675 05:22:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:02.675 05:22:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:02.675 05:22:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:02.675 05:22:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:02.675 05:22:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:02.675 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:02.675 05:22:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:02.675 05:22:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:02.675 05:22:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:02.675 05:22:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:02.675 05:22:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:06:02.675 05:22:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:02.675 05:22:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:02.675 05:22:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:02.675 05:22:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:02.675 05:22:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:02.675 05:22:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:02.675 05:22:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:02.675 05:22:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:02.675 05:22:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:02.675 05:22:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:02.675 05:22:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:06:02.675 05:22:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:09.245 05:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:09.245 05:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:06:09.245 05:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:09.245 05:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:09.245 05:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:09.245 05:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:09.245 05:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:09.245 05:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:06:09.245 05:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:09.245 05:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:06:09.245 05:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # 
local -ga e810 00:06:09.245 05:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:06:09.245 05:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:06:09.245 05:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:06:09.245 05:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:06:09.245 05:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:09.245 05:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:09.245 05:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:09.245 05:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:09.245 05:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:09.245 05:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:09.245 05:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:09.245 05:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:09.245 05:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:09.245 05:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:09.245 05:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:09.245 05:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:09.245 05:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:09.245 05:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:09.245 05:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:09.245 05:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:09.245 05:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:09.245 05:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:09.245 05:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:09.245 05:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:06:09.245 Found 0000:af:00.0 (0x8086 - 0x159b) 00:06:09.245 05:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:09.245 05:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:09.245 05:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:09.245 
05:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:09.245 05:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:09.245 05:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:09.245 05:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:06:09.245 Found 0000:af:00.1 (0x8086 - 0x159b) 00:06:09.245 05:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:09.245 05:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:09.245 05:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:09.245 05:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:09.245 05:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:09.245 05:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:09.245 05:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:09.245 05:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:09.245 05:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:09.246 05:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:09.246 05:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:09.246 05:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:09.246 05:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:09.246 05:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:09.246 05:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:09.246 05:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:06:09.246 Found net devices under 0000:af:00.0: cvl_0_0 00:06:09.246 05:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:09.246 05:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:09.246 05:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:09.246 05:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:09.246 05:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:09.246 05:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:09.246 05:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:09.246 05:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:09.246 05:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:06:09.246 Found net devices under 0000:af:00.1: cvl_0_1 00:06:09.246 05:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:09.246 05:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:09.246 05:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:06:09.246 05:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:09.246 05:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:09.246 05:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:09.246 05:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:09.246 05:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:09.246 05:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:09.246 05:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:09.246 05:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:09.246 05:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:09.246 05:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:09.246 05:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:09.246 05:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:09.246 05:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:09.246 05:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:09.246 05:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:09.246 05:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:09.246 05:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:09.246 05:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:09.246 05:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:09.246 05:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:09.246 05:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:09.246 05:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:09.246 05:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:09.246 05:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:09.246 05:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:09.246 05:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:09.246 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:09.246 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.386 ms 00:06:09.246 00:06:09.246 --- 10.0.0.2 ping statistics --- 00:06:09.246 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:09.246 rtt min/avg/max/mdev = 0.386/0.386/0.386/0.000 ms 00:06:09.246 05:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:09.246 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:09.246 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.175 ms 00:06:09.246 00:06:09.246 --- 10.0.0.1 ping statistics --- 00:06:09.246 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:09.246 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:06:09.246 05:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:09.246 05:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:06:09.246 05:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:09.246 05:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:09.246 05:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:09.246 05:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:09.246 05:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:09.246 05:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:09.246 05:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:09.246 05:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:06:09.246 05:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:09.246 05:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:09.246 05:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:09.246 05:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=126815 00:06:09.246 05:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 126815 00:06:09.246 05:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:09.246 05:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 
126815 ']' 00:06:09.246 05:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:09.246 05:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:09.246 05:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:09.246 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:09.246 05:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:09.246 05:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:09.246 [2024-12-13 05:22:08.656094] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:06:09.246 [2024-12-13 05:22:08.656141] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:09.246 [2024-12-13 05:22:08.734158] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:09.246 [2024-12-13 05:22:08.756177] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:09.246 [2024-12-13 05:22:08.756211] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:09.246 [2024-12-13 05:22:08.756218] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:09.246 [2024-12-13 05:22:08.756224] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:09.246 [2024-12-13 05:22:08.756228] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
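Condensed from the xtrace above, the physical-port NVMe/TCP plumbing amounts to the sequence below. The interface names, addresses, iptables rule, and nvmf_tgt flags are exactly as logged for this run; the backgrounding and PID capture are a simplification of what nvmfappstart/waitforlisten in nvmf/common.sh actually do, so treat this as a sketch rather than the verbatim helper:

# Move the target-side port into its own namespace and address both ends
ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Open TCP/4420 on the initiator-facing interface (the logged rule also adds an
# "-m comment" SPDK_NVMF tag so it can be cleaned up later)
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# Sanity-check both directions before starting the target
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
# Start the target inside the namespace (core mask 0xE gives the three reactors
# seen below) and wait for its RPC socket; PID capture simplified here
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!
waitforlisten "$nvmfpid"   # autotest_common.sh helper polling /var/tmp/spdk.sock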
00:06:09.246 [2024-12-13 05:22:08.757501] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:09.246 [2024-12-13 05:22:08.757609] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:09.246 [2024-12-13 05:22:08.757610] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:06:09.246 05:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:09.246 05:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:06:09.246 05:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:09.246 05:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:09.246 05:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:09.246 05:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:09.246 05:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:06:09.246 05:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:09.246 [2024-12-13 05:22:09.049670] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:09.246 05:22:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:09.503 05:22:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:09.504 [2024-12-13 05:22:09.475224] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:09.504 05:22:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:09.762 05:22:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:06:10.020 Malloc0 00:06:10.020 05:22:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:10.278 Delay0 00:06:10.278 05:22:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:10.536 05:22:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:06:10.536 NULL1 00:06:10.536 05:22:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:06:10.794 05:22:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=127286 00:06:10.794 05:22:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:06:10.794 05:22:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127286 00:06:10.794 05:22:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:11.054 05:22:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:11.312 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:06:11.312 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:06:11.312 true 00:06:11.570 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127286 00:06:11.570 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:11.570 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:11.829 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:06:11.829 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:06:12.087 true 00:06:12.087 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127286 00:06:12.087 05:22:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:12.346 05:22:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:12.605 05:22:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:06:12.605 05:22:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:06:12.605 true 00:06:12.863 05:22:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127286 00:06:12.863 05:22:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:12.863 05:22:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:13.122 05:22:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:06:13.122 05:22:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:06:13.381 true 00:06:13.381 05:22:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127286 00:06:13.381 05:22:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:13.641 05:22:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:13.899 05:22:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:06:13.899 05:22:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:06:13.899 true 00:06:14.158 05:22:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127286 00:06:14.158 05:22:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:14.159 05:22:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:14.417 05:22:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:06:14.417 05:22:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:06:14.676 true 00:06:14.676 05:22:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127286 00:06:14.676 05:22:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:14.934 05:22:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:15.192 05:22:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:06:15.192 05:22:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:06:15.449 true 00:06:15.449 05:22:15 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127286 00:06:15.449 05:22:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:15.707 05:22:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:15.707 05:22:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:06:15.708 05:22:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:06:15.966 true 00:06:15.966 05:22:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127286 00:06:15.966 05:22:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:16.225 05:22:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:16.484 05:22:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:06:16.484 05:22:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:06:16.743 true 00:06:16.743 05:22:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127286 00:06:16.743 05:22:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:17.001 05:22:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:17.001 05:22:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:06:17.001 05:22:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:06:17.260 true 00:06:17.260 05:22:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127286 00:06:17.260 05:22:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:17.519 05:22:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:17.778 05:22:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:06:17.778 05:22:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:06:18.037 true 00:06:18.037 05:22:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127286 00:06:18.037 05:22:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:18.295 05:22:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:18.554 05:22:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:06:18.554 05:22:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:06:18.554 true 00:06:18.554 05:22:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127286 00:06:18.554 05:22:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:18.812 05:22:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:19.071 05:22:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:06:19.071 05:22:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:06:19.330 true 00:06:19.330 05:22:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127286 00:06:19.330 05:22:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:19.589 05:22:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:19.848 05:22:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:06:19.848 05:22:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:06:19.848 true 00:06:19.848 05:22:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127286 00:06:19.848 05:22:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:20.106 05:22:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:20.365 05:22:20 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:06:20.365 05:22:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:06:20.623 true 00:06:20.623 05:22:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127286 00:06:20.623 05:22:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:20.882 05:22:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:21.140 05:22:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:06:21.140 05:22:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:06:21.140 true 00:06:21.140 05:22:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127286 00:06:21.140 05:22:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:21.399 05:22:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:21.658 05:22:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:06:21.658 05:22:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:06:21.917 true 00:06:21.917 05:22:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127286 00:06:21.917 05:22:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:22.176 05:22:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:22.435 05:22:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:06:22.435 05:22:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:06:22.435 true 00:06:22.694 05:22:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127286 00:06:22.694 05:22:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:22.694 05:22:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:22.953 05:22:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:06:22.953 05:22:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:06:23.212 true 00:06:23.212 05:22:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127286 00:06:23.212 05:22:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:23.471 05:22:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:23.729 05:22:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:06:23.729 05:22:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:06:23.988 true 00:06:23.988 05:22:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127286 00:06:23.988 05:22:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:24.246 05:22:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:24.246 05:22:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:06:24.246 05:22:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:06:24.505 true 00:06:24.505 05:22:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127286 00:06:24.505 05:22:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:24.764 05:22:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:25.023 05:22:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:06:25.023 05:22:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:06:25.281 true 00:06:25.281 05:22:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127286 00:06:25.281 05:22:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:25.538 05:22:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:25.538 05:22:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:06:25.538 05:22:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:06:25.796 true 00:06:25.796 05:22:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127286 00:06:25.796 05:22:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:26.054 05:22:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:26.312 05:22:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:06:26.312 05:22:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:06:26.570 true 00:06:26.570 05:22:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127286 00:06:26.570 05:22:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:26.828 05:22:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:27.087 05:22:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:06:27.087 05:22:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:06:27.087 true 00:06:27.087 05:22:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127286 00:06:27.087 05:22:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:27.346 05:22:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:27.605 05:22:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:06:27.605 05:22:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:06:27.865 true 00:06:27.865 05:22:27 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127286 00:06:27.865 05:22:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:28.124 05:22:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:28.383 05:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:06:28.383 05:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:06:28.383 true 00:06:28.383 05:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127286 00:06:28.383 05:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:28.643 05:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:28.902 05:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:06:28.902 05:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:06:29.162 true 00:06:29.162 05:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127286 00:06:29.162 05:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:29.421 05:22:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:29.681 05:22:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:06:29.681 05:22:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:06:29.681 true 00:06:29.681 05:22:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127286 00:06:29.681 05:22:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:29.940 05:22:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:30.199 05:22:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:06:30.199 05:22:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:06:30.458 true 00:06:30.458 05:22:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127286 00:06:30.458 05:22:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:30.717 05:22:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:30.976 05:22:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:06:30.976 05:22:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:06:30.976 true 00:06:30.976 05:22:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127286 00:06:30.976 05:22:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:31.239 05:22:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:31.505 05:22:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:06:31.505 05:22:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:06:31.764 true 00:06:31.764 05:22:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127286 00:06:31.764 05:22:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:32.023 05:22:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:32.023 05:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:06:32.023 05:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:06:32.282 true 00:06:32.282 05:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127286 00:06:32.282 05:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:32.541 05:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:32.800 05:22:32 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:06:32.800 05:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:06:33.059 true 00:06:33.059 05:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127286 00:06:33.059 05:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:33.318 05:22:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:33.318 05:22:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:06:33.318 05:22:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:06:33.577 true 00:06:33.577 05:22:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127286 00:06:33.577 05:22:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:33.837 05:22:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:34.096 05:22:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:06:34.096 05:22:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:06:34.355 true 00:06:34.355 05:22:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127286 00:06:34.355 05:22:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:34.614 05:22:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:34.614 05:22:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:06:34.614 05:22:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:06:34.874 true 00:06:34.874 05:22:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127286 00:06:34.874 05:22:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:35.133 05:22:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:35.391 05:22:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:06:35.391 05:22:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:06:35.650 true 00:06:35.650 05:22:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127286 00:06:35.650 05:22:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:35.915 05:22:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:35.915 05:22:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:06:35.915 05:22:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:06:36.175 true 00:06:36.175 05:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127286 00:06:36.175 05:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:36.434 05:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:36.692 05:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:06:36.692 05:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:06:36.952 true 00:06:36.952 05:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127286 00:06:36.952 05:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:37.211 05:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:37.211 05:22:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:06:37.211 05:22:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:06:37.470 true 00:06:37.470 05:22:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127286 00:06:37.470 05:22:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:37.730 05:22:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:37.989 05:22:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:06:37.989 05:22:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:06:38.248 true 00:06:38.248 05:22:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127286 00:06:38.248 05:22:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:38.507 05:22:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:38.507 05:22:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:06:38.507 05:22:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:06:38.766 true 00:06:38.766 05:22:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127286 00:06:38.766 05:22:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:39.025 05:22:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:39.285 05:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:06:39.285 05:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:06:39.544 true 00:06:39.544 05:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127286 00:06:39.544 05:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:39.803 05:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:39.803 05:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:06:39.803 05:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:06:40.062 true 00:06:40.062 05:22:40 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127286 00:06:40.062 05:22:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:40.321 05:22:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:40.580 05:22:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:06:40.580 05:22:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:06:40.839 true 00:06:40.839 05:22:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127286 00:06:40.839 05:22:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:41.098 05:22:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:41.098 Initializing NVMe Controllers 00:06:41.098 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:41.098 Controller IO queue size 128, less than required. 00:06:41.098 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:41.098 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:06:41.098 Initialization complete. Launching workers. 
00:06:41.098 ========================================================
00:06:41.098 Latency(us)
00:06:41.098 Device Information : IOPS MiB/s Average min max
00:06:41.098 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 27629.40 13.49 4633.39 2293.41 44349.73
00:06:41.098 ========================================================
00:06:41.098 Total : 27629.40 13.49 4633.39 2293.41 44349.73
00:06:41.098
00:06:41.098 05:22:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047
00:06:41.098 05:22:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047
00:06:41.358 true
00:06:41.358 05:22:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127286
00:06:41.358 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (127286) - No such process
00:06:41.358 05:22:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 127286
00:06:41.358 05:22:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:41.617 05:22:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:41.876 05:22:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:06:41.876 05:22:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:06:41.876 05:22:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:06:41.876 05:22:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:06:41.876 05:22:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:06:41.876 null0
00:06:42.135 05:22:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:06:42.135 05:22:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:06:42.135 05:22:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
null1
00:06:42.135 05:22:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:06:42.135 05:22:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:06:42.135 05:22:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:06:42.394 null2
00:06:42.395 05:22:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:06:42.395 05:22:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
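
The trace above is the tail of the test's first phase: on each pass the script checks that the background I/O generator (pid 127286) is still alive, hot-removes namespace 1, re-adds it backed by the Delay0 bdev, and resizes the NULL1 bdev one step larger (null_size 1027 through 1047 in this excerpt). When the generator exits, kill -0 fails with "No such process", the loop ends, and the script reaps the job and removes both namespaces; the generator's closing report shows it sustained about 27.6k IOPS at roughly 4.6 ms average latency against NSID 2 while this churn was running. Read back from the @NN trace markers, the loop is roughly the following sketch (the pid capture, the initial null_size, and any variable names not visible in the trace are assumptions, not the verbatim script):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    perf_pid=127286   # assumed: $! of the background I/O job started earlier
    null_size=1000    # assumed starting value; only 1027..1047 are visible here

    while kill -0 "$perf_pid"; do                      # @44: loop while the I/O job runs
        "$rpc" nvmf_subsystem_remove_ns "$nqn" 1       # @45: hot-remove namespace 1
        "$rpc" nvmf_subsystem_add_ns "$nqn" Delay0     # @46: re-add it on the Delay0 bdev
        null_size=$((null_size + 1))                   # @49: next size, one step larger
        "$rpc" bdev_null_resize NULL1 "$null_size"     # @50: resize under load; prints "true"
    done
    wait "$perf_pid"                                   # @53: reap the finished generator
    "$rpc" nvmf_subsystem_remove_ns "$nqn" 1           # @54: final cleanup of both namespaces
    "$rpc" nvmf_subsystem_remove_ns "$nqn" 2           # @55
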
05:22:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:06:42.654 null3 00:06:42.654 05:22:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:42.654 05:22:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:42.654 05:22:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:06:42.654 null4 00:06:42.913 05:22:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:42.913 05:22:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:42.913 05:22:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:06:42.913 null5 00:06:42.913 05:22:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:42.913 05:22:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:42.913 05:22:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:06:43.172 null6 00:06:43.172 05:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:43.172 05:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:43.172 05:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:06:43.433 null7 00:06:43.433 05:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:43.433 05:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:43.433 05:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:06:43.433 05:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:43.433 05:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
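
At this point the trace has switched to the concurrent phase's setup: @58 initializes nthreads=8 and an empty pids array, and the @59-@60 loop creates one null bdev per worker, null0 through null7. Going by the rpc.py bdev_null_create argument order, each is a 100 MB null bdev with a 4096-byte block size, and rpc.py echoes the new bdev's name after each call. A minimal sketch of that setup loop, reusing $rpc from the sketch above:

    nthreads=8                                         # @58: number of concurrent workers
    pids=()                                            # @58: worker pids, filled in later

    for ((i = 0; i < nthreads; ++i)); do               # @59
        "$rpc" bdev_null_create "null$i" 100 4096      # @60: name, size in MB, block size
    done
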
00:06:43.433 05:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:43.433 05:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:06:43.433 05:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:43.433 05:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:06:43.433 05:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:43.433 05:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:43.433 05:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:06:43.433 05:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:43.433 05:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:06:43.433 05:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:43.433 05:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:43.433 05:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:43.434 05:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:43.434 05:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:43.434 05:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:43.434 05:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:43.434 05:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:06:43.434 05:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:06:43.434 05:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:43.434 05:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:43.434 05:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:43.434 05:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:43.434 05:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:43.434 05:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:43.434 05:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:43.434 05:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:06:43.434 05:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:43.434 05:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:06:43.434 05:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:43.434 05:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:43.434 05:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:43.434 05:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:43.434 05:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:43.434 05:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:06:43.434 05:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:43.434 05:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:06:43.434 05:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:43.434 05:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:43.434 05:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:43.434 05:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:43.434 05:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:43.434 05:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:06:43.434 05:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:43.434 05:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:06:43.434 05:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:43.434 05:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:43.434 05:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:43.434 05:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
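
The interleaved @62-@64 markers launch eight add_remove workers in the background, one namespace/bdev pair each (add_remove 1 null0 up through add_remove 8 null7), collecting each worker's pid; @66, visible just below, then waits on all of them (wait 132820 132821 132824 132825 132827 132829 132831 132833). The @14-@18 markers trace the worker body itself: ten rounds of nvmf_subsystem_add_ns -n <nsid> followed by nvmf_subsystem_remove_ns. A sketch of both pieces, reusing $rpc and $nqn from the sketches above (the function text is reconstructed from the trace, not quoted from the script):

    add_remove() {
        local nsid=$1 bdev=$2                                      # @14: e.g. add_remove 1 null0
        for ((i = 0; i < 10; ++i)); do                             # @16: ten add/remove rounds
            "$rpc" nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev" # @17: add with a pinned NSID
            "$rpc" nvmf_subsystem_remove_ns "$nqn" "$nsid"         # @18: rip it back out
        done
    }

    for ((i = 0; i < nthreads; ++i)); do               # @62
        add_remove $((i + 1)) "null$i" &               # @63: worker k owns NSID k
        pids+=($!)                                     # @64: remember its pid
    done
    wait "${pids[@]}"                                  # @66: block until all eight finish

Because each background worker owns a distinct namespace ID backed by its own null bdev, the eight workers hammer nvmf_subsystem_add_ns/nvmf_subsystem_remove_ns concurrently without contending for the same NSID, which is exactly the interleaving visible in the trace that follows.
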
00:06:43.434 05:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:06:43.434 05:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:43.434 05:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:43.434 05:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:06:43.434 05:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:43.434 05:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:43.434 05:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:43.434 05:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:43.434 05:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:43.434 05:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:43.434 05:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:06:43.434 05:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 132820 132821 132824 132825 132827 132829 132831 132833 00:06:43.434 05:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:06:43.434 05:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:43.434 05:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:43.434 05:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:43.694 05:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:43.694 05:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:43.694 05:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:43.694 05:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:43.694 05:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:43.694 05:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:43.694 05:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:43.694 05:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:43.694 05:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:43.694 05:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:43.694 05:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:43.694 05:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:43.694 05:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:43.694 05:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:43.694 05:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:43.694 05:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:43.694 05:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:43.694 05:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:43.694 05:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:43.694 05:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:43.694 05:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:43.694 05:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:43.694 05:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:43.694 05:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:43.694 05:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:43.694 05:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:43.694 05:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 
nqn.2016-06.io.spdk:cnode1 null7 00:06:43.694 05:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:43.694 05:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:43.954 05:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:43.954 05:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:43.954 05:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:43.954 05:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:43.954 05:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:43.954 05:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:43.954 05:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:43.954 05:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:43.954 05:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:43.954 05:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:43.954 05:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:44.214 05:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:44.214 05:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.214 05:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:44.214 05:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:44.214 05:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.214 05:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:44.214 05:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:44.214 05:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.214 05:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:44.214 05:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:44.214 05:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.214 05:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:44.214 05:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:44.214 05:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.214 05:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:44.214 05:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:44.214 05:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.214 05:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:44.214 05:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:44.214 05:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.214 05:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:44.214 05:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:44.214 05:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.214 05:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:44.473 05:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:44.473 05:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:44.473 05:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:44.473 05:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:44.473 05:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:44.473 05:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:44.473 05:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:44.473 05:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:44.733 05:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:44.733 05:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.733 05:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:44.733 05:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:44.733 05:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.733 05:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:44.733 05:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:44.733 05:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.733 05:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:44.733 05:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:44.733 05:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.733 05:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:44.733 05:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:44.733 05:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.733 05:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:44.733 05:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:44.733 05:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.733 05:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:44.733 05:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:44.733 05:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.733 05:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:44.733 05:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:44.733 05:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.733 05:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:44.733 05:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:44.733 05:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:44.733 05:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:44.733 05:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:44.992 05:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:44.992 05:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:44.992 05:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:44.992 05:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:44.992 05:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:06:44.992 05:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.992 05:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:44.992 05:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:44.992 05:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.992 05:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:44.992 05:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:44.992 05:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.992 05:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:44.992 05:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:44.992 05:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.992 05:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:44.992 05:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:44.992 05:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.992 05:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:44.992 05:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:44.992 05:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.992 05:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:44.992 05:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:44.992 05:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.992 05:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:44.992 05:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:44.992 05:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.992 05:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:45.250 05:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:45.250 05:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:45.250 05:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:45.250 05:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:45.250 05:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:45.250 05:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:45.250 05:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:45.250 05:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:45.508 05:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:45.509 05:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:45.509 05:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:45.509 05:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:45.509 05:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:45.509 05:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:45.509 05:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:45.509 05:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:45.509 05:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:45.509 05:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:06:45.509 05:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:45.509 05:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:45.509 05:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:45.509 05:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:45.509 05:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:45.509 05:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:45.509 05:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:45.509 05:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:45.509 05:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:45.509 05:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:45.509 05:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:45.509 05:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:45.509 05:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:45.509 05:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:45.769 05:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:45.769 05:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:45.769 05:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:45.769 05:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:45.769 05:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:45.769 05:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:45.769 05:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:45.769 05:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:45.769 05:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:45.769 05:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:45.769 05:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:45.769 05:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:45.769 05:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:45.769 05:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:46.028 05:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:46.028 05:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:46.028 05:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:46.028 05:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:46.028 05:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:46.028 05:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:46.028 05:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:46.028 05:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:46.028 05:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:46.028 05:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:46.028 05:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:46.028 05:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:46.028 05:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:06:46.028 05:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:46.028 05:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:46.028 05:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:46.028 05:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:46.029 05:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:46.029 05:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:46.029 05:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:46.029 05:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:46.029 05:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:46.029 05:22:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:46.029 05:22:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:46.029 05:22:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:46.029 05:22:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:46.288 05:22:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:46.288 05:22:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:46.288 05:22:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:46.288 05:22:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:46.288 05:22:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:46.288 05:22:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:46.288 05:22:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:46.288 05:22:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:46.288 05:22:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:46.288 05:22:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:46.288 05:22:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:46.288 05:22:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:46.288 05:22:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:46.288 05:22:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:46.288 05:22:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:46.288 05:22:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:46.288 05:22:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:46.288 05:22:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:46.288 05:22:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:46.288 05:22:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:46.288 05:22:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:46.288 05:22:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:46.288 05:22:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:46.288 05:22:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:46.549 05:22:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:46.549 05:22:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:46.549 05:22:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:46.549 05:22:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:46.549 05:22:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:46.549 05:22:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:46.549 05:22:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:46.549 05:22:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:46.809 05:22:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:46.809 05:22:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:46.809 05:22:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:46.809 05:22:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:46.809 05:22:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:46.809 05:22:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:46.809 05:22:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:46.809 05:22:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:46.809 05:22:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:46.809 05:22:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:46.809 05:22:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:46.809 05:22:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:46.809 05:22:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:46.809 05:22:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:46.809 05:22:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:46.809 05:22:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:46.809 05:22:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:46.809 05:22:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:46.809 05:22:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:46.809 05:22:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:46.809 05:22:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:46.809 05:22:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:46.809 05:22:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:46.809 05:22:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:46.809 05:22:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:46.809 05:22:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:46.809 05:22:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:47.068 05:22:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:47.068 05:22:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:47.068 05:22:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:47.068 05:22:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:47.068 05:22:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:47.068 05:22:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:06:47.068 05:22:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:47.068 05:22:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:47.068 05:22:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:47.068 05:22:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:47.068 05:22:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:47.068 05:22:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:47.068 05:22:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:47.068 05:22:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:47.068 05:22:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:47.068 05:22:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:47.068 05:22:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:47.068 05:22:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:47.068 05:22:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:47.068 05:22:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:47.068 05:22:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:47.068 05:22:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:47.068 05:22:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:47.068 05:22:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:47.068 05:22:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:47.068 05:22:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:47.068 05:22:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:47.068 05:22:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:47.068 05:22:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:47.328 05:22:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:47.328 05:22:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:47.328 05:22:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:47.328 05:22:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:47.328 05:22:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:47.328 05:22:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:47.328 05:22:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:47.328 05:22:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:47.587 05:22:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:47.587 05:22:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:47.587 05:22:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:47.587 05:22:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:47.587 05:22:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:47.587 05:22:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:47.587 05:22:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:47.587 05:22:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:47.587 05:22:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:47.587 05:22:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:47.587 05:22:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:47.588 05:22:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:47.588 05:22:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:47.588 05:22:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:47.588 05:22:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:47.588 05:22:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:47.588 05:22:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:06:47.588 05:22:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:06:47.588 05:22:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:47.588 05:22:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:06:47.588 05:22:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:47.588 05:22:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:06:47.588 05:22:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:47.588 05:22:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:47.588 rmmod nvme_tcp 00:06:47.588 rmmod nvme_fabrics 00:06:47.588 rmmod nvme_keyring 00:06:47.588 05:22:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:47.588 05:22:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:06:47.588 05:22:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:06:47.588 05:22:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 126815 ']' 00:06:47.588 05:22:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 126815 00:06:47.588 05:22:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 126815 ']' 00:06:47.588 05:22:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 126815 00:06:47.588 05:22:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:06:47.588 05:22:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:47.588 05:22:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 126815 00:06:47.848 05:22:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:47.848 05:22:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:47.848 05:22:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 126815' 00:06:47.848 killing process with pid 126815 00:06:47.848 05:22:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 126815 00:06:47.848 05:22:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 126815 00:06:47.848 05:22:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:47.848 05:22:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 
-- # [[ tcp == \t\c\p ]]
00:06:47.848 05:22:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:06:47.848 05:22:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr
00:06:47.848 05:22:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save
00:06:47.848 05:22:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:06:47.848 05:22:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore
00:06:47.848 05:22:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:06:47.848 05:22:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns
00:06:47.848 05:22:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:06:47.848 05:22:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:06:47.848 05:22:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:06:50.387 05:22:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:06:50.387
00:06:50.387 real 0m47.409s
00:06:50.387 user 3m22.548s
00:06:50.387 sys 0m17.155s
00:06:50.387 05:22:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:50.387 05:22:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:06:50.387 ************************************
00:06:50.387 END TEST nvmf_ns_hotplug_stress
00:06:50.387 ************************************
00:06:50.388 05:22:49 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp
00:06:50.388 05:22:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:06:50.388 05:22:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:50.388 05:22:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:06:50.388 ************************************
00:06:50.388 START TEST nvmf_delete_subsystem
00:06:50.388 ************************************
00:06:50.388 05:22:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp
00:06:50.388 * Looking for test storage...
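
Before delete_subsystem.sh proper starts, the trace below walks through a version check in scripts/common.sh: lt 1.15 2 asks whether the installed lcov (1.15) predates 2.x, and because it does, the lcov 1.x coverage options get exported. The traced implementation goes through cmp_versions and a decimal helper with a case on the operator; the minimal sketch here handles only the strict '<' used at this call site and assumes purely numeric version fields:

    lt() {
        local ver1 ver2 v
        IFS=.-: read -ra ver1 <<< "$1"    # "1.15" -> (1 15); split on '.', '-', ':'
        IFS=.-: read -ra ver2 <<< "$2"    # "2"    -> (2)
        for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
            ((${ver1[v]:-0} < ${ver2[v]:-0})) && return 0   # first differing field decides
            ((${ver1[v]:-0} > ${ver2[v]:-0})) && return 1
        done
        return 1    # equal versions are not strictly less
    }

    lt 1.15 2    # succeeds (1 < 2), so the '--rc lcov_*' option set in the trace is kept
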
00:06:50.388 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:50.388 05:22:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:50.388 05:22:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:06:50.388 05:22:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:50.388 05:22:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:50.388 05:22:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:50.388 05:22:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:50.388 05:22:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:50.388 05:22:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:06:50.388 05:22:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:06:50.388 05:22:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:06:50.388 05:22:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:06:50.388 05:22:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:06:50.388 05:22:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:06:50.388 05:22:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:06:50.388 05:22:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:50.388 05:22:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:06:50.388 05:22:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:06:50.388 05:22:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:50.388 05:22:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:50.388 05:22:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:06:50.388 05:22:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:06:50.388 05:22:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:50.388 05:22:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:06:50.388 05:22:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:06:50.388 05:22:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:06:50.388 05:22:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:06:50.388 05:22:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:50.388 05:22:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:06:50.388 05:22:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:06:50.388 05:22:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:50.388 05:22:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:50.388 05:22:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:06:50.388 05:22:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:50.388 05:22:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:50.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.388 --rc genhtml_branch_coverage=1 00:06:50.388 --rc genhtml_function_coverage=1 00:06:50.388 --rc genhtml_legend=1 00:06:50.388 --rc geninfo_all_blocks=1 00:06:50.388 --rc geninfo_unexecuted_blocks=1 00:06:50.388 00:06:50.388 ' 00:06:50.388 05:22:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:50.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.388 --rc genhtml_branch_coverage=1 00:06:50.388 --rc genhtml_function_coverage=1 00:06:50.388 --rc genhtml_legend=1 00:06:50.388 --rc geninfo_all_blocks=1 00:06:50.388 --rc geninfo_unexecuted_blocks=1 00:06:50.388 00:06:50.388 ' 00:06:50.388 05:22:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:50.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.388 --rc genhtml_branch_coverage=1 00:06:50.388 --rc genhtml_function_coverage=1 00:06:50.388 --rc genhtml_legend=1 00:06:50.388 --rc geninfo_all_blocks=1 00:06:50.388 --rc geninfo_unexecuted_blocks=1 00:06:50.388 00:06:50.388 ' 00:06:50.388 05:22:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:50.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.388 --rc genhtml_branch_coverage=1 00:06:50.388 --rc genhtml_function_coverage=1 00:06:50.388 --rc genhtml_legend=1 00:06:50.388 --rc geninfo_all_blocks=1 00:06:50.388 --rc geninfo_unexecuted_blocks=1 00:06:50.388 00:06:50.388 ' 00:06:50.388 05:22:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:50.388 05:22:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:06:50.388 05:22:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:50.388 05:22:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:50.388 05:22:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:50.388 05:22:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:50.388 05:22:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:50.388 05:22:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:50.388 05:22:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:50.388 05:22:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:50.388 05:22:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:50.388 05:22:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:50.388 05:22:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:06:50.388 05:22:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:06:50.388 05:22:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:50.388 05:22:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:50.388 05:22:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:50.388 05:22:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:50.388 05:22:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:50.388 05:22:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:06:50.388 05:22:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:50.388 05:22:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:50.388 05:22:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:50.388 05:22:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:50.388 05:22:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:50.388 05:22:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:50.388 05:22:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:06:50.388 05:22:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:50.388 05:22:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:06:50.388 05:22:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:50.388 05:22:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:50.388 05:22:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:50.388 05:22:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:50.389 05:22:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:50.389 05:22:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:50.389 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:50.389 05:22:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:50.389 05:22:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:50.389 05:22:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:50.389 05:22:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:06:50.389 05:22:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:50.389 05:22:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:50.389 05:22:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:50.389 05:22:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:50.389 05:22:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:50.389 05:22:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:50.389 05:22:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:50.389 05:22:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:50.389 05:22:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:50.389 05:22:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:50.389 05:22:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:06:50.389 05:22:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:56.977 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:56.977 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:06:56.977 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:56.977 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:56.977 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:56.977 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:56.977 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:56.977 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:06:56.977 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:56.977 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:06:56.977 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:06:56.977 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:06:56.977 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # 
local -ga x722 00:06:56.977 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:06:56.977 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:06:56.977 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:56.977 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:56.977 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:56.977 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:56.977 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:56.977 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:56.977 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:56.977 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:56.977 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:56.977 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:56.977 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:56.977 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:56.977 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:56.977 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:56.977 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:56.977 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:56.977 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:56.977 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:56.977 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:56.977 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:06:56.977 Found 0000:af:00.0 (0x8086 - 0x159b) 00:06:56.977 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:56.977 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:56.977 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:56.977 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:56.977 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:56.977 
05:22:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:56.977 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:06:56.977 Found 0000:af:00.1 (0x8086 - 0x159b) 00:06:56.977 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:56.977 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:56.977 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:56.977 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:56.977 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:56.977 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:56.977 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:56.977 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:56.977 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:56.977 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:56.978 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:56.978 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:56.978 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:56.978 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:56.978 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:56.978 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:06:56.978 Found net devices under 0000:af:00.0: cvl_0_0 00:06:56.978 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:56.978 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:56.978 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:56.978 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:56.978 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:56.978 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:56.978 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:56.978 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:56.978 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:06:56.978 Found net devices under 0000:af:00.1: cvl_0_1 
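
With both ice-driven ports discovered (cvl_0_0 and cvl_0_1), the nvmf_tcp_init sequence traced below places them on opposite sides of a network namespace so initiator and target can exchange real TCP traffic on a single host. Condensed from the traced commands, with paths and addresses exactly as logged:

    ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk                  # the target will run in this namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # first port -> target side
    ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator side (host)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

The two pings that follow (10.0.0.2 from the host side, then 10.0.0.1 from inside cvl_0_0_ns_spdk) verify the path in both directions before any NVMe-oF traffic is attempted.
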
00:06:56.978 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:56.978 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:56.978 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:06:56.978 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:56.978 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:56.978 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:56.978 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:56.978 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:56.978 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:56.978 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:56.978 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:56.978 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:56.978 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:56.978 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:56.978 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:56.978 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:56.978 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:56.978 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:56.978 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:56.978 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:56.978 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:56.978 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:56.978 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:56.978 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:56.978 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:56.978 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:56.978 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:56.978 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:56.978 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:56.978 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:56.978 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.368 ms 00:06:56.978 00:06:56.978 --- 10.0.0.2 ping statistics --- 00:06:56.978 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:56.978 rtt min/avg/max/mdev = 0.368/0.368/0.368/0.000 ms 00:06:56.978 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:56.978 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:56.978 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.174 ms 00:06:56.978 00:06:56.978 --- 10.0.0.1 ping statistics --- 00:06:56.978 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:56.978 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:06:56.978 05:22:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:56.978 05:22:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:06:56.978 05:22:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:56.978 05:22:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:56.978 05:22:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:56.978 05:22:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:56.978 05:22:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:56.978 05:22:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:56.978 05:22:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:56.978 05:22:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:06:56.978 05:22:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:56.978 05:22:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:56.978 05:22:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:56.978 05:22:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=137134 00:06:56.978 05:22:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:06:56.978 05:22:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 137134 00:06:56.978 05:22:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 137134 ']' 00:06:56.978 05:22:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:56.978 05:22:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:56.978 05:22:56 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:56.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:56.978 05:22:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:56.978 05:22:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:56.978 [2024-12-13 05:22:56.096653] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:06:56.978 [2024-12-13 05:22:56.096701] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:56.978 [2024-12-13 05:22:56.172332] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:56.978 [2024-12-13 05:22:56.194659] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:56.978 [2024-12-13 05:22:56.194694] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:56.978 [2024-12-13 05:22:56.194701] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:56.978 [2024-12-13 05:22:56.194706] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:56.978 [2024-12-13 05:22:56.194711] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:56.978 [2024-12-13 05:22:56.195756] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:56.978 [2024-12-13 05:22:56.195759] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.978 05:22:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:56.978 05:22:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:06:56.978 05:22:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:56.978 05:22:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:56.978 05:22:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:56.978 05:22:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:56.978 05:22:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:56.978 05:22:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.978 05:22:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:56.978 [2024-12-13 05:22:56.327858] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:56.978 05:22:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.978 05:22:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:56.978 05:22:56 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.978 05:22:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:56.978 05:22:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.978 05:22:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:56.978 05:22:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.979 05:22:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:56.979 [2024-12-13 05:22:56.348063] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:56.979 05:22:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.979 05:22:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:06:56.979 05:22:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.979 05:22:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:56.979 NULL1 00:06:56.979 05:22:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.979 05:22:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:56.979 05:22:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.979 05:22:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:56.979 Delay0 00:06:56.979 05:22:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.979 05:22:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:56.979 05:22:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.979 05:22:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:56.979 05:22:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.979 05:22:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=137188 00:06:56.979 05:22:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:06:56.979 05:22:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:56.979 [2024-12-13 05:22:56.459093] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
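At this point the target side is fully provisioned over JSON-RPC: nvmf_tgt runs inside the cvl_0_0_ns_spdk namespace listening on 10.0.0.2:4420, backed by a null bdev wrapped in a delay bdev whose -r/-t/-w/-n arguments are in microseconds (so roughly one second of injected latency per operation), and spdk_nvme_perf (pid 137188) drives it from the host side for 5 seconds. A condensed sketch of the same bring-up using scripts/rpc.py directly — the test's rpc_cmd wrapper resolves to equivalent calls, so treat this as an assumed equivalent rather than the script verbatim:

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py bdev_null_create NULL1 1000 512        # 1000 MiB backing device, 512 B blocks
  rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

The deliberately slow Delay0 namespace is what keeps the 128-deep queues full, so the nvmf_delete_subsystem at @32 lands while I/O is still in flight — the flood of 'completed with error (sct=0, sc=8)' records below is the expected outcome of deleting a subsystem mid-I/O, not a failure of the test.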
00:06:58.884 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:58.884 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.884 05:22:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:58.884 Read completed with error (sct=0, sc=8) 00:06:58.884 starting I/O failed: -6 00:06:58.884 Write completed with error (sct=0, sc=8) 00:06:58.884 Write completed with error (sct=0, sc=8) 00:06:58.884 Read completed with error (sct=0, sc=8) 00:06:58.884 Read completed with error (sct=0, sc=8) 00:06:58.884 starting I/O failed: -6 00:06:58.884 Read completed with error (sct=0, sc=8) 00:06:58.884 Read completed with error (sct=0, sc=8) 00:06:58.884 Read completed with error (sct=0, sc=8) 00:06:58.884 Write completed with error (sct=0, sc=8) 00:06:58.884 starting I/O failed: -6 00:06:58.884 Read completed with error (sct=0, sc=8) 00:06:58.884 Write completed with error (sct=0, sc=8) 00:06:58.884 Read completed with error (sct=0, sc=8) 00:06:58.884 Read completed with error (sct=0, sc=8) 00:06:58.884 starting I/O failed: -6 00:06:58.884 Write completed with error (sct=0, sc=8) 00:06:58.884 Write completed with error (sct=0, sc=8) 00:06:58.884 Write completed with error (sct=0, sc=8) 00:06:58.884 Read completed with error (sct=0, sc=8) 00:06:58.884 starting I/O failed: -6 00:06:58.884 Write completed with error (sct=0, sc=8) 00:06:58.884 Read completed with error (sct=0, sc=8) 00:06:58.884 Read completed with error (sct=0, sc=8) 00:06:58.884 Read completed with error (sct=0, sc=8) 00:06:58.884 starting I/O failed: -6 00:06:58.884 Read completed with error (sct=0, sc=8) 00:06:58.884 Read completed with error (sct=0, sc=8) 00:06:58.884 Write completed with error (sct=0, sc=8) 00:06:58.884 Read completed with error (sct=0, sc=8) 00:06:58.884 starting I/O failed: -6 00:06:58.884 Write completed with error (sct=0, sc=8) 00:06:58.884 Read completed with error (sct=0, sc=8) 00:06:58.884 Write completed with error (sct=0, sc=8) 00:06:58.884 Write completed with error (sct=0, sc=8) 00:06:58.884 starting I/O failed: -6 00:06:58.884 Read completed with error (sct=0, sc=8) 00:06:58.884 Read completed with error (sct=0, sc=8) 00:06:58.884 Read completed with error (sct=0, sc=8) 00:06:58.884 Read completed with error (sct=0, sc=8) 00:06:58.884 starting I/O failed: -6 00:06:58.884 Write completed with error (sct=0, sc=8) 00:06:58.884 Read completed with error (sct=0, sc=8) 00:06:58.884 Write completed with error (sct=0, sc=8) 00:06:58.884 Read completed with error (sct=0, sc=8) 00:06:58.884 starting I/O failed: -6 00:06:58.884 Write completed with error (sct=0, sc=8) 00:06:58.884 Write completed with error (sct=0, sc=8) 00:06:58.884 Read completed with error (sct=0, sc=8) 00:06:58.884 Read completed with error (sct=0, sc=8) 00:06:58.884 starting I/O failed: -6 00:06:58.884 Read completed with error (sct=0, sc=8) 00:06:58.884 Read completed with error (sct=0, sc=8) 00:06:58.884 starting I/O failed: -6 00:06:58.884 Read completed with error (sct=0, sc=8) 00:06:58.884 Read completed with error (sct=0, sc=8) 00:06:58.884 starting I/O failed: -6 00:06:58.884 Read completed with error (sct=0, sc=8) 00:06:58.884 Read completed with error (sct=0, sc=8) 00:06:58.884 starting I/O failed: -6 00:06:58.884 Read completed with error (sct=0, sc=8) 00:06:58.884 Write completed with error (sct=0, sc=8) 00:06:58.884 
starting I/O failed: -6 00:06:58.884 Read completed with error (sct=0, sc=8) 00:06:58.884 Read completed with error (sct=0, sc=8) 00:06:58.884 starting I/O failed: -6 00:06:58.884 Read completed with error (sct=0, sc=8) 00:06:58.884 Read completed with error (sct=0, sc=8) 00:06:58.884 starting I/O failed: -6 00:06:58.885 Read completed with error (sct=0, sc=8) 00:06:58.885 Read completed with error (sct=0, sc=8) 00:06:58.885 starting I/O failed: -6 00:06:58.885 Read completed with error (sct=0, sc=8) 00:06:58.885 Read completed with error (sct=0, sc=8) 00:06:58.885 starting I/O failed: -6 00:06:58.885 Read completed with error (sct=0, sc=8) 00:06:58.885 Read completed with error (sct=0, sc=8) 00:06:58.885 starting I/O failed: -6 00:06:58.885 Write completed with error (sct=0, sc=8) 00:06:58.885 Read completed with error (sct=0, sc=8) 00:06:58.885 starting I/O failed: -6 00:06:58.885 Write completed with error (sct=0, sc=8) 00:06:58.885 Read completed with error (sct=0, sc=8) 00:06:58.885 starting I/O failed: -6 00:06:58.885 Read completed with error (sct=0, sc=8) 00:06:58.885 Read completed with error (sct=0, sc=8) 00:06:58.885 starting I/O failed: -6 00:06:58.885 Write completed with error (sct=0, sc=8) 00:06:58.885 Write completed with error (sct=0, sc=8) 00:06:58.885 starting I/O failed: -6 00:06:58.885 Read completed with error (sct=0, sc=8) 00:06:58.885 Read completed with error (sct=0, sc=8) 00:06:58.885 starting I/O failed: -6 00:06:58.885 Read completed with error (sct=0, sc=8) 00:06:58.885 Write completed with error (sct=0, sc=8) 00:06:58.885 starting I/O failed: -6 00:06:58.885 Read completed with error (sct=0, sc=8) 00:06:58.885 Write completed with error (sct=0, sc=8) 00:06:58.885 starting I/O failed: -6 00:06:58.885 Read completed with error (sct=0, sc=8) 00:06:58.885 Read completed with error (sct=0, sc=8) 00:06:58.885 starting I/O failed: -6 00:06:58.885 Read completed with error (sct=0, sc=8) 00:06:58.885 Write completed with error (sct=0, sc=8) 00:06:58.885 starting I/O failed: -6 00:06:58.885 Read completed with error (sct=0, sc=8) 00:06:58.885 Read completed with error (sct=0, sc=8) 00:06:58.885 starting I/O failed: -6 00:06:58.885 Read completed with error (sct=0, sc=8) 00:06:58.885 Read completed with error (sct=0, sc=8) 00:06:58.885 starting I/O failed: -6 00:06:58.885 Read completed with error (sct=0, sc=8) 00:06:58.885 Read completed with error (sct=0, sc=8) 00:06:58.885 starting I/O failed: -6 00:06:58.885 Read completed with error (sct=0, sc=8) 00:06:58.885 Read completed with error (sct=0, sc=8) 00:06:58.885 starting I/O failed: -6 00:06:58.885 Write completed with error (sct=0, sc=8) 00:06:58.885 Read completed with error (sct=0, sc=8) 00:06:58.885 starting I/O failed: -6 00:06:58.885 Read completed with error (sct=0, sc=8) 00:06:58.885 Write completed with error (sct=0, sc=8) 00:06:58.885 starting I/O failed: -6 00:06:58.885 Read completed with error (sct=0, sc=8) 00:06:58.885 Read completed with error (sct=0, sc=8) 00:06:58.885 starting I/O failed: -6 00:06:58.885 Write completed with error (sct=0, sc=8) 00:06:58.885 starting I/O failed: -6 00:06:58.885 starting I/O failed: -6 00:06:58.885 starting I/O failed: -6 00:06:58.885 starting I/O failed: -6 00:06:58.885 starting I/O failed: -6 00:06:58.885 starting I/O failed: -6 00:06:58.885 starting I/O failed: -6 00:06:58.885 starting I/O failed: -6 00:06:58.885 starting I/O failed: -6 00:06:58.885 Read completed with error (sct=0, sc=8) 00:06:58.885 starting I/O failed: -6 00:06:58.885 Read completed with error 
(sct=0, sc=8) 00:06:58.885 Read completed with error (sct=0, sc=8) 00:06:58.885 Read completed with error (sct=0, sc=8) 00:06:58.885 Read completed with error (sct=0, sc=8) 00:06:58.885 starting I/O failed: -6 00:06:58.885 Read completed with error (sct=0, sc=8) 00:06:58.885 Read completed with error (sct=0, sc=8) 00:06:58.885 Read completed with error (sct=0, sc=8) 00:06:58.885 Write completed with error (sct=0, sc=8) 00:06:58.885 starting I/O failed: -6 00:06:58.885 Read completed with error (sct=0, sc=8) 00:06:58.885 Read completed with error (sct=0, sc=8) 00:06:58.885 Read completed with error (sct=0, sc=8) 00:06:58.885 Write completed with error (sct=0, sc=8) 00:06:58.885 starting I/O failed: -6 00:06:58.885 Read completed with error (sct=0, sc=8) 00:06:58.885 Write completed with error (sct=0, sc=8) 00:06:58.885 Write completed with error (sct=0, sc=8) 00:06:58.885 Read completed with error (sct=0, sc=8) 00:06:58.885 starting I/O failed: -6 00:06:58.885 Read completed with error (sct=0, sc=8) 00:06:58.885 Read completed with error (sct=0, sc=8) 00:06:58.885 Write completed with error (sct=0, sc=8) 00:06:58.885 Write completed with error (sct=0, sc=8) 00:06:58.885 starting I/O failed: -6 00:06:58.885 Write completed with error (sct=0, sc=8) 00:06:58.885 Write completed with error (sct=0, sc=8) 00:06:58.885 Read completed with error (sct=0, sc=8) 00:06:58.885 Read completed with error (sct=0, sc=8) 00:06:58.885 starting I/O failed: -6 00:06:58.885 Write completed with error (sct=0, sc=8) 00:06:58.885 Read completed with error (sct=0, sc=8) 00:06:58.885 Write completed with error (sct=0, sc=8) 00:06:58.885 Read completed with error (sct=0, sc=8) 00:06:58.885 starting I/O failed: -6 00:06:58.885 Read completed with error (sct=0, sc=8) 00:06:58.885 Read completed with error (sct=0, sc=8) 00:06:58.885 Read completed with error (sct=0, sc=8) 00:06:58.885 Read completed with error (sct=0, sc=8) 00:06:58.885 starting I/O failed: -6 00:06:58.885 Write completed with error (sct=0, sc=8) 00:06:58.885 Read completed with error (sct=0, sc=8) 00:06:58.885 Read completed with error (sct=0, sc=8) 00:06:58.885 Write completed with error (sct=0, sc=8) 00:06:58.885 starting I/O failed: -6 00:06:58.885 Read completed with error (sct=0, sc=8) 00:06:58.885 Read completed with error (sct=0, sc=8) 00:06:58.885 Write completed with error (sct=0, sc=8) 00:06:58.885 Write completed with error (sct=0, sc=8) 00:06:58.885 starting I/O failed: -6 00:06:58.885 Read completed with error (sct=0, sc=8) 00:06:58.885 Write completed with error (sct=0, sc=8) 00:06:58.885 [2024-12-13 05:22:58.578284] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f094800d4d0 is same with the state(6) to be set 00:06:58.885 Read completed with error (sct=0, sc=8) 00:06:58.885 Read completed with error (sct=0, sc=8) 00:06:58.885 Read completed with error (sct=0, sc=8) 00:06:58.885 Read completed with error (sct=0, sc=8) 00:06:58.885 Read completed with error (sct=0, sc=8) 00:06:58.885 Write completed with error (sct=0, sc=8) 00:06:58.885 Write completed with error (sct=0, sc=8) 00:06:58.885 Read completed with error (sct=0, sc=8) 00:06:58.885 Write completed with error (sct=0, sc=8) 00:06:58.885 Read completed with error (sct=0, sc=8) 00:06:58.885 Read completed with error (sct=0, sc=8) 00:06:58.885 Read completed with error (sct=0, sc=8) 00:06:58.885 Read completed with error (sct=0, sc=8) 00:06:58.885 Write completed with error (sct=0, sc=8) 00:06:58.885 Read completed with error (sct=0, sc=8) 
00:06:58.885 Read completed with error (sct=0, sc=8) 00:06:58.885 Write completed with error (sct=0, sc=8) 00:06:58.885 Write completed with error (sct=0, sc=8) 00:06:58.885 Read completed with error (sct=0, sc=8) 00:06:58.885 Read completed with error (sct=0, sc=8) 00:06:58.885 Read completed with error (sct=0, sc=8) 00:06:58.885 Write completed with error (sct=0, sc=8) 00:06:58.885 Write completed with error (sct=0, sc=8) 00:06:58.885 Read completed with error (sct=0, sc=8) 00:06:58.885 Read completed with error (sct=0, sc=8) 00:06:58.885 Write completed with error (sct=0, sc=8) 00:06:58.885 Read completed with error (sct=0, sc=8) 00:06:58.885 Write completed with error (sct=0, sc=8) 00:06:58.885 Read completed with error (sct=0, sc=8) 00:06:58.885 Read completed with error (sct=0, sc=8) 00:06:58.885 Read completed with error (sct=0, sc=8) 00:06:58.885 Write completed with error (sct=0, sc=8) 00:06:58.885 Read completed with error (sct=0, sc=8) 00:06:58.885 Read completed with error (sct=0, sc=8) 00:06:58.885 Write completed with error (sct=0, sc=8) 00:06:58.885 Read completed with error (sct=0, sc=8) 00:06:58.885 Read completed with error (sct=0, sc=8) 00:06:58.885 Write completed with error (sct=0, sc=8) 00:06:58.885 Read completed with error (sct=0, sc=8) 00:06:58.885 Read completed with error (sct=0, sc=8) 00:06:58.885 Read completed with error (sct=0, sc=8) 00:06:58.885 Read completed with error (sct=0, sc=8) 00:06:58.885 Read completed with error (sct=0, sc=8) 00:06:58.885 Read completed with error (sct=0, sc=8) 00:06:58.885 Read completed with error (sct=0, sc=8) 00:06:58.885 Read completed with error (sct=0, sc=8) 00:06:58.885 Read completed with error (sct=0, sc=8) 00:06:58.885 Read completed with error (sct=0, sc=8) 00:06:58.885 Read completed with error (sct=0, sc=8) 00:06:58.885 Read completed with error (sct=0, sc=8) 00:06:58.885 Read completed with error (sct=0, sc=8) 00:06:58.885 Read completed with error (sct=0, sc=8) 00:06:58.885 Read completed with error (sct=0, sc=8) 00:06:58.885 Read completed with error (sct=0, sc=8) 00:06:58.885 Read completed with error (sct=0, sc=8) 00:06:59.822 [2024-12-13 05:22:59.552236] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f9260 is same with the state(6) to be set 00:06:59.822 Read completed with error (sct=0, sc=8) 00:06:59.822 Read completed with error (sct=0, sc=8) 00:06:59.822 Read completed with error (sct=0, sc=8) 00:06:59.822 Read completed with error (sct=0, sc=8) 00:06:59.822 Read completed with error (sct=0, sc=8) 00:06:59.822 Write completed with error (sct=0, sc=8) 00:06:59.822 Read completed with error (sct=0, sc=8) 00:06:59.822 Read completed with error (sct=0, sc=8) 00:06:59.822 Read completed with error (sct=0, sc=8) 00:06:59.822 Read completed with error (sct=0, sc=8) 00:06:59.822 Write completed with error (sct=0, sc=8) 00:06:59.822 Read completed with error (sct=0, sc=8) 00:06:59.822 Write completed with error (sct=0, sc=8) 00:06:59.822 Write completed with error (sct=0, sc=8) 00:06:59.822 Read completed with error (sct=0, sc=8) 00:06:59.822 Write completed with error (sct=0, sc=8) 00:06:59.822 Read completed with error (sct=0, sc=8) 00:06:59.822 Read completed with error (sct=0, sc=8) 00:06:59.822 Read completed with error (sct=0, sc=8) 00:06:59.822 Read completed with error (sct=0, sc=8) 00:06:59.822 Read completed with error (sct=0, sc=8) 00:06:59.822 Write completed with error (sct=0, sc=8) 00:06:59.822 Write completed with error (sct=0, sc=8) 00:06:59.822 Write completed 
with error (sct=0, sc=8) 00:06:59.822 Read completed with error (sct=0, sc=8) 00:06:59.822 Write completed with error (sct=0, sc=8) 00:06:59.822 Read completed with error (sct=0, sc=8) 00:06:59.822 Read completed with error (sct=0, sc=8) 00:06:59.822 Write completed with error (sct=0, sc=8) 00:06:59.822 Write completed with error (sct=0, sc=8) 00:06:59.822 Write completed with error (sct=0, sc=8) 00:06:59.822 Write completed with error (sct=0, sc=8) 00:06:59.822 Read completed with error (sct=0, sc=8) 00:06:59.822 Write completed with error (sct=0, sc=8) 00:06:59.822 Write completed with error (sct=0, sc=8) 00:06:59.822 Write completed with error (sct=0, sc=8) 00:06:59.822 Read completed with error (sct=0, sc=8) 00:06:59.822 [2024-12-13 05:22:59.578056] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fbc60 is same with the state(6) to be set 00:06:59.822 Write completed with error (sct=0, sc=8) 00:06:59.822 Read completed with error (sct=0, sc=8) 00:06:59.822 Read completed with error (sct=0, sc=8) 00:06:59.822 Read completed with error (sct=0, sc=8) 00:06:59.822 Read completed with error (sct=0, sc=8) 00:06:59.822 Read completed with error (sct=0, sc=8) 00:06:59.822 Read completed with error (sct=0, sc=8) 00:06:59.822 Read completed with error (sct=0, sc=8) 00:06:59.822 Read completed with error (sct=0, sc=8) 00:06:59.822 Write completed with error (sct=0, sc=8) 00:06:59.822 Read completed with error (sct=0, sc=8) 00:06:59.822 Write completed with error (sct=0, sc=8) 00:06:59.822 Write completed with error (sct=0, sc=8) 00:06:59.822 Read completed with error (sct=0, sc=8) 00:06:59.822 Read completed with error (sct=0, sc=8) 00:06:59.822 Write completed with error (sct=0, sc=8) 00:06:59.822 Read completed with error (sct=0, sc=8) 00:06:59.822 Write completed with error (sct=0, sc=8) 00:06:59.822 Read completed with error (sct=0, sc=8) 00:06:59.822 Read completed with error (sct=0, sc=8) 00:06:59.822 Read completed with error (sct=0, sc=8) 00:06:59.822 Write completed with error (sct=0, sc=8) 00:06:59.822 Read completed with error (sct=0, sc=8) 00:06:59.822 Read completed with error (sct=0, sc=8) 00:06:59.822 Read completed with error (sct=0, sc=8) 00:06:59.822 Read completed with error (sct=0, sc=8) 00:06:59.822 Read completed with error (sct=0, sc=8) 00:06:59.822 Write completed with error (sct=0, sc=8) 00:06:59.822 Read completed with error (sct=0, sc=8) 00:06:59.822 Read completed with error (sct=0, sc=8) 00:06:59.822 Write completed with error (sct=0, sc=8) 00:06:59.822 Read completed with error (sct=0, sc=8) 00:06:59.822 Read completed with error (sct=0, sc=8) 00:06:59.822 Read completed with error (sct=0, sc=8) 00:06:59.823 Write completed with error (sct=0, sc=8) 00:06:59.823 Read completed with error (sct=0, sc=8) 00:06:59.823 [2024-12-13 05:22:59.578430] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8505f0 is same with the state(6) to be set 00:06:59.823 Write completed with error (sct=0, sc=8) 00:06:59.823 Read completed with error (sct=0, sc=8) 00:06:59.823 Write completed with error (sct=0, sc=8) 00:06:59.823 Read completed with error (sct=0, sc=8) 00:06:59.823 Write completed with error (sct=0, sc=8) 00:06:59.823 Write completed with error (sct=0, sc=8) 00:06:59.823 Read completed with error (sct=0, sc=8) 00:06:59.823 Read completed with error (sct=0, sc=8) 00:06:59.823 Write completed with error (sct=0, sc=8) 00:06:59.823 Read completed with error (sct=0, sc=8) 00:06:59.823 Read completed with error 
(sct=0, sc=8) 00:06:59.823 Read completed with error (sct=0, sc=8) 00:06:59.823 Read completed with error (sct=0, sc=8) 00:06:59.823 Read completed with error (sct=0, sc=8) 00:06:59.823 Write completed with error (sct=0, sc=8) 00:06:59.823 Write completed with error (sct=0, sc=8) 00:06:59.823 Read completed with error (sct=0, sc=8) 00:06:59.823 Write completed with error (sct=0, sc=8) 00:06:59.823 Read completed with error (sct=0, sc=8) 00:06:59.823 Read completed with error (sct=0, sc=8) 00:06:59.823 Read completed with error (sct=0, sc=8) 00:06:59.823 Read completed with error (sct=0, sc=8) 00:06:59.823 [2024-12-13 05:22:59.580679] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f094800d060 is same with the state(6) to be set 00:06:59.823 Read completed with error (sct=0, sc=8) 00:06:59.823 Read completed with error (sct=0, sc=8) 00:06:59.823 Read completed with error (sct=0, sc=8) 00:06:59.823 Write completed with error (sct=0, sc=8) 00:06:59.823 Write completed with error (sct=0, sc=8) 00:06:59.823 Read completed with error (sct=0, sc=8) 00:06:59.823 Read completed with error (sct=0, sc=8) 00:06:59.823 Write completed with error (sct=0, sc=8) 00:06:59.823 Read completed with error (sct=0, sc=8) 00:06:59.823 Read completed with error (sct=0, sc=8) 00:06:59.823 Write completed with error (sct=0, sc=8) 00:06:59.823 Read completed with error (sct=0, sc=8) 00:06:59.823 Write completed with error (sct=0, sc=8) 00:06:59.823 Read completed with error (sct=0, sc=8) 00:06:59.823 Read completed with error (sct=0, sc=8) 00:06:59.823 Write completed with error (sct=0, sc=8) 00:06:59.823 Write completed with error (sct=0, sc=8) 00:06:59.823 Write completed with error (sct=0, sc=8) 00:06:59.823 Read completed with error (sct=0, sc=8) 00:06:59.823 Write completed with error (sct=0, sc=8) 00:06:59.823 Write completed with error (sct=0, sc=8) 00:06:59.823 Read completed with error (sct=0, sc=8) 00:06:59.823 [2024-12-13 05:22:59.581485] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f094800d800 is same with the state(6) to be set 00:06:59.823 Initializing NVMe Controllers 00:06:59.823 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:59.823 Controller IO queue size 128, less than required. 00:06:59.823 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:59.823 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:06:59.823 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:06:59.823 Initialization complete. Launching workers. 
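The summary table below reports per-core results for the two I/O workers (lcores 2 and 3, matching the perf core mask 0xC). As a sanity check, the Total row's average latency is simply the IOPS-weighted mean of the per-core rows; recomputing it from the printed values:

  awk 'BEGIN { printf "%.2f\n", (177.75*924560.63 + 166.30*902404.62) / (177.75 + 166.30) }'
  # prints 913850.72, matching the reported 913851.36 up to display rounding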
00:06:59.823 ======================================================== 00:06:59.823 Latency(us) 00:06:59.823 Device Information : IOPS MiB/s Average min max 00:06:59.823 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 177.75 0.09 924560.63 340.86 1006922.62 00:06:59.823 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 166.30 0.08 902404.62 251.87 1009690.30 00:06:59.823 ======================================================== 00:06:59.823 Total : 344.05 0.17 913851.36 251.87 1009690.30 00:06:59.823 00:06:59.823 [2024-12-13 05:22:59.582056] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f9260 (9): Bad file descriptor 00:06:59.823 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:06:59.823 05:22:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.823 05:22:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:06:59.823 05:22:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 137188 00:06:59.823 05:22:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:07:00.081 05:23:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:07:00.081 05:23:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 137188 00:07:00.081 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (137188) - No such process 00:07:00.081 05:23:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 137188 00:07:00.081 05:23:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:07:00.081 05:23:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 137188 00:07:00.081 05:23:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:07:00.081 05:23:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:00.081 05:23:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:07:00.081 05:23:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:00.081 05:23:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 137188 00:07:00.081 05:23:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:07:00.081 05:23:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:00.081 05:23:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:00.081 05:23:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:00.081 05:23:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:00.081 05:23:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:00.081 05:23:00 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:00.340 05:23:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:00.340 05:23:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:00.340 05:23:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:00.340 05:23:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:00.340 [2024-12-13 05:23:00.107600] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:00.340 05:23:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:00.340 05:23:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:00.340 05:23:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:00.340 05:23:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:00.340 05:23:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:00.340 05:23:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=137832 00:07:00.340 05:23:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:07:00.340 05:23:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:00.340 05:23:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 137832 00:07:00.340 05:23:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:00.340 [2024-12-13 05:23:00.190691] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
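With the second, shorter perf run (pid 137832, -t 3 this time) launched, the script falls into its liveness poll: each repeating (( delay++ > 20 )) / kill -0 / sleep 0.5 triplet below is one loop iteration, waiting up to ~10 s of half-second polls for the perf process to exit. The pattern, as a minimal standalone sketch (variable names illustrative):

  perf_pid=137832
  delay=0
  while kill -0 "$perf_pid" 2>/dev/null; do   # kill -0 probes existence; no signal is sent
      (( delay++ > 20 )) && exit 1            # give up after ~20 half-second polls
      sleep 0.5
  done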
00:07:00.907 05:23:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:00.907 05:23:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 137832 00:07:00.907 05:23:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:01.166 05:23:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:01.166 05:23:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 137832 00:07:01.166 05:23:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:01.733 05:23:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:01.733 05:23:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 137832 00:07:01.733 05:23:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:02.299 05:23:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:02.299 05:23:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 137832 00:07:02.299 05:23:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:02.866 05:23:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:02.866 05:23:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 137832 00:07:02.866 05:23:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:03.434 05:23:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:03.434 05:23:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 137832 00:07:03.434 05:23:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:03.434 Initializing NVMe Controllers 00:07:03.434 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:03.434 Controller IO queue size 128, less than required. 00:07:03.434 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:03.434 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:07:03.434 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:07:03.434 Initialization complete. Launching workers. 
00:07:03.434 ======================================================== 00:07:03.434 Latency(us) 00:07:03.434 Device Information : IOPS MiB/s Average min max 00:07:03.434 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1001970.26 1000131.47 1041417.32 00:07:03.434 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005178.60 1000227.28 1041931.50 00:07:03.434 ======================================================== 00:07:03.434 Total : 256.00 0.12 1003574.43 1000131.47 1041931.50 00:07:03.434 00:07:03.693 05:23:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:03.693 05:23:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 137832 00:07:03.693 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (137832) - No such process 00:07:03.693 05:23:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 137832 00:07:03.693 05:23:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:07:03.693 05:23:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:07:03.693 05:23:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:03.693 05:23:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:07:03.693 05:23:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:03.693 05:23:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:07:03.693 05:23:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:03.693 05:23:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:03.693 rmmod nvme_tcp 00:07:03.693 rmmod nvme_fabrics 00:07:03.693 rmmod nvme_keyring 00:07:03.952 05:23:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:03.952 05:23:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:07:03.952 05:23:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:07:03.952 05:23:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 137134 ']' 00:07:03.952 05:23:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 137134 00:07:03.952 05:23:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 137134 ']' 00:07:03.952 05:23:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 137134 00:07:03.952 05:23:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:07:03.952 05:23:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:03.952 05:23:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 137134 00:07:03.952 05:23:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:03.952 05:23:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo 
']' 00:07:03.952 05:23:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 137134' 00:07:03.952 killing process with pid 137134 00:07:03.952 05:23:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 137134 00:07:03.952 05:23:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 137134 00:07:03.952 05:23:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:03.952 05:23:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:03.953 05:23:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:03.953 05:23:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:07:03.953 05:23:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:07:03.953 05:23:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:03.953 05:23:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:07:03.953 05:23:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:03.953 05:23:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:03.953 05:23:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:03.953 05:23:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:03.953 05:23:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:06.490 05:23:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:06.490 00:07:06.490 real 0m16.073s 00:07:06.490 user 0m29.304s 00:07:06.490 sys 0m5.307s 00:07:06.490 05:23:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:06.490 05:23:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:06.490 ************************************ 00:07:06.490 END TEST nvmf_delete_subsystem 00:07:06.490 ************************************ 00:07:06.490 05:23:06 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:06.490 05:23:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:06.490 05:23:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:06.490 05:23:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:06.490 ************************************ 00:07:06.490 START TEST nvmf_host_management 00:07:06.490 ************************************ 00:07:06.490 05:23:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:06.490 * Looking for test storage... 
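Teardown (nvmftestfini) unloads nvme-tcp, nvme-fabrics, and nvme-keyring, kills the target (pid 137134), and then reverses the firewall change without touching unrelated rules: every rule the test inserted carried an 'SPDK_NVMF' comment, so cleanup is a save/filter/restore round-trip. Both halves of that idiom, copied in spirit from the @790/@791 traces above:

  # insert time (the ipts wrapper): tag the rule so it can be found later
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  # teardown (the iptr path): drop every tagged rule in one pass
  iptables-save | grep -v SPDK_NVMF | iptables-restore

After that the namespace is removed, the 16-second TEST banner closes nvmf_delete_subsystem, and the suite moves straight on to nvmf_host_management below.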
00:07:06.490 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:06.490 05:23:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:06.490 05:23:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:07:06.490 05:23:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:06.490 05:23:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:06.490 05:23:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:06.490 05:23:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:06.490 05:23:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:06.490 05:23:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:07:06.491 05:23:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:07:06.491 05:23:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:07:06.491 05:23:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:07:06.491 05:23:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:07:06.491 05:23:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:07:06.491 05:23:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:07:06.491 05:23:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:06.491 05:23:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:07:06.491 05:23:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:07:06.491 05:23:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:06.491 05:23:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:06.491 05:23:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:07:06.491 05:23:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:07:06.491 05:23:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:06.491 05:23:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:07:06.491 05:23:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:07:06.491 05:23:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:07:06.491 05:23:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:07:06.491 05:23:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:06.491 05:23:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:07:06.491 05:23:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:07:06.491 05:23:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:06.491 05:23:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:06.491 05:23:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:07:06.491 05:23:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:06.491 05:23:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:06.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.491 --rc genhtml_branch_coverage=1 00:07:06.491 --rc genhtml_function_coverage=1 00:07:06.491 --rc genhtml_legend=1 00:07:06.491 --rc geninfo_all_blocks=1 00:07:06.491 --rc geninfo_unexecuted_blocks=1 00:07:06.491 00:07:06.491 ' 00:07:06.491 05:23:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:06.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.491 --rc genhtml_branch_coverage=1 00:07:06.491 --rc genhtml_function_coverage=1 00:07:06.491 --rc genhtml_legend=1 00:07:06.491 --rc geninfo_all_blocks=1 00:07:06.491 --rc geninfo_unexecuted_blocks=1 00:07:06.491 00:07:06.491 ' 00:07:06.491 05:23:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:06.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.491 --rc genhtml_branch_coverage=1 00:07:06.491 --rc genhtml_function_coverage=1 00:07:06.491 --rc genhtml_legend=1 00:07:06.491 --rc geninfo_all_blocks=1 00:07:06.491 --rc geninfo_unexecuted_blocks=1 00:07:06.491 00:07:06.491 ' 00:07:06.491 05:23:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:06.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.491 --rc genhtml_branch_coverage=1 00:07:06.491 --rc genhtml_function_coverage=1 00:07:06.491 --rc genhtml_legend=1 00:07:06.491 --rc geninfo_all_blocks=1 00:07:06.491 --rc geninfo_unexecuted_blocks=1 00:07:06.491 00:07:06.491 ' 00:07:06.491 05:23:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:06.491 05:23:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:07:06.491 05:23:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:06.491 05:23:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:06.491 05:23:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:06.491 05:23:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:06.491 05:23:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:06.491 05:23:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:06.491 05:23:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:06.491 05:23:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:06.491 05:23:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:06.491 05:23:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:06.491 05:23:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:07:06.491 05:23:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:07:06.491 05:23:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:06.491 05:23:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:06.491 05:23:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:06.491 05:23:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:06.491 05:23:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:06.491 05:23:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:07:06.491 05:23:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:06.491 05:23:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:06.491 05:23:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:06.491 05:23:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.491 05:23:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.491 05:23:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.491 05:23:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:07:06.491 05:23:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.491 05:23:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:07:06.491 05:23:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:06.491 05:23:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:06.491 05:23:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:06.491 05:23:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:06.491 05:23:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:06.491 05:23:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:07:06.491 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:06.491 05:23:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:06.491 05:23:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:06.491 05:23:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:06.491 05:23:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:06.491 05:23:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:06.491 05:23:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:07:06.491 05:23:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:06.491 05:23:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:06.491 05:23:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:06.491 05:23:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:06.491 05:23:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:06.491 05:23:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:06.491 05:23:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:06.492 05:23:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:06.492 05:23:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:06.492 05:23:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:06.492 05:23:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:07:06.492 05:23:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:13.067 05:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:13.067 05:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:07:13.067 05:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:13.067 05:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:13.068 05:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:13.068 05:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:13.068 05:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:13.068 05:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:07:13.068 05:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:13.068 05:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:07:13.068 05:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local 
-ga e810 00:07:13.068 05:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:07:13.068 05:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:07:13.068 05:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:07:13.068 05:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:07:13.068 05:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:13.068 05:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:13.068 05:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:13.068 05:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:13.068 05:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:13.068 05:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:13.068 05:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:13.068 05:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:13.068 05:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:13.068 05:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:13.068 05:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:13.068 05:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:13.068 05:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:13.068 05:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:13.068 05:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:13.068 05:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:13.068 05:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:13.068 05:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:13.068 05:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:13.068 05:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:13.068 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:13.068 05:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:13.068 05:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:13.068 05:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:13.068 05:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:13.068 05:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:13.068 05:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:13.068 05:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:13.068 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:13.068 05:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:13.068 05:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:13.068 05:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:13.068 05:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:13.068 05:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:13.068 05:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:13.068 05:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:13.068 05:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:13.068 05:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:13.068 05:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:13.068 05:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:13.068 05:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:13.068 05:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:13.068 05:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:13.068 05:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:13.068 05:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:13.068 Found net devices under 0000:af:00.0: cvl_0_0 00:07:13.068 05:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:13.068 05:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:13.068 05:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:13.068 05:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:13.068 05:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:13.068 05:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:13.068 05:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:13.068 05:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:13.068 05:23:11 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:07:13.068 Found net devices under 0000:af:00.1: cvl_0_1 00:07:13.068 05:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:13.068 05:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:13.068 05:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:07:13.068 05:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:13.068 05:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:13.068 05:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:13.068 05:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:13.068 05:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:13.068 05:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:13.068 05:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:13.068 05:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:13.068 05:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:13.068 05:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:13.068 05:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:13.068 05:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:13.068 05:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:13.068 05:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:13.068 05:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:13.068 05:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:13.068 05:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:13.068 05:23:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:13.068 05:23:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:13.068 05:23:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:13.068 05:23:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:13.068 05:23:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:13.068 05:23:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:13.068 05:23:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:13.068 05:23:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:13.068 05:23:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:13.068 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:13.068 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.288 ms 00:07:13.068 00:07:13.068 --- 10.0.0.2 ping statistics --- 00:07:13.068 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:13.068 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:07:13.068 05:23:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:13.068 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:13.068 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:07:13.068 00:07:13.068 --- 10.0.0.1 ping statistics --- 00:07:13.068 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:13.068 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:07:13.068 05:23:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:13.068 05:23:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:07:13.068 05:23:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:13.068 05:23:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:13.069 05:23:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:13.069 05:23:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:13.069 05:23:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:13.069 05:23:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:13.069 05:23:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:13.069 05:23:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:07:13.069 05:23:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:07:13.069 05:23:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:13.069 05:23:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:13.069 05:23:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:13.069 05:23:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:13.069 05:23:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=141982 00:07:13.069 05:23:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 141982 00:07:13.069 05:23:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:13.069 05:23:12 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 141982 ']' 00:07:13.069 05:23:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:13.069 05:23:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:13.069 05:23:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:13.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:13.069 05:23:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:13.069 05:23:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:13.069 [2024-12-13 05:23:12.304323] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:07:13.069 [2024-12-13 05:23:12.304371] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:13.069 [2024-12-13 05:23:12.384477] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:13.069 [2024-12-13 05:23:12.408488] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:13.069 [2024-12-13 05:23:12.408524] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:13.069 [2024-12-13 05:23:12.408532] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:13.069 [2024-12-13 05:23:12.408538] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:13.069 [2024-12-13 05:23:12.408542] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
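Before those reactors came up, nvmf_tcp_init (common.sh@250-291, traced above) split the two ice ports into a point-to-point topology: cvl_0_0 is moved into a private network namespace for the target while cvl_0_1 stays in the root namespace as the initiator side. A minimal standalone sketch of that sequence, assuming the same interface names and 10.0.0.0/24 addressing this run used:

# Target side: isolate cvl_0_0 in its own namespace with the target IP
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Initiator side: cvl_0_1 keeps the initiator IP in the root namespace
ip addr add 10.0.0.1/24 dev cvl_0_1
ip link set cvl_0_1 up

# Open the NVMe/TCP listener port and check reachability in both directions
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

Every target-side process, nvmf_tgt included, is then prefixed with ip netns exec cvl_0_0_ns_spdk (the $NVMF_TARGET_NS_CMD array), which is why the nvmf_tgt command line above starts with it.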
00:07:13.069 [2024-12-13 05:23:12.409912] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:07:13.069 [2024-12-13 05:23:12.410017] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:07:13.069 [2024-12-13 05:23:12.410100] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:07:13.069 [2024-12-13 05:23:12.410101] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:07:13.069 05:23:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:13.069 05:23:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:07:13.069 05:23:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:13.069 05:23:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:13.069 05:23:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:13.069 05:23:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:13.069 05:23:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:13.069 05:23:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.069 05:23:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:13.069 [2024-12-13 05:23:12.545792] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:13.069 05:23:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.069 05:23:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:07:13.069 05:23:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:13.069 05:23:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:13.069 05:23:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:13.069 05:23:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:07:13.069 05:23:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:07:13.069 05:23:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.069 05:23:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:13.069 Malloc0 00:07:13.069 [2024-12-13 05:23:12.627094] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:13.069 05:23:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.069 05:23:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:13.069 05:23:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:13.069 05:23:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:13.069 05:23:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@73 -- # perfpid=142028 00:07:13.069 05:23:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 142028 /var/tmp/bdevperf.sock 00:07:13.069 05:23:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 142028 ']' 00:07:13.069 05:23:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:13.069 05:23:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:13.069 05:23:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:13.069 05:23:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:13.069 05:23:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:13.069 05:23:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:13.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:13.069 05:23:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:13.069 05:23:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:13.069 05:23:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:13.069 05:23:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:13.069 05:23:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:13.069 { 00:07:13.069 "params": { 00:07:13.069 "name": "Nvme$subsystem", 00:07:13.069 "trtype": "$TEST_TRANSPORT", 00:07:13.069 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:13.069 "adrfam": "ipv4", 00:07:13.069 "trsvcid": "$NVMF_PORT", 00:07:13.069 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:13.069 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:13.069 "hdgst": ${hdgst:-false}, 00:07:13.069 "ddgst": ${ddgst:-false} 00:07:13.069 }, 00:07:13.069 "method": "bdev_nvme_attach_controller" 00:07:13.069 } 00:07:13.069 EOF 00:07:13.069 )") 00:07:13.069 05:23:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:13.069 05:23:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:07:13.069 05:23:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:13.069 05:23:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:13.069 "params": { 00:07:13.069 "name": "Nvme0", 00:07:13.069 "trtype": "tcp", 00:07:13.069 "traddr": "10.0.0.2", 00:07:13.069 "adrfam": "ipv4", 00:07:13.069 "trsvcid": "4420", 00:07:13.069 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:13.069 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:13.069 "hdgst": false, 00:07:13.069 "ddgst": false 00:07:13.069 }, 00:07:13.069 "method": "bdev_nvme_attach_controller" 00:07:13.069 }' 00:07:13.069 [2024-12-13 05:23:12.716962] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
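The --json /dev/fd/63 in that bdevperf command line is bash process substitution: gen_nvmf_target_json expands the heredoc template into the resolved Nvme0 block that printf emits, and bdevperf reads it as its startup configuration. The same run can be replayed by hand from a file; this sketch assumes a scratch file name (nvme0.json) and supplies SPDK's standard subsystems envelope around the fragment shown in the trace:

# Hypothetical manual replay of the bdevperf invocation traced above
cat > nvme0.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# 64-deep queue, 64 KiB I/Os, verify workload, 10 s, plus an RPC socket
# so the harness can poll iostat while the job runs
./build/examples/bdevperf -r /var/tmp/bdevperf.sock --json nvme0.json \
    -q 64 -o 65536 -w verify -t 10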
00:07:13.069 [2024-12-13 05:23:12.717006] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142028 ] 00:07:13.069 [2024-12-13 05:23:12.792074] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.069 [2024-12-13 05:23:12.814367] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.069 Running I/O for 10 seconds... 00:07:13.069 05:23:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:13.069 05:23:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:07:13.069 05:23:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:13.069 05:23:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.069 05:23:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:13.069 05:23:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.069 05:23:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:13.069 05:23:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:13.069 05:23:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:13.069 05:23:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:13.070 05:23:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:07:13.070 05:23:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:07:13.070 05:23:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:13.070 05:23:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:13.070 05:23:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:13.070 05:23:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:13.070 05:23:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.070 05:23:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:13.329 05:23:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.329 05:23:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=95 00:07:13.329 05:23:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 95 -ge 100 ']' 00:07:13.329 05:23:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:07:13.589 05:23:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:07:13.589 
05:23:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:13.589 05:23:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:13.589 05:23:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:13.589 05:23:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.589 05:23:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:13.589 05:23:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.589 05:23:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=726 00:07:13.589 05:23:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 726 -ge 100 ']' 00:07:13.589 05:23:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:07:13.589 05:23:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:07:13.589 05:23:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:07:13.589 05:23:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:13.589 05:23:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.589 05:23:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:13.589 [2024-12-13 05:23:13.417722] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea6590 is same with the state(6) to be set 00:07:13.589 [2024-12-13 05:23:13.417778] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea6590 is same with the state(6) to be set 00:07:13.589 [2024-12-13 05:23:13.417786] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea6590 is same with the state(6) to be set 00:07:13.589 [2024-12-13 05:23:13.417797] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea6590 is same with the state(6) to be set 00:07:13.589 [2024-12-13 05:23:13.417803] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea6590 is same with the state(6) to be set 00:07:13.590 [2024-12-13 05:23:13.417809] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea6590 is same with the state(6) to be set 00:07:13.590 05:23:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.590 05:23:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:13.590 05:23:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.590 05:23:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:13.590 [2024-12-13 05:23:13.424106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:106496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.590 [2024-12-13 05:23:13.424138] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:07:13.590 [... the same WRITE print / ABORTED - SQ DELETION (00/08) completion pair repeats for qid:1 cid:1 through cid:63, lba 106624 through 114560 in 128-block steps, len:128, over 05:23:13.424154-05:23:13.425071 ...]
00:07:13.591 [2024-12-13 05:23:13.425165] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:07:13.591 [... the same ASYNC EVENT REQUEST print / ABORTED - SQ DELETION (00/08) completion pair repeats for qid:0 cid:0 through cid:3 ...]
00:07:13.591 [2024-12-13 05:23:13.425224] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1febd40 is same with the state(6) to be set
00:07:13.591 [2024-12-13 05:23:13.426087] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:07:13.591 task offset: 106496 on job bdev=Nvme0n1 fails
00:07:13.591
00:07:13.591 Latency(us)
00:07:13.591 [2024-12-13T04:23:13.606Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:13.591 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:07:13.591 Job: Nvme0n1 ended in about 0.41 seconds with error
00:07:13.591 Verification LBA range: start 0x0 length 0x400
00:07:13.591 Nvme0n1 : 0.41 2031.36 126.96 156.26 0.00 28476.09 1646.20 26464.06
00:07:13.591 [2024-12-13T04:23:13.606Z] ===================================================================================================================
[2024-12-13T04:23:13.606Z] Total : 2031.36 126.96 156.26 0.00 28476.09 1646.20 26464.06 00:07:13.591 [2024-12-13 05:23:13.428403] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:13.591 [2024-12-13 05:23:13.428424] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1febd40 (9): Bad file descriptor 00:07:13.591 [2024-12-13 05:23:13.429482] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:07:13.591 [2024-12-13 05:23:13.429559] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:07:13.591 [2024-12-13 05:23:13.429580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:13.591 [2024-12-13 05:23:13.429593] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:07:13.591 [2024-12-13 05:23:13.429600] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:07:13.591 [2024-12-13 05:23:13.429607] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:07:13.592 [2024-12-13 05:23:13.429613] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1febd40 00:07:13.592 [2024-12-13 05:23:13.429632] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1febd40 (9): Bad file descriptor 00:07:13.592 [2024-12-13 05:23:13.429643] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:07:13.592 [2024-12-13 05:23:13.429650] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:07:13.592 [2024-12-13 05:23:13.429658] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:07:13.592 [2024-12-13 05:23:13.429665] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
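The CONNECT failures above (sct 1, sc 132: invalid host) are the test exercising the target's host allowlist: once host0 is no longer permitted on cnode0, every reconnect attempt is rejected at FABRIC CONNECT. A minimal sketch of the allowlist RPCs involved (the NQNs come from the log; the rpc.py calls follow standard scripts/rpc.py usage, not a traced command from this run):

    # Hedged sketch of the access-control state behind the CONNECT error above.
    rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0  # CONNECT now fails: sct 1, sc 132
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0     # host admitted again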
00:07:13.592 05:23:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.592 05:23:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:07:14.528 05:23:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 142028 00:07:14.528 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (142028) - No such process 00:07:14.528 05:23:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:07:14.529 05:23:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:07:14.529 05:23:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:07:14.529 05:23:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:07:14.529 05:23:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:14.529 05:23:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:14.529 05:23:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:14.529 05:23:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:14.529 { 00:07:14.529 "params": { 00:07:14.529 "name": "Nvme$subsystem", 00:07:14.529 "trtype": "$TEST_TRANSPORT", 00:07:14.529 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:14.529 "adrfam": "ipv4", 00:07:14.529 "trsvcid": "$NVMF_PORT", 00:07:14.529 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:14.529 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:14.529 "hdgst": ${hdgst:-false}, 00:07:14.529 "ddgst": ${ddgst:-false} 00:07:14.529 }, 00:07:14.529 "method": "bdev_nvme_attach_controller" 00:07:14.529 } 00:07:14.529 EOF 00:07:14.529 )") 00:07:14.529 05:23:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:14.529 05:23:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:07:14.529 05:23:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:14.529 05:23:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:14.529 "params": { 00:07:14.529 "name": "Nvme0", 00:07:14.529 "trtype": "tcp", 00:07:14.529 "traddr": "10.0.0.2", 00:07:14.529 "adrfam": "ipv4", 00:07:14.529 "trsvcid": "4420", 00:07:14.529 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:14.529 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:14.529 "hdgst": false, 00:07:14.529 "ddgst": false 00:07:14.529 }, 00:07:14.529 "method": "bdev_nvme_attach_controller" 00:07:14.529 }' 00:07:14.529 [2024-12-13 05:23:14.486175] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:07:14.529 [2024-12-13 05:23:14.486223] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142464 ] 00:07:14.788 [2024-12-13 05:23:14.560982] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.788 [2024-12-13 05:23:14.582185] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.788 Running I/O for 1 seconds... 00:07:16.184 2048.00 IOPS, 128.00 MiB/s 00:07:16.184 Latency(us) 00:07:16.184 [2024-12-13T04:23:16.199Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:16.184 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:16.184 Verification LBA range: start 0x0 length 0x400 00:07:16.184 Nvme0n1 : 1.01 2093.91 130.87 0.00 0.00 30083.38 5773.41 26464.06 00:07:16.184 [2024-12-13T04:23:16.199Z] =================================================================================================================== 00:07:16.184 [2024-12-13T04:23:16.199Z] Total : 2093.91 130.87 0.00 0.00 30083.38 5773.41 26464.06 00:07:16.184 05:23:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:07:16.184 05:23:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:07:16.184 05:23:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:07:16.184 05:23:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:16.184 05:23:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:07:16.184 05:23:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:16.184 05:23:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:07:16.184 05:23:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:16.184 05:23:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:07:16.184 05:23:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:16.184 05:23:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:16.184 rmmod nvme_tcp 00:07:16.184 rmmod nvme_fabrics 00:07:16.184 rmmod nvme_keyring 00:07:16.184 05:23:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:16.184 05:23:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:07:16.184 05:23:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:07:16.184 05:23:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 141982 ']' 00:07:16.184 05:23:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 141982 00:07:16.184 05:23:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 141982 ']' 00:07:16.184 05:23:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 141982 00:07:16.184 05:23:16 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:07:16.184 05:23:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:16.184 05:23:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 141982 00:07:16.184 05:23:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:16.184 05:23:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:16.184 05:23:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 141982' 00:07:16.184 killing process with pid 141982 00:07:16.184 05:23:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 141982 00:07:16.184 05:23:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 141982 00:07:16.443 [2024-12-13 05:23:16.239611] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:07:16.443 05:23:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:16.443 05:23:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:16.443 05:23:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:16.443 05:23:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:07:16.443 05:23:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:07:16.444 05:23:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:16.444 05:23:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:07:16.444 05:23:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:16.444 05:23:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:16.444 05:23:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:16.444 05:23:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:16.444 05:23:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:18.351 05:23:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:18.351 05:23:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:07:18.351 00:07:18.351 real 0m12.274s 00:07:18.351 user 0m19.269s 00:07:18.351 sys 0m5.486s 00:07:18.351 05:23:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:18.351 05:23:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:18.351 ************************************ 00:07:18.351 END TEST nvmf_host_management 00:07:18.351 ************************************ 00:07:18.611 05:23:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 
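Before nvmf_lvol proceeds, the nvmftestfini teardown traced above for nvmf_host_management reduces to a short sequence. A condensed sketch, not the verbatim nvmf/common.sh; the namespace removal happens inside _remove_spdk_ns, whose body is not traced here, so the ip netns call below is an assumption about its effect:

    # Condensed from the xtrace above: nvmftestfini for the tcp transport.
    sync                                                  # nvmfcleanup
    modprobe -v -r nvme-tcp                               # also pulls out nvme_fabrics and nvme_keyring
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid"                                       # 141982, the nvmf_tgt started by nvmfappstart
    iptables-save | grep -v SPDK_NVMF | iptables-restore  # iptr: drop only the SPDK-tagged rules
    ip netns delete cvl_0_0_ns_spdk                       # assumed effect of _remove_spdk_ns
    ip -4 addr flush cvl_0_1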
00:07:18.611 05:23:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:18.611 05:23:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:18.611 05:23:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:18.611 ************************************ 00:07:18.611 START TEST nvmf_lvol 00:07:18.611 ************************************ 00:07:18.611 05:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:18.611 * Looking for test storage... 00:07:18.611 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:18.611 05:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:18.611 05:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:07:18.611 05:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:18.611 05:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:18.611 05:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:18.611 05:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:18.611 05:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:18.611 05:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:07:18.611 05:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:07:18.611 05:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:07:18.611 05:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:07:18.611 05:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:07:18.611 05:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:07:18.611 05:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:07:18.611 05:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:18.611 05:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:07:18.611 05:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:07:18.611 05:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:18.611 05:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:18.611 05:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:07:18.611 05:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:07:18.611 05:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:18.611 05:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:07:18.611 05:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:07:18.611 05:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:07:18.611 05:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:07:18.611 05:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:18.611 05:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:07:18.611 05:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:07:18.611 05:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:18.611 05:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:18.611 05:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:07:18.611 05:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:18.611 05:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:18.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.611 --rc genhtml_branch_coverage=1 00:07:18.611 --rc genhtml_function_coverage=1 00:07:18.611 --rc genhtml_legend=1 00:07:18.611 --rc geninfo_all_blocks=1 00:07:18.611 --rc geninfo_unexecuted_blocks=1 00:07:18.611 00:07:18.611 ' 00:07:18.611 05:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:18.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.611 --rc genhtml_branch_coverage=1 00:07:18.611 --rc genhtml_function_coverage=1 00:07:18.611 --rc genhtml_legend=1 00:07:18.611 --rc geninfo_all_blocks=1 00:07:18.611 --rc geninfo_unexecuted_blocks=1 00:07:18.611 00:07:18.611 ' 00:07:18.611 05:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:18.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.611 --rc genhtml_branch_coverage=1 00:07:18.611 --rc genhtml_function_coverage=1 00:07:18.611 --rc genhtml_legend=1 00:07:18.611 --rc geninfo_all_blocks=1 00:07:18.611 --rc geninfo_unexecuted_blocks=1 00:07:18.611 00:07:18.611 ' 00:07:18.611 05:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:18.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.611 --rc genhtml_branch_coverage=1 00:07:18.611 --rc genhtml_function_coverage=1 00:07:18.611 --rc genhtml_legend=1 00:07:18.611 --rc geninfo_all_blocks=1 00:07:18.611 --rc geninfo_unexecuted_blocks=1 00:07:18.611 00:07:18.611 ' 00:07:18.611 05:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:18.611 05:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:07:18.611 05:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
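The scripts/common.sh trace above (lt 1.15 2 resolving to cmp_versions 1.15 '<' 2) is the lcov version gate: versions are split on '.', '-' and ':' and compared numerically field by field. A runnable sketch of just the '<' path exercised here, not the verbatim script:

    # Sketch of the cmp_versions flow traced above; only the '<' op is shown.
    lt() { cmp_versions "$1" '<' "$2"; }
    cmp_versions() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"   # "1.15" -> (1 15)
        IFS=.-: read -ra ver2 <<< "$3"   # "2"    -> (2)
        local v max=$((${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}))
        for ((v = 0; v < max; v++)); do
            ((${ver1[v]:-0} < ${ver2[v]:-0})) && return 0   # missing fields count as 0
            ((${ver1[v]:-0} > ${ver2[v]:-0})) && return 1
        done
        return 1   # equal is not strictly less
    }
    lt 1.15 2 && echo 'lcov 1.15 < 2: keep the legacy --rc lcov_* flags'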
00:07:18.611 05:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:18.611 05:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:18.611 05:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:18.611 05:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:18.611 05:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:18.611 05:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:18.611 05:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:18.611 05:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:18.612 05:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:18.612 05:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:07:18.612 05:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:07:18.612 05:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:18.612 05:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:18.612 05:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:18.612 05:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:18.612 05:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:18.612 05:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:07:18.612 05:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:18.612 05:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:18.612 05:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:18.612 05:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.612 05:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.612 05:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.612 05:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:07:18.612 05:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.612 05:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:07:18.612 05:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:18.612 05:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:18.612 05:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:18.612 05:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:18.612 05:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:18.612 05:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:18.612 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:18.612 05:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:18.612 05:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:18.612 05:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:18.889 05:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:18.889 05:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:18.889 05:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:07:18.889 05:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:18.889 05:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:18.889 05:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:18.889 05:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:18.889 05:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:18.889 05:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:18.889 05:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:18.889 05:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:18.889 05:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:18.889 05:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:18.889 05:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:18.889 05:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:18.889 05:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:18.889 05:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:07:18.889 05:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:25.464 05:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:25.464 05:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:07:25.464 05:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:25.464 05:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:25.464 05:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:25.464 05:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:25.464 05:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:25.464 05:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:07:25.464 05:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:25.464 05:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:07:25.464 05:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:07:25.464 05:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:07:25.464 05:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:07:25.464 05:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:07:25.464 05:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:07:25.464 05:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:25.464 05:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:25.464 05:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:25.464 05:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:25.464 05:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:25.464 05:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:25.464 05:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:25.464 05:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:25.464 05:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:25.464 05:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:25.464 05:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:25.464 05:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:25.464 05:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:25.464 05:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:25.464 05:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:25.464 05:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:25.464 05:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:25.464 05:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:25.464 05:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:25.464 05:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:25.464 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:25.464 05:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:25.464 05:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:25.464 05:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:25.464 05:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:25.464 05:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:25.464 05:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:25.464 05:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:25.464 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:25.464 05:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:25.464 05:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:25.464 05:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:25.464 05:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:25.464 05:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:25.464 05:23:24 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:25.464 05:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:25.464 05:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:25.464 05:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:25.464 05:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:25.464 05:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:25.464 05:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:25.464 05:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:25.464 05:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:25.464 05:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:25.464 05:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:25.464 Found net devices under 0000:af:00.0: cvl_0_0 00:07:25.464 05:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:25.464 05:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:25.464 05:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:25.464 05:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:25.464 05:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:25.464 05:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:25.464 05:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:25.464 05:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:25.464 05:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:07:25.464 Found net devices under 0000:af:00.1: cvl_0_1 00:07:25.464 05:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:25.464 05:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:25.464 05:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:07:25.464 05:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:25.464 05:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:25.464 05:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:25.464 05:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:25.464 05:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:25.464 05:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:25.464 05:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:25.464 05:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:07:25.464 05:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:25.464 05:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:25.464 05:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:25.464 05:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:25.464 05:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:25.464 05:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:25.464 05:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:25.464 05:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:25.464 05:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:25.464 05:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:25.464 05:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:25.464 05:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:25.464 05:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:25.464 05:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:25.464 05:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:25.464 05:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:25.464 05:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:25.464 05:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:25.464 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:25.464 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.364 ms 00:07:25.464 00:07:25.464 --- 10.0.0.2 ping statistics --- 00:07:25.464 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:25.464 rtt min/avg/max/mdev = 0.364/0.364/0.364/0.000 ms 00:07:25.464 05:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:25.464 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:25.464 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms 00:07:25.464 00:07:25.464 --- 10.0.0.1 ping statistics --- 00:07:25.464 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:25.464 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:07:25.464 05:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:25.464 05:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:07:25.465 05:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:25.465 05:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:25.465 05:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:25.465 05:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:25.465 05:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:25.465 05:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:25.465 05:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:25.465 05:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:07:25.465 05:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:25.465 05:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:25.465 05:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:25.465 05:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=146190 00:07:25.465 05:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 146190 00:07:25.465 05:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:07:25.465 05:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 146190 ']' 00:07:25.465 05:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:25.465 05:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:25.465 05:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:25.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:25.465 05:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:25.465 05:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:25.465 [2024-12-13 05:23:24.646741] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:07:25.465 [2024-12-13 05:23:24.646786] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:25.465 [2024-12-13 05:23:24.724058] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:25.465 [2024-12-13 05:23:24.746532] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:25.465 [2024-12-13 05:23:24.746568] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:25.465 [2024-12-13 05:23:24.746574] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:25.465 [2024-12-13 05:23:24.746580] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:25.465 [2024-12-13 05:23:24.746585] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:25.465 [2024-12-13 05:23:24.747739] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:07:25.465 [2024-12-13 05:23:24.747848] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.465 [2024-12-13 05:23:24.747850] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:07:25.465 05:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:25.465 05:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:07:25.465 05:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:25.465 05:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:25.465 05:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:25.465 05:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:25.465 05:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:25.465 [2024-12-13 05:23:25.044302] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:25.465 05:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:25.465 05:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:07:25.465 05:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:25.723 05:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:07:25.723 05:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:07:25.723 05:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:07:25.982 05:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=223bfeb6-2b79-43fc-810f-8608c866a868 00:07:25.982 05:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 223bfeb6-2b79-43fc-810f-8608c866a868 lvol 20 00:07:26.240 05:23:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=2a0c40a4-725c-44b9-8c65-5415da403cb4 00:07:26.240 05:23:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:26.499 05:23:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 2a0c40a4-725c-44b9-8c65-5415da403cb4 00:07:26.758 05:23:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:26.758 [2024-12-13 05:23:26.701525] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:26.758 05:23:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:27.016 05:23:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=146666 00:07:27.016 05:23:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:07:27.016 05:23:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:07:27.952 05:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 2a0c40a4-725c-44b9-8c65-5415da403cb4 MY_SNAPSHOT 00:07:28.211 05:23:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=e9515491-a699-4074-923a-9697f9b3ac36 00:07:28.211 05:23:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 2a0c40a4-725c-44b9-8c65-5415da403cb4 30 00:07:28.470 05:23:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone e9515491-a699-4074-923a-9697f9b3ac36 MY_CLONE 00:07:28.729 05:23:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=6aad395c-d1b2-4ed1-8122-7150a78fdc87 00:07:28.729 05:23:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 6aad395c-d1b2-4ed1-8122-7150a78fdc87 00:07:29.297 05:23:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 146666 00:07:37.416 Initializing NVMe Controllers 00:07:37.416 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:37.416 Controller IO queue size 128, less than required. 00:07:37.416 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:07:37.416 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:07:37.416 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:07:37.416 Initialization complete. Launching workers. 00:07:37.416 ======================================================== 00:07:37.416 Latency(us) 00:07:37.416 Device Information : IOPS MiB/s Average min max 00:07:37.416 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 11855.00 46.31 10799.94 1273.17 110155.14 00:07:37.416 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 11804.50 46.11 10843.11 3702.44 42784.65 00:07:37.416 ======================================================== 00:07:37.416 Total : 23659.50 92.42 10821.48 1273.17 110155.14 00:07:37.416 00:07:37.416 05:23:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:37.675 05:23:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 2a0c40a4-725c-44b9-8c65-5415da403cb4 00:07:37.934 05:23:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 223bfeb6-2b79-43fc-810f-8608c866a868 00:07:38.194 05:23:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:07:38.194 05:23:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:07:38.194 05:23:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:07:38.194 05:23:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:38.194 05:23:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:07:38.194 05:23:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:38.194 05:23:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:07:38.194 05:23:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:38.194 05:23:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:38.194 rmmod nvme_tcp 00:07:38.194 rmmod nvme_fabrics 00:07:38.194 rmmod nvme_keyring 00:07:38.194 05:23:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:38.194 05:23:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:07:38.194 05:23:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:07:38.194 05:23:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 146190 ']' 00:07:38.194 05:23:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 146190 00:07:38.194 05:23:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 146190 ']' 00:07:38.194 05:23:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 146190 00:07:38.194 05:23:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:07:38.194 05:23:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:38.194 05:23:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 146190 00:07:38.194 05:23:38 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:38.194 05:23:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:38.194 05:23:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 146190' 00:07:38.194 killing process with pid 146190 00:07:38.194 05:23:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 146190 00:07:38.194 05:23:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 146190 00:07:38.454 05:23:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:38.454 05:23:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:38.454 05:23:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:38.454 05:23:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:07:38.454 05:23:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:07:38.454 05:23:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:38.454 05:23:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:07:38.454 05:23:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:38.454 05:23:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:38.454 05:23:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:38.454 05:23:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:38.454 05:23:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:40.363 05:23:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:40.363 00:07:40.363 real 0m21.947s 00:07:40.363 user 1m3.173s 00:07:40.363 sys 0m7.751s 00:07:40.363 05:23:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:40.363 05:23:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:40.363 ************************************ 00:07:40.363 END TEST nvmf_lvol 00:07:40.363 ************************************ 00:07:40.624 05:23:40 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:40.624 05:23:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:40.624 05:23:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:40.624 05:23:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:40.624 ************************************ 00:07:40.624 START TEST nvmf_lvs_grow 00:07:40.624 ************************************ 00:07:40.624 05:23:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:40.624 * Looking for test storage... 
00:07:40.624 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:40.624 05:23:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:40.624 05:23:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:07:40.624 05:23:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:40.624 05:23:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:40.624 05:23:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:40.624 05:23:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:40.624 05:23:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:40.624 05:23:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:07:40.624 05:23:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:07:40.624 05:23:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:07:40.624 05:23:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:07:40.624 05:23:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:07:40.624 05:23:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:07:40.624 05:23:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:07:40.624 05:23:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:40.624 05:23:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:07:40.624 05:23:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:07:40.624 05:23:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:40.624 05:23:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:40.624 05:23:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:07:40.624 05:23:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:07:40.624 05:23:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:40.624 05:23:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:07:40.624 05:23:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:07:40.624 05:23:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:07:40.624 05:23:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:07:40.624 05:23:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:40.624 05:23:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:07:40.624 05:23:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:07:40.624 05:23:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:40.624 05:23:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:40.624 05:23:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:07:40.624 05:23:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:40.624 05:23:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:40.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.624 --rc genhtml_branch_coverage=1 00:07:40.624 --rc genhtml_function_coverage=1 00:07:40.624 --rc genhtml_legend=1 00:07:40.624 --rc geninfo_all_blocks=1 00:07:40.624 --rc geninfo_unexecuted_blocks=1 00:07:40.624 00:07:40.624 ' 00:07:40.624 05:23:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:40.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.624 --rc genhtml_branch_coverage=1 00:07:40.624 --rc genhtml_function_coverage=1 00:07:40.624 --rc genhtml_legend=1 00:07:40.624 --rc geninfo_all_blocks=1 00:07:40.624 --rc geninfo_unexecuted_blocks=1 00:07:40.624 00:07:40.624 ' 00:07:40.624 05:23:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:40.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.624 --rc genhtml_branch_coverage=1 00:07:40.624 --rc genhtml_function_coverage=1 00:07:40.624 --rc genhtml_legend=1 00:07:40.624 --rc geninfo_all_blocks=1 00:07:40.624 --rc geninfo_unexecuted_blocks=1 00:07:40.624 00:07:40.624 ' 00:07:40.624 05:23:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:40.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.624 --rc genhtml_branch_coverage=1 00:07:40.624 --rc genhtml_function_coverage=1 00:07:40.624 --rc genhtml_legend=1 00:07:40.624 --rc geninfo_all_blocks=1 00:07:40.624 --rc geninfo_unexecuted_blocks=1 00:07:40.624 00:07:40.624 ' 00:07:40.624 05:23:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:40.624 05:23:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:07:40.624 05:23:40 
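
The version gymnastics traced above (scripts/common.sh cmp_versions) decide whether the installed lcov predates 2.0 and therefore needs the legacy --rc lcov_* option spelling. A condensed standalone sketch of the same field-by-field compare, assuming bash 4+ (version_lt is an illustrative name, not the SPDK helper itself):

    version_lt() {
        local -a a b
        local IFS=.-: i
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            # first differing field decides; a missing field counts as 0
            ((${a[i]:-0} < ${b[i]:-0})) && return 0
            ((${a[i]:-0} > ${b[i]:-0})) && return 1
        done
        return 1 # equal versions are not "less than"
    }
    version_lt 1.15 2 && echo "lcov < 2: use --rc lcov_branch_coverage=1 ..."
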
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:40.624 05:23:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:40.624 05:23:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:40.624 05:23:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:40.624 05:23:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:40.624 05:23:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:40.624 05:23:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:40.624 05:23:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:40.624 05:23:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:40.624 05:23:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:40.624 05:23:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:07:40.624 05:23:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:07:40.624 05:23:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:40.624 05:23:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:40.624 05:23:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:40.624 05:23:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:40.624 05:23:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:40.624 05:23:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:07:40.884 05:23:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:40.884 05:23:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:40.884 05:23:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:40.884 05:23:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.884 05:23:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.884 05:23:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.884 05:23:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:07:40.884 05:23:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.884 05:23:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:07:40.884 05:23:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:40.884 05:23:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:40.884 05:23:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:40.884 05:23:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:40.884 05:23:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:40.884 05:23:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:40.884 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:40.884 05:23:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:40.884 05:23:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:40.884 05:23:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:40.884 05:23:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:40.884 05:23:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:40.884 05:23:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:07:40.884 05:23:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:40.884 05:23:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:40.884 05:23:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:40.884 05:23:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:40.884 05:23:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:40.884 05:23:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:40.884 05:23:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:40.884 05:23:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:40.884 05:23:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:40.884 05:23:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:40.884 05:23:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:07:40.884 05:23:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:47.464 05:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:47.464 05:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:07:47.464 05:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:47.464 05:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:47.464 05:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:47.464 05:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:47.464 05:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:47.464 05:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:07:47.464 05:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:47.464 05:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:07:47.464 05:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:07:47.464 05:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:07:47.464 05:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:07:47.464 05:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:07:47.464 05:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:07:47.464 05:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:47.464 05:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:47.464 05:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:47.464 05:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:47.464 05:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:47.464 05:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:47.464 05:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:47.464 05:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:47.464 05:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:47.464 05:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:47.464 05:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:47.464 05:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:47.464 05:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:47.464 05:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:47.464 05:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:47.464 05:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:47.464 05:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:47.464 05:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:47.464 05:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:47.464 05:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:47.464 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:47.464 05:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:47.464 05:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:47.464 05:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:47.464 05:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:47.464 05:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:47.464 05:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:47.464 05:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:47.464 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:47.464 05:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:47.464 05:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:47.464 05:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:47.464 05:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:47.464 05:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:47.464 05:23:46 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:47.464 05:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:47.465 05:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:47.465 05:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:47.465 05:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:47.465 05:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:47.465 05:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:47.465 05:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:47.465 05:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:47.465 05:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:47.465 05:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:47.465 Found net devices under 0000:af:00.0: cvl_0_0 00:07:47.465 05:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:47.465 05:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:47.465 05:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:47.465 05:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:47.465 05:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:47.465 05:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:47.465 05:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:47.465 05:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:47.465 05:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:07:47.465 Found net devices under 0000:af:00.1: cvl_0_1 00:07:47.465 05:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:47.465 05:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:47.465 05:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:07:47.465 05:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:47.465 05:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:47.465 05:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:47.465 05:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:47.465 05:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:47.465 05:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:47.465 05:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # 
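
The discovery pass above boils down to a sysfs walk: every PCI function whose vendor/device ID is on the supported list (here the two Intel E810 ports, 0x8086:0x159b) is checked for a bound kernel netdev under /sys/bus/pci/devices/<addr>/net/. A rough equivalent of that resolution step, assuming a Linux sysfs layout and that the up/down test reads operstate (the exact attribute is not visible in the trace):

    for pci in 0000:af:00.0 0000:af:00.1; do      # addresses from this run
        for net in /sys/bus/pci/devices/"$pci"/net/*; do
            [[ -e $net ]] || continue             # no netdev bound to this function
            [[ $(<"$net/operstate") == up ]] || continue   # assumed up-check
            echo "Found net devices under $pci: ${net##*/}"
        done
    done
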
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:47.465 05:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:47.465 05:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:47.465 05:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:47.465 05:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:47.465 05:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:47.465 05:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:47.465 05:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:47.465 05:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:47.465 05:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:47.465 05:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:47.465 05:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:47.465 05:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:47.465 05:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:47.465 05:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:47.465 05:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:47.465 05:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:47.465 05:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:47.465 05:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:47.465 05:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:47.465 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:47.465 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.322 ms 00:07:47.465 00:07:47.465 --- 10.0.0.2 ping statistics --- 00:07:47.465 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:47.465 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:07:47.465 05:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:47.465 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:47.465 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.170 ms 00:07:47.465 00:07:47.465 --- 10.0.0.1 ping statistics --- 00:07:47.465 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:47.465 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:07:47.465 05:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:47.465 05:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:07:47.465 05:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:47.465 05:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:47.465 05:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:47.465 05:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:47.465 05:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:47.465 05:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:47.465 05:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:47.465 05:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:07:47.465 05:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:47.465 05:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:47.465 05:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:47.465 05:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=151947 00:07:47.465 05:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 151947 00:07:47.465 05:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:47.465 05:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 151947 ']' 00:07:47.465 05:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:47.465 05:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:47.465 05:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:47.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:47.465 05:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:47.465 05:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:47.465 [2024-12-13 05:23:46.672206] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
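
Everything nvmf_tcp_init did above fits in a few iproute2 commands: the target port cvl_0_0 moves into its own namespace with 10.0.0.2/24, the initiator port cvl_0_1 stays in the root namespace with 10.0.0.1/24, the NVMe/TCP port is opened, and a ping in each direction proves the path. Condensed from the trace (run as root):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # listener port
    ping -c 1 10.0.0.2                                                 # root ns -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target -> root ns
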
00:07:47.465 [2024-12-13 05:23:46.672251] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:47.465 [2024-12-13 05:23:46.749660] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.465 [2024-12-13 05:23:46.770500] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:47.465 [2024-12-13 05:23:46.770533] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:47.465 [2024-12-13 05:23:46.770540] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:47.465 [2024-12-13 05:23:46.770546] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:47.465 [2024-12-13 05:23:46.770551] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:47.465 [2024-12-13 05:23:46.771047] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.465 05:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:47.465 05:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:07:47.465 05:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:47.465 05:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:47.465 05:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:47.465 05:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:47.465 05:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:47.465 [2024-12-13 05:23:47.077963] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:47.466 05:23:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:07:47.466 05:23:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:47.466 05:23:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:47.466 05:23:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:47.466 ************************************ 00:07:47.466 START TEST lvs_grow_clean 00:07:47.466 ************************************ 00:07:47.466 05:23:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:07:47.466 05:23:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:47.466 05:23:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:47.466 05:23:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:47.466 05:23:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:47.466 05:23:47 
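
nvmfappstart then reduces to: launch nvmf_tgt inside the target namespace, wait for its RPC socket, and create the TCP transport. Roughly, with repo-relative paths (waitforlisten is the autotest helper that polls /var/tmp/spdk.sock):

    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!
    waitforlisten "$nvmfpid"
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192   # same flags as the trace
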
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:47.466 05:23:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:47.466 05:23:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:47.466 05:23:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:47.466 05:23:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:47.466 05:23:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:47.466 05:23:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:47.725 05:23:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=e273429e-8ae2-47f2-9449-889224bc760c 00:07:47.725 05:23:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e273429e-8ae2-47f2-9449-889224bc760c 00:07:47.725 05:23:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:47.984 05:23:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:47.984 05:23:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:47.984 05:23:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u e273429e-8ae2-47f2-9449-889224bc760c lvol 150 00:07:47.984 05:23:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=56d27f37-5e04-4c2c-95e5-a28a3e5e01d4 00:07:47.984 05:23:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:47.984 05:23:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:48.244 [2024-12-13 05:23:48.167456] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:48.244 [2024-12-13 05:23:48.167508] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:48.244 true 00:07:48.244 05:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
e273429e-8ae2-47f2-9449-889224bc760c 00:07:48.244 05:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:48.503 05:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:48.503 05:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:48.762 05:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 56d27f37-5e04-4c2c-95e5-a28a3e5e01d4 00:07:48.762 05:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:49.022 [2024-12-13 05:23:48.913710] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:49.022 05:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:49.282 05:23:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:49.282 05:23:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=152437 00:07:49.282 05:23:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:49.282 05:23:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 152437 /var/tmp/bdevperf.sock 00:07:49.282 05:23:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 152437 ']' 00:07:49.282 05:23:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:49.282 05:23:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:49.282 05:23:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:49.282 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:49.282 05:23:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:49.282 05:23:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:49.282 [2024-12-13 05:23:49.128860] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
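
That completes the clean-grow fixture. Pulled out of the trace, the setup is (rpc.py invoked repo-relative; $lvs and $lvol capture the UUIDs the RPCs print, e273429e-... and 56d27f37-... in this run):

    rpc=scripts/rpc.py
    aio=test/nvmf/target/aio_bdev                       # backing file
    truncate -s 200M "$aio"
    $rpc bdev_aio_create "$aio" aio_bdev 4096           # 4 KiB blocks
    lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 \
          --md-pages-per-cluster-ratio 300 aio_bdev lvs)    # 49 data clusters
    lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 150)    # 150 MiB volume
    truncate -s 400M "$aio"                             # grow the file under the bdev
    $rpc bdev_aio_rescan aio_bdev                       # 51200 -> 102400 blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
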
00:07:49.282 [2024-12-13 05:23:49.128907] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid152437 ] 00:07:49.282 [2024-12-13 05:23:49.202522] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.282 [2024-12-13 05:23:49.225289] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:07:49.541 05:23:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:49.541 05:23:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:07:49.541 05:23:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:49.801 Nvme0n1 00:07:49.801 05:23:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:50.061 [ 00:07:50.061 { 00:07:50.061 "name": "Nvme0n1", 00:07:50.061 "aliases": [ 00:07:50.061 "56d27f37-5e04-4c2c-95e5-a28a3e5e01d4" 00:07:50.061 ], 00:07:50.061 "product_name": "NVMe disk", 00:07:50.061 "block_size": 4096, 00:07:50.061 "num_blocks": 38912, 00:07:50.061 "uuid": "56d27f37-5e04-4c2c-95e5-a28a3e5e01d4", 00:07:50.061 "numa_id": 1, 00:07:50.061 "assigned_rate_limits": { 00:07:50.061 "rw_ios_per_sec": 0, 00:07:50.061 "rw_mbytes_per_sec": 0, 00:07:50.061 "r_mbytes_per_sec": 0, 00:07:50.061 "w_mbytes_per_sec": 0 00:07:50.061 }, 00:07:50.061 "claimed": false, 00:07:50.061 "zoned": false, 00:07:50.061 "supported_io_types": { 00:07:50.061 "read": true, 00:07:50.061 "write": true, 00:07:50.061 "unmap": true, 00:07:50.061 "flush": true, 00:07:50.061 "reset": true, 00:07:50.061 "nvme_admin": true, 00:07:50.061 "nvme_io": true, 00:07:50.061 "nvme_io_md": false, 00:07:50.061 "write_zeroes": true, 00:07:50.061 "zcopy": false, 00:07:50.061 "get_zone_info": false, 00:07:50.061 "zone_management": false, 00:07:50.061 "zone_append": false, 00:07:50.061 "compare": true, 00:07:50.061 "compare_and_write": true, 00:07:50.061 "abort": true, 00:07:50.061 "seek_hole": false, 00:07:50.061 "seek_data": false, 00:07:50.061 "copy": true, 00:07:50.061 "nvme_iov_md": false 00:07:50.061 }, 00:07:50.061 "memory_domains": [ 00:07:50.061 { 00:07:50.061 "dma_device_id": "system", 00:07:50.061 "dma_device_type": 1 00:07:50.061 } 00:07:50.061 ], 00:07:50.061 "driver_specific": { 00:07:50.061 "nvme": [ 00:07:50.061 { 00:07:50.061 "trid": { 00:07:50.061 "trtype": "TCP", 00:07:50.061 "adrfam": "IPv4", 00:07:50.061 "traddr": "10.0.0.2", 00:07:50.061 "trsvcid": "4420", 00:07:50.061 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:50.061 }, 00:07:50.061 "ctrlr_data": { 00:07:50.061 "cntlid": 1, 00:07:50.061 "vendor_id": "0x8086", 00:07:50.061 "model_number": "SPDK bdev Controller", 00:07:50.061 "serial_number": "SPDK0", 00:07:50.061 "firmware_revision": "25.01", 00:07:50.061 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:50.061 "oacs": { 00:07:50.061 "security": 0, 00:07:50.061 "format": 0, 00:07:50.061 "firmware": 0, 00:07:50.061 "ns_manage": 0 00:07:50.061 }, 00:07:50.061 "multi_ctrlr": true, 00:07:50.061 
"ana_reporting": false 00:07:50.061 }, 00:07:50.061 "vs": { 00:07:50.061 "nvme_version": "1.3" 00:07:50.061 }, 00:07:50.061 "ns_data": { 00:07:50.061 "id": 1, 00:07:50.061 "can_share": true 00:07:50.061 } 00:07:50.061 } 00:07:50.061 ], 00:07:50.061 "mp_policy": "active_passive" 00:07:50.061 } 00:07:50.061 } 00:07:50.061 ] 00:07:50.061 05:23:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=152548 00:07:50.062 05:23:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:50.062 05:23:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:50.062 Running I/O for 10 seconds... 00:07:51.000 Latency(us) 00:07:51.000 [2024-12-13T04:23:51.015Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:51.000 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:51.000 Nvme0n1 : 1.00 23615.00 92.25 0.00 0.00 0.00 0.00 0.00 00:07:51.000 [2024-12-13T04:23:51.015Z] =================================================================================================================== 00:07:51.000 [2024-12-13T04:23:51.015Z] Total : 23615.00 92.25 0.00 0.00 0.00 0.00 0.00 00:07:51.000 00:07:51.937 05:23:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u e273429e-8ae2-47f2-9449-889224bc760c 00:07:52.197 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:52.197 Nvme0n1 : 2.00 23738.00 92.73 0.00 0.00 0.00 0.00 0.00 00:07:52.197 [2024-12-13T04:23:52.212Z] =================================================================================================================== 00:07:52.197 [2024-12-13T04:23:52.212Z] Total : 23738.00 92.73 0.00 0.00 0.00 0.00 0.00 00:07:52.197 00:07:52.197 true 00:07:52.197 05:23:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e273429e-8ae2-47f2-9449-889224bc760c 00:07:52.197 05:23:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:52.456 05:23:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:52.456 05:23:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:52.456 05:23:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 152548 00:07:53.024 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:53.024 Nvme0n1 : 3.00 23818.00 93.04 0.00 0.00 0.00 0.00 0.00 00:07:53.024 [2024-12-13T04:23:53.039Z] =================================================================================================================== 00:07:53.024 [2024-12-13T04:23:53.039Z] Total : 23818.00 93.04 0.00 0.00 0.00 0.00 0.00 00:07:53.024 00:07:53.961 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:53.961 Nvme0n1 : 4.00 23901.50 93.37 0.00 0.00 0.00 0.00 0.00 00:07:53.961 [2024-12-13T04:23:53.976Z] 
=================================================================================================================== 00:07:53.961 [2024-12-13T04:23:53.976Z] Total : 23901.50 93.37 0.00 0.00 0.00 0.00 0.00 00:07:53.961 00:07:55.340 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:55.340 Nvme0n1 : 5.00 23951.40 93.56 0.00 0.00 0.00 0.00 0.00 00:07:55.340 [2024-12-13T04:23:55.355Z] =================================================================================================================== 00:07:55.340 [2024-12-13T04:23:55.355Z] Total : 23951.40 93.56 0.00 0.00 0.00 0.00 0.00 00:07:55.340 00:07:56.277 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:56.277 Nvme0n1 : 6.00 23978.17 93.66 0.00 0.00 0.00 0.00 0.00 00:07:56.277 [2024-12-13T04:23:56.292Z] =================================================================================================================== 00:07:56.277 [2024-12-13T04:23:56.292Z] Total : 23978.17 93.66 0.00 0.00 0.00 0.00 0.00 00:07:56.277 00:07:57.215 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:57.215 Nvme0n1 : 7.00 24019.43 93.83 0.00 0.00 0.00 0.00 0.00 00:07:57.215 [2024-12-13T04:23:57.230Z] =================================================================================================================== 00:07:57.215 [2024-12-13T04:23:57.230Z] Total : 24019.43 93.83 0.00 0.00 0.00 0.00 0.00 00:07:57.215 00:07:58.152 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:58.152 Nvme0n1 : 8.00 24051.12 93.95 0.00 0.00 0.00 0.00 0.00 00:07:58.152 [2024-12-13T04:23:58.167Z] =================================================================================================================== 00:07:58.152 [2024-12-13T04:23:58.167Z] Total : 24051.12 93.95 0.00 0.00 0.00 0.00 0.00 00:07:58.152 00:07:59.091 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:59.091 Nvme0n1 : 9.00 24074.00 94.04 0.00 0.00 0.00 0.00 0.00 00:07:59.091 [2024-12-13T04:23:59.106Z] =================================================================================================================== 00:07:59.091 [2024-12-13T04:23:59.106Z] Total : 24074.00 94.04 0.00 0.00 0.00 0.00 0.00 00:07:59.091 00:08:00.029 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:00.029 Nvme0n1 : 10.00 24073.40 94.04 0.00 0.00 0.00 0.00 0.00 00:08:00.029 [2024-12-13T04:24:00.044Z] =================================================================================================================== 00:08:00.029 [2024-12-13T04:24:00.044Z] Total : 24073.40 94.04 0.00 0.00 0.00 0.00 0.00 00:08:00.029 00:08:00.029 00:08:00.029 Latency(us) 00:08:00.029 [2024-12-13T04:24:00.044Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:00.029 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:00.029 Nvme0n1 : 10.00 24076.26 94.05 0.00 0.00 5313.35 3105.16 10922.67 00:08:00.029 [2024-12-13T04:24:00.044Z] =================================================================================================================== 00:08:00.029 [2024-12-13T04:24:00.044Z] Total : 24076.26 94.05 0.00 0.00 5313.35 3105.16 10922.67 00:08:00.029 { 00:08:00.029 "results": [ 00:08:00.029 { 00:08:00.029 "job": "Nvme0n1", 00:08:00.029 "core_mask": "0x2", 00:08:00.029 "workload": "randwrite", 00:08:00.029 "status": "finished", 00:08:00.029 "queue_depth": 128, 00:08:00.029 "io_size": 4096, 00:08:00.029 
"runtime": 10.00413, 00:08:00.029 "iops": 24076.256506062997, 00:08:00.029 "mibps": 94.04787697680858, 00:08:00.029 "io_failed": 0, 00:08:00.029 "io_timeout": 0, 00:08:00.029 "avg_latency_us": 5313.35345756175, 00:08:00.029 "min_latency_us": 3105.158095238095, 00:08:00.029 "max_latency_us": 10922.666666666666 00:08:00.029 } 00:08:00.029 ], 00:08:00.029 "core_count": 1 00:08:00.029 } 00:08:00.029 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 152437 00:08:00.029 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 152437 ']' 00:08:00.029 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 152437 00:08:00.029 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:08:00.029 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:00.029 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 152437 00:08:00.289 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:00.289 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:00.289 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 152437' 00:08:00.289 killing process with pid 152437 00:08:00.289 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 152437 00:08:00.289 Received shutdown signal, test time was about 10.000000 seconds 00:08:00.289 00:08:00.289 Latency(us) 00:08:00.289 [2024-12-13T04:24:00.304Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:00.289 [2024-12-13T04:24:00.304Z] =================================================================================================================== 00:08:00.289 [2024-12-13T04:24:00.304Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:00.289 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 152437 00:08:00.289 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:00.548 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:00.807 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e273429e-8ae2-47f2-9449-889224bc760c 00:08:00.807 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:01.066 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:01.066 05:24:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:08:01.066 05:24:00 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:01.066 [2024-12-13 05:24:01.027959] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:01.066 05:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e273429e-8ae2-47f2-9449-889224bc760c 00:08:01.066 05:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:08:01.066 05:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e273429e-8ae2-47f2-9449-889224bc760c 00:08:01.066 05:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:01.066 05:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:01.066 05:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:01.066 05:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:01.066 05:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:01.067 05:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:01.067 05:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:01.067 05:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:01.067 05:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e273429e-8ae2-47f2-9449-889224bc760c 00:08:01.326 request: 00:08:01.326 { 00:08:01.326 "uuid": "e273429e-8ae2-47f2-9449-889224bc760c", 00:08:01.326 "method": "bdev_lvol_get_lvstores", 00:08:01.326 "req_id": 1 00:08:01.326 } 00:08:01.326 Got JSON-RPC error response 00:08:01.326 response: 00:08:01.326 { 00:08:01.326 "code": -19, 00:08:01.326 "message": "No such device" 00:08:01.326 } 00:08:01.326 05:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:08:01.326 05:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:01.326 05:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:01.326 05:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:01.326 05:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # 
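
Deleting the AIO bdev out from under the lvstore is the negative check: the lvstore must be hot-removed and the follow-up RPC must fail with -19 / "No such device", exactly the error response shown above. A minimal sketch of that assertion (NOT stands in for the autotest helper of the same name, which is more thorough):

    NOT() { ! "$@"; }                # succeeds only if the command fails
    rpc=scripts/rpc.py
    $rpc bdev_aio_delete aio_bdev    # lvstore 'lvs' disappears with it
    NOT $rpc bdev_lvol_get_lvstores -u e273429e-8ae2-47f2-9449-889224bc760c &&
        echo "lvstore correctly reported missing"
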
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:01.584 aio_bdev 00:08:01.584 05:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 56d27f37-5e04-4c2c-95e5-a28a3e5e01d4 00:08:01.584 05:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=56d27f37-5e04-4c2c-95e5-a28a3e5e01d4 00:08:01.584 05:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:01.584 05:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:08:01.584 05:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:01.584 05:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:01.584 05:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:01.844 05:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 56d27f37-5e04-4c2c-95e5-a28a3e5e01d4 -t 2000 00:08:01.844 [ 00:08:01.844 { 00:08:01.844 "name": "56d27f37-5e04-4c2c-95e5-a28a3e5e01d4", 00:08:01.844 "aliases": [ 00:08:01.844 "lvs/lvol" 00:08:01.844 ], 00:08:01.844 "product_name": "Logical Volume", 00:08:01.844 "block_size": 4096, 00:08:01.844 "num_blocks": 38912, 00:08:01.844 "uuid": "56d27f37-5e04-4c2c-95e5-a28a3e5e01d4", 00:08:01.844 "assigned_rate_limits": { 00:08:01.844 "rw_ios_per_sec": 0, 00:08:01.844 "rw_mbytes_per_sec": 0, 00:08:01.844 "r_mbytes_per_sec": 0, 00:08:01.844 "w_mbytes_per_sec": 0 00:08:01.844 }, 00:08:01.844 "claimed": false, 00:08:01.844 "zoned": false, 00:08:01.844 "supported_io_types": { 00:08:01.844 "read": true, 00:08:01.844 "write": true, 00:08:01.844 "unmap": true, 00:08:01.844 "flush": false, 00:08:01.844 "reset": true, 00:08:01.844 "nvme_admin": false, 00:08:01.844 "nvme_io": false, 00:08:01.844 "nvme_io_md": false, 00:08:01.844 "write_zeroes": true, 00:08:01.844 "zcopy": false, 00:08:01.844 "get_zone_info": false, 00:08:01.844 "zone_management": false, 00:08:01.844 "zone_append": false, 00:08:01.844 "compare": false, 00:08:01.844 "compare_and_write": false, 00:08:01.844 "abort": false, 00:08:01.844 "seek_hole": true, 00:08:01.844 "seek_data": true, 00:08:01.844 "copy": false, 00:08:01.844 "nvme_iov_md": false 00:08:01.844 }, 00:08:01.844 "driver_specific": { 00:08:01.844 "lvol": { 00:08:01.844 "lvol_store_uuid": "e273429e-8ae2-47f2-9449-889224bc760c", 00:08:01.844 "base_bdev": "aio_bdev", 00:08:01.844 "thin_provision": false, 00:08:01.844 "num_allocated_clusters": 38, 00:08:01.844 "snapshot": false, 00:08:01.844 "clone": false, 00:08:01.844 "esnap_clone": false 00:08:01.844 } 00:08:01.844 } 00:08:01.844 } 00:08:01.844 ] 00:08:01.844 05:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:08:01.844 05:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:01.844 05:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # 
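
The bdev JSON above is the recovery proof: recreating the AIO bdev re-reads the lvstore metadata from the backing file, and the lvol returns with its UUID and its 38 allocated clusters intact. The recreate/wait steps, condensed (waitforbdev is the helper that wraps the timed bdev_get_bdevs call):

    rpc=scripts/rpc.py
    $rpc bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096
    $rpc bdev_wait_for_examine                 # let the lvol module scan the new bdev
    $rpc bdev_get_bdevs -b 56d27f37-5e04-4c2c-95e5-a28a3e5e01d4 -t 2000   # up to 2000 ms
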
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e273429e-8ae2-47f2-9449-889224bc760c 00:08:02.103 05:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:02.103 05:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e273429e-8ae2-47f2-9449-889224bc760c 00:08:02.103 05:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:02.362 05:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:02.362 05:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 56d27f37-5e04-4c2c-95e5-a28a3e5e01d4 00:08:02.362 05:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e273429e-8ae2-47f2-9449-889224bc760c 00:08:02.621 05:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:02.880 05:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:02.880 00:08:02.880 real 0m15.640s 00:08:02.880 user 0m15.248s 00:08:02.880 sys 0m1.473s 00:08:02.880 05:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:02.880 05:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:02.880 ************************************ 00:08:02.880 END TEST lvs_grow_clean 00:08:02.880 ************************************ 00:08:02.880 05:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:02.880 05:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:02.880 05:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:02.880 05:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:02.880 ************************************ 00:08:02.880 START TEST lvs_grow_dirty 00:08:02.880 ************************************ 00:08:02.880 05:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:08:02.880 05:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:02.880 05:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:02.880 05:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:02.880 05:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:02.880 05:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:02.880 05:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:02.880 05:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:02.880 05:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:02.880 05:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:03.139 05:24:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:03.139 05:24:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:03.398 05:24:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=d313495b-b8ed-475d-9269-5b230b8764a6 00:08:03.398 05:24:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d313495b-b8ed-475d-9269-5b230b8764a6 00:08:03.398 05:24:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:03.657 05:24:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:03.657 05:24:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:03.657 05:24:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u d313495b-b8ed-475d-9269-5b230b8764a6 lvol 150 00:08:03.657 05:24:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=a099cb5e-2bd1-48bc-af4a-44bb46e710d1 00:08:03.657 05:24:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:03.657 05:24:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:03.916 [2024-12-13 05:24:03.840333] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:03.916 [2024-12-13 05:24:03.840381] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:03.916 true 00:08:03.916 05:24:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d313495b-b8ed-475d-9269-5b230b8764a6 00:08:03.916 05:24:03 
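The run above is the heart of the grow workflow: a 200M sparse file backs an AIO bdev, a logical volume store with 4 MiB clusters is created on top of it (49 data clusters), and a 150 MiB lvol is carved out; the file is then enlarged to 400M and bdev_aio_rescan makes SPDK pick up the new block count. A minimal sketch of the same sequence, assuming an $RPC shorthand for scripts/rpc.py and an illustrative /tmp path (both mine, not the test's):

  # 200 MiB sparse file behind an AIO bdev with 4 KiB blocks
  truncate -s 200M /tmp/aio_file
  $RPC bdev_aio_create /tmp/aio_file aio_bdev 4096
  # lvstore with 4 MiB clusters; rpc.py prints the new store's UUID
  lvs=$($RPC bdev_lvol_create_lvstore --cluster-sz 4194304 aio_bdev lvs)
  # 150 MiB lvol inside the store; rpc.py prints the lvol's UUID
  lvol=$($RPC bdev_lvol_create -u "$lvs" lvol 150)
  # double the backing file and rescan so the AIO bdev grows from 51200 to 102400 blocks
  truncate -s 400M /tmp/aio_file
  $RPC bdev_aio_rescan aio_bdev
  # later, bdev_lvol_grow_lvstore -u "$lvs" extends the store into the new space

After the grow, total_data_clusters should move from 49 to 99, which is what the jq checks in this test assert.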
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:04.175 05:24:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:04.175 05:24:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:04.434 05:24:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a099cb5e-2bd1-48bc-af4a-44bb46e710d1 00:08:04.434 05:24:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:04.692 [2024-12-13 05:24:04.570518] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:04.692 05:24:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:04.951 05:24:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=155141 00:08:04.951 05:24:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:04.951 05:24:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 155141 /var/tmp/bdevperf.sock 00:08:04.951 05:24:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 155141 ']' 00:08:04.951 05:24:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:04.951 05:24:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:04.951 05:24:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:04.951 05:24:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:04.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:04.951 05:24:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:04.951 05:24:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:04.951 [2024-12-13 05:24:04.807373] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
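From here the lvol is exported over NVMe/TCP and a second SPDK process, bdevperf, comes up as the initiator. Stripped of the harness wrappers, the two sides look roughly like this (NQN, address, and bdevperf flags are taken from the run; $RPC, $lvol, and the backgrounding are my shorthand):

  # target side: subsystem, namespace, TCP listener
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  # initiator side: bdevperf starts idle (-z) on its own RPC socket,
  # then an NVMe bdev is attached to it through that socket
  bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
  $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0

Once Nvme0n1 appears, bdevperf.py perform_tests drives ten seconds of 4 KiB random writes, producing the per-second tables below; the lvstore is grown mid-run to show that I/O survives the resize.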
00:08:04.951 [2024-12-13 05:24:04.807420] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid155141 ] 00:08:04.951 [2024-12-13 05:24:04.883713] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.951 [2024-12-13 05:24:04.906197] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:08:05.214 05:24:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:05.214 05:24:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:08:05.214 05:24:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:05.473 Nvme0n1 00:08:05.473 05:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:05.473 [ 00:08:05.473 { 00:08:05.473 "name": "Nvme0n1", 00:08:05.473 "aliases": [ 00:08:05.473 "a099cb5e-2bd1-48bc-af4a-44bb46e710d1" 00:08:05.473 ], 00:08:05.473 "product_name": "NVMe disk", 00:08:05.473 "block_size": 4096, 00:08:05.473 "num_blocks": 38912, 00:08:05.473 "uuid": "a099cb5e-2bd1-48bc-af4a-44bb46e710d1", 00:08:05.473 "numa_id": 1, 00:08:05.473 "assigned_rate_limits": { 00:08:05.473 "rw_ios_per_sec": 0, 00:08:05.473 "rw_mbytes_per_sec": 0, 00:08:05.473 "r_mbytes_per_sec": 0, 00:08:05.473 "w_mbytes_per_sec": 0 00:08:05.473 }, 00:08:05.473 "claimed": false, 00:08:05.473 "zoned": false, 00:08:05.473 "supported_io_types": { 00:08:05.473 "read": true, 00:08:05.473 "write": true, 00:08:05.473 "unmap": true, 00:08:05.473 "flush": true, 00:08:05.473 "reset": true, 00:08:05.473 "nvme_admin": true, 00:08:05.473 "nvme_io": true, 00:08:05.473 "nvme_io_md": false, 00:08:05.473 "write_zeroes": true, 00:08:05.473 "zcopy": false, 00:08:05.473 "get_zone_info": false, 00:08:05.473 "zone_management": false, 00:08:05.473 "zone_append": false, 00:08:05.473 "compare": true, 00:08:05.473 "compare_and_write": true, 00:08:05.473 "abort": true, 00:08:05.473 "seek_hole": false, 00:08:05.473 "seek_data": false, 00:08:05.473 "copy": true, 00:08:05.473 "nvme_iov_md": false 00:08:05.473 }, 00:08:05.473 "memory_domains": [ 00:08:05.473 { 00:08:05.473 "dma_device_id": "system", 00:08:05.473 "dma_device_type": 1 00:08:05.473 } 00:08:05.473 ], 00:08:05.473 "driver_specific": { 00:08:05.473 "nvme": [ 00:08:05.473 { 00:08:05.473 "trid": { 00:08:05.474 "trtype": "TCP", 00:08:05.474 "adrfam": "IPv4", 00:08:05.474 "traddr": "10.0.0.2", 00:08:05.474 "trsvcid": "4420", 00:08:05.474 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:05.474 }, 00:08:05.474 "ctrlr_data": { 00:08:05.474 "cntlid": 1, 00:08:05.474 "vendor_id": "0x8086", 00:08:05.474 "model_number": "SPDK bdev Controller", 00:08:05.474 "serial_number": "SPDK0", 00:08:05.474 "firmware_revision": "25.01", 00:08:05.474 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:05.474 "oacs": { 00:08:05.474 "security": 0, 00:08:05.474 "format": 0, 00:08:05.474 "firmware": 0, 00:08:05.474 "ns_manage": 0 00:08:05.474 }, 00:08:05.474 "multi_ctrlr": true, 00:08:05.474 
"ana_reporting": false 00:08:05.474 }, 00:08:05.474 "vs": { 00:08:05.474 "nvme_version": "1.3" 00:08:05.474 }, 00:08:05.474 "ns_data": { 00:08:05.474 "id": 1, 00:08:05.474 "can_share": true 00:08:05.474 } 00:08:05.474 } 00:08:05.474 ], 00:08:05.474 "mp_policy": "active_passive" 00:08:05.474 } 00:08:05.474 } 00:08:05.474 ] 00:08:05.474 05:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=155316 00:08:05.474 05:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:05.474 05:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:05.731 Running I/O for 10 seconds... 00:08:06.666 Latency(us) 00:08:06.666 [2024-12-13T04:24:06.681Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:06.666 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:06.666 Nvme0n1 : 1.00 23752.00 92.78 0.00 0.00 0.00 0.00 0.00 00:08:06.666 [2024-12-13T04:24:06.681Z] =================================================================================================================== 00:08:06.666 [2024-12-13T04:24:06.681Z] Total : 23752.00 92.78 0.00 0.00 0.00 0.00 0.00 00:08:06.666 00:08:07.601 05:24:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u d313495b-b8ed-475d-9269-5b230b8764a6 00:08:07.601 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:07.601 Nvme0n1 : 2.00 23895.00 93.34 0.00 0.00 0.00 0.00 0.00 00:08:07.601 [2024-12-13T04:24:07.616Z] =================================================================================================================== 00:08:07.601 [2024-12-13T04:24:07.616Z] Total : 23895.00 93.34 0.00 0.00 0.00 0.00 0.00 00:08:07.601 00:08:07.859 true 00:08:07.859 05:24:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d313495b-b8ed-475d-9269-5b230b8764a6 00:08:07.859 05:24:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:07.859 05:24:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:07.859 05:24:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:07.859 05:24:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 155316 00:08:08.795 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:08.795 Nvme0n1 : 3.00 23920.00 93.44 0.00 0.00 0.00 0.00 0.00 00:08:08.795 [2024-12-13T04:24:08.810Z] =================================================================================================================== 00:08:08.795 [2024-12-13T04:24:08.810Z] Total : 23920.00 93.44 0.00 0.00 0.00 0.00 0.00 00:08:08.795 00:08:09.733 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:09.733 Nvme0n1 : 4.00 23976.50 93.66 0.00 0.00 0.00 0.00 0.00 00:08:09.733 [2024-12-13T04:24:09.748Z] 
=================================================================================================================== 00:08:09.733 [2024-12-13T04:24:09.748Z] Total : 23976.50 93.66 0.00 0.00 0.00 0.00 0.00 00:08:09.733 00:08:10.667 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:10.667 Nvme0n1 : 5.00 23894.00 93.34 0.00 0.00 0.00 0.00 0.00 00:08:10.667 [2024-12-13T04:24:10.682Z] =================================================================================================================== 00:08:10.667 [2024-12-13T04:24:10.682Z] Total : 23894.00 93.34 0.00 0.00 0.00 0.00 0.00 00:08:10.667 00:08:11.601 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:11.601 Nvme0n1 : 6.00 23937.00 93.50 0.00 0.00 0.00 0.00 0.00 00:08:11.601 [2024-12-13T04:24:11.616Z] =================================================================================================================== 00:08:11.601 [2024-12-13T04:24:11.616Z] Total : 23937.00 93.50 0.00 0.00 0.00 0.00 0.00 00:08:11.601 00:08:12.974 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:12.974 Nvme0n1 : 7.00 23983.57 93.69 0.00 0.00 0.00 0.00 0.00 00:08:12.974 [2024-12-13T04:24:12.990Z] =================================================================================================================== 00:08:12.975 [2024-12-13T04:24:12.990Z] Total : 23983.57 93.69 0.00 0.00 0.00 0.00 0.00 00:08:12.975 00:08:13.909 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:13.909 Nvme0n1 : 8.00 24026.75 93.85 0.00 0.00 0.00 0.00 0.00 00:08:13.909 [2024-12-13T04:24:13.924Z] =================================================================================================================== 00:08:13.909 [2024-12-13T04:24:13.924Z] Total : 24026.75 93.85 0.00 0.00 0.00 0.00 0.00 00:08:13.909 00:08:14.844 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:14.844 Nvme0n1 : 9.00 24041.78 93.91 0.00 0.00 0.00 0.00 0.00 00:08:14.844 [2024-12-13T04:24:14.859Z] =================================================================================================================== 00:08:14.844 [2024-12-13T04:24:14.859Z] Total : 24041.78 93.91 0.00 0.00 0.00 0.00 0.00 00:08:14.844 00:08:15.781 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:15.781 Nvme0n1 : 10.00 24061.70 93.99 0.00 0.00 0.00 0.00 0.00 00:08:15.781 [2024-12-13T04:24:15.796Z] =================================================================================================================== 00:08:15.781 [2024-12-13T04:24:15.796Z] Total : 24061.70 93.99 0.00 0.00 0.00 0.00 0.00 00:08:15.781 00:08:15.781 00:08:15.781 Latency(us) 00:08:15.781 [2024-12-13T04:24:15.796Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:15.781 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:15.781 Nvme0n1 : 10.00 24065.09 94.00 0.00 0.00 5315.94 3136.37 10673.01 00:08:15.781 [2024-12-13T04:24:15.796Z] =================================================================================================================== 00:08:15.781 [2024-12-13T04:24:15.796Z] Total : 24065.09 94.00 0.00 0.00 5315.94 3136.37 10673.01 00:08:15.781 { 00:08:15.781 "results": [ 00:08:15.781 { 00:08:15.781 "job": "Nvme0n1", 00:08:15.781 "core_mask": "0x2", 00:08:15.781 "workload": "randwrite", 00:08:15.781 "status": "finished", 00:08:15.781 "queue_depth": 128, 00:08:15.781 "io_size": 4096, 00:08:15.781 
"runtime": 10.00391, 00:08:15.781 "iops": 24065.09054959511, 00:08:15.781 "mibps": 94.00425995935589, 00:08:15.781 "io_failed": 0, 00:08:15.781 "io_timeout": 0, 00:08:15.781 "avg_latency_us": 5315.943058122159, 00:08:15.781 "min_latency_us": 3136.365714285714, 00:08:15.781 "max_latency_us": 10673.005714285715 00:08:15.781 } 00:08:15.781 ], 00:08:15.781 "core_count": 1 00:08:15.781 } 00:08:15.781 05:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 155141 00:08:15.781 05:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 155141 ']' 00:08:15.781 05:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 155141 00:08:15.781 05:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:08:15.781 05:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:15.781 05:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 155141 00:08:15.781 05:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:15.781 05:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:15.781 05:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 155141' 00:08:15.781 killing process with pid 155141 00:08:15.781 05:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 155141 00:08:15.781 Received shutdown signal, test time was about 10.000000 seconds 00:08:15.781 00:08:15.781 Latency(us) 00:08:15.781 [2024-12-13T04:24:15.796Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:15.781 [2024-12-13T04:24:15.796Z] =================================================================================================================== 00:08:15.781 [2024-12-13T04:24:15.796Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:15.781 05:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 155141 00:08:16.040 05:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:16.040 05:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:16.299 05:24:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d313495b-b8ed-475d-9269-5b230b8764a6 00:08:16.299 05:24:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:16.559 05:24:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:16.559 05:24:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:16.559 05:24:16 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 151947 00:08:16.559 05:24:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 151947 00:08:16.559 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 151947 Killed "${NVMF_APP[@]}" "$@" 00:08:16.559 05:24:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:08:16.559 05:24:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:16.559 05:24:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:16.559 05:24:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:16.559 05:24:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:16.559 05:24:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=157514 00:08:16.559 05:24:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 157514 00:08:16.559 05:24:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:16.559 05:24:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 157514 ']' 00:08:16.559 05:24:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:16.559 05:24:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:16.559 05:24:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:16.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:16.559 05:24:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:16.559 05:24:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:16.559 [2024-12-13 05:24:16.481255] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:08:16.559 [2024-12-13 05:24:16.481302] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:16.559 [2024-12-13 05:24:16.556493] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.818 [2024-12-13 05:24:16.578318] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:16.818 [2024-12-13 05:24:16.578353] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:16.818 [2024-12-13 05:24:16.578360] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:16.818 [2024-12-13 05:24:16.578365] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
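This is the dirty shutdown the test is named for: the first target (pid 151947) still has the lvstore open when it is SIGKILLed, so the blobstore on the AIO file is never cleanly closed, and a fresh nvmf_tgt instance is started in its place. When the new instance re-creates the AIO bdev just below, the lvstore load path detects the unclean state and replays recovery (the "Performing recovery on blobstore" notices). In outline, with illustrative variable names:

  kill -9 "$nvmf_pid"                              # no clean lvstore shutdown
  nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &                 # second target instance
  $RPC bdev_aio_create "$aio_file" aio_bdev 4096   # reload triggers blobstore recovery
  # the recovered store must look exactly as it did before the kill:
  $RPC bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters'        # expect 61
  $RPC bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'  # expect 99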
00:08:16.818 [2024-12-13 05:24:16.578370] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:16.818 [2024-12-13 05:24:16.578873] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.818 05:24:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:16.818 05:24:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:08:16.818 05:24:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:16.818 05:24:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:16.818 05:24:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:16.818 05:24:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:16.818 05:24:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:17.077 [2024-12-13 05:24:16.875917] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:17.077 [2024-12-13 05:24:16.876010] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:17.077 [2024-12-13 05:24:16.876035] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:17.077 05:24:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:08:17.077 05:24:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev a099cb5e-2bd1-48bc-af4a-44bb46e710d1 00:08:17.077 05:24:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=a099cb5e-2bd1-48bc-af4a-44bb46e710d1 00:08:17.077 05:24:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:17.077 05:24:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:08:17.077 05:24:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:17.077 05:24:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:17.077 05:24:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:17.336 05:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b a099cb5e-2bd1-48bc-af4a-44bb46e710d1 -t 2000 00:08:17.336 [ 00:08:17.336 { 00:08:17.336 "name": "a099cb5e-2bd1-48bc-af4a-44bb46e710d1", 00:08:17.336 "aliases": [ 00:08:17.336 "lvs/lvol" 00:08:17.336 ], 00:08:17.336 "product_name": "Logical Volume", 00:08:17.336 "block_size": 4096, 00:08:17.336 "num_blocks": 38912, 00:08:17.336 "uuid": "a099cb5e-2bd1-48bc-af4a-44bb46e710d1", 00:08:17.336 "assigned_rate_limits": { 00:08:17.336 "rw_ios_per_sec": 0, 00:08:17.336 "rw_mbytes_per_sec": 0, 
00:08:17.336 "r_mbytes_per_sec": 0, 00:08:17.336 "w_mbytes_per_sec": 0 00:08:17.336 }, 00:08:17.336 "claimed": false, 00:08:17.336 "zoned": false, 00:08:17.336 "supported_io_types": { 00:08:17.336 "read": true, 00:08:17.336 "write": true, 00:08:17.336 "unmap": true, 00:08:17.336 "flush": false, 00:08:17.336 "reset": true, 00:08:17.336 "nvme_admin": false, 00:08:17.336 "nvme_io": false, 00:08:17.336 "nvme_io_md": false, 00:08:17.336 "write_zeroes": true, 00:08:17.336 "zcopy": false, 00:08:17.336 "get_zone_info": false, 00:08:17.336 "zone_management": false, 00:08:17.336 "zone_append": false, 00:08:17.336 "compare": false, 00:08:17.336 "compare_and_write": false, 00:08:17.336 "abort": false, 00:08:17.336 "seek_hole": true, 00:08:17.336 "seek_data": true, 00:08:17.336 "copy": false, 00:08:17.336 "nvme_iov_md": false 00:08:17.336 }, 00:08:17.336 "driver_specific": { 00:08:17.336 "lvol": { 00:08:17.336 "lvol_store_uuid": "d313495b-b8ed-475d-9269-5b230b8764a6", 00:08:17.336 "base_bdev": "aio_bdev", 00:08:17.336 "thin_provision": false, 00:08:17.336 "num_allocated_clusters": 38, 00:08:17.336 "snapshot": false, 00:08:17.336 "clone": false, 00:08:17.336 "esnap_clone": false 00:08:17.336 } 00:08:17.336 } 00:08:17.336 } 00:08:17.336 ] 00:08:17.336 05:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:08:17.336 05:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d313495b-b8ed-475d-9269-5b230b8764a6 00:08:17.336 05:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:08:17.595 05:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:08:17.595 05:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d313495b-b8ed-475d-9269-5b230b8764a6 00:08:17.595 05:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:08:17.854 05:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:08:17.854 05:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:17.854 [2024-12-13 05:24:17.804919] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:17.854 05:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d313495b-b8ed-475d-9269-5b230b8764a6 00:08:17.854 05:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:08:17.854 05:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d313495b-b8ed-475d-9269-5b230b8764a6 00:08:17.854 05:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:17.854 05:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:17.854 05:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:17.854 05:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:17.854 05:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:17.854 05:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:17.854 05:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:17.854 05:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:17.854 05:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d313495b-b8ed-475d-9269-5b230b8764a6 00:08:18.113 request: 00:08:18.113 { 00:08:18.113 "uuid": "d313495b-b8ed-475d-9269-5b230b8764a6", 00:08:18.113 "method": "bdev_lvol_get_lvstores", 00:08:18.113 "req_id": 1 00:08:18.113 } 00:08:18.113 Got JSON-RPC error response 00:08:18.113 response: 00:08:18.113 { 00:08:18.113 "code": -19, 00:08:18.113 "message": "No such device" 00:08:18.113 } 00:08:18.113 05:24:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:08:18.113 05:24:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:18.113 05:24:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:18.113 05:24:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:18.113 05:24:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:18.371 aio_bdev 00:08:18.371 05:24:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev a099cb5e-2bd1-48bc-af4a-44bb46e710d1 00:08:18.371 05:24:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=a099cb5e-2bd1-48bc-af4a-44bb46e710d1 00:08:18.371 05:24:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:18.371 05:24:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:08:18.371 05:24:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:18.371 05:24:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:18.371 05:24:18 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:18.629 05:24:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b a099cb5e-2bd1-48bc-af4a-44bb46e710d1 -t 2000 00:08:18.629 [ 00:08:18.629 { 00:08:18.629 "name": "a099cb5e-2bd1-48bc-af4a-44bb46e710d1", 00:08:18.629 "aliases": [ 00:08:18.629 "lvs/lvol" 00:08:18.629 ], 00:08:18.629 "product_name": "Logical Volume", 00:08:18.629 "block_size": 4096, 00:08:18.629 "num_blocks": 38912, 00:08:18.629 "uuid": "a099cb5e-2bd1-48bc-af4a-44bb46e710d1", 00:08:18.629 "assigned_rate_limits": { 00:08:18.629 "rw_ios_per_sec": 0, 00:08:18.629 "rw_mbytes_per_sec": 0, 00:08:18.629 "r_mbytes_per_sec": 0, 00:08:18.629 "w_mbytes_per_sec": 0 00:08:18.629 }, 00:08:18.629 "claimed": false, 00:08:18.629 "zoned": false, 00:08:18.629 "supported_io_types": { 00:08:18.629 "read": true, 00:08:18.629 "write": true, 00:08:18.629 "unmap": true, 00:08:18.629 "flush": false, 00:08:18.629 "reset": true, 00:08:18.629 "nvme_admin": false, 00:08:18.629 "nvme_io": false, 00:08:18.629 "nvme_io_md": false, 00:08:18.629 "write_zeroes": true, 00:08:18.629 "zcopy": false, 00:08:18.629 "get_zone_info": false, 00:08:18.629 "zone_management": false, 00:08:18.629 "zone_append": false, 00:08:18.629 "compare": false, 00:08:18.629 "compare_and_write": false, 00:08:18.629 "abort": false, 00:08:18.629 "seek_hole": true, 00:08:18.629 "seek_data": true, 00:08:18.629 "copy": false, 00:08:18.629 "nvme_iov_md": false 00:08:18.629 }, 00:08:18.629 "driver_specific": { 00:08:18.629 "lvol": { 00:08:18.629 "lvol_store_uuid": "d313495b-b8ed-475d-9269-5b230b8764a6", 00:08:18.629 "base_bdev": "aio_bdev", 00:08:18.629 "thin_provision": false, 00:08:18.629 "num_allocated_clusters": 38, 00:08:18.629 "snapshot": false, 00:08:18.629 "clone": false, 00:08:18.629 "esnap_clone": false 00:08:18.629 } 00:08:18.629 } 00:08:18.629 } 00:08:18.629 ] 00:08:18.629 05:24:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:08:18.629 05:24:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d313495b-b8ed-475d-9269-5b230b8764a6 00:08:18.629 05:24:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:18.888 05:24:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:18.888 05:24:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d313495b-b8ed-475d-9269-5b230b8764a6 00:08:18.888 05:24:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:19.145 05:24:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:19.145 05:24:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete a099cb5e-2bd1-48bc-af4a-44bb46e710d1 00:08:19.404 05:24:19 
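Teardown now runs in reverse order of creation: the lvol was deleted just above, and the store and its backing AIO bdev follow below. Condensed, with the same $RPC shorthand:

  $RPC bdev_lvol_delete "$lvol"
  $RPC bdev_lvol_delete_lvstore -u "$lvs"
  $RPC bdev_aio_delete aio_bdev
  rm -f "$aio_file"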
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d313495b-b8ed-475d-9269-5b230b8764a6 00:08:19.404 05:24:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:19.662 05:24:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:19.662 00:08:19.662 real 0m16.757s 00:08:19.662 user 0m43.558s 00:08:19.662 sys 0m3.634s 00:08:19.662 05:24:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:19.662 05:24:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:19.662 ************************************ 00:08:19.662 END TEST lvs_grow_dirty 00:08:19.662 ************************************ 00:08:19.662 05:24:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:08:19.662 05:24:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:08:19.662 05:24:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:08:19.662 05:24:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:08:19.662 05:24:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:19.662 05:24:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:08:19.662 05:24:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:08:19.662 05:24:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:08:19.662 05:24:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:19.662 nvmf_trace.0 00:08:19.920 05:24:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:08:19.920 05:24:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:08:19.920 05:24:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:19.920 05:24:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:08:19.920 05:24:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:19.920 05:24:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:08:19.920 05:24:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:19.920 05:24:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:19.920 rmmod nvme_tcp 00:08:19.920 rmmod nvme_fabrics 00:08:19.920 rmmod nvme_keyring 00:08:19.920 05:24:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:19.920 05:24:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:08:19.920 05:24:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:08:19.920 
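The closing ritual is common to the nvmf tests: archive the trace shared-memory file for offline debugging, unload the kernel NVMe/TCP initiator modules, then (below) kill the target and flush its namespaced interfaces. By hand this would be roughly the following, with the output path abbreviated; killprocess and remove_spdk_ns in the harness do a little more than shown:

  tar -C /dev/shm -czf "$output_dir/nvmf_trace.0_shm.tar.gz" nvmf_trace.0
  modprobe -v -r nvme-tcp       # rmmod also drops the nvme_fabrics / nvme_keyring deps
  kill "$nvmf_pid" && ip netns delete cvl_0_0_ns_spdk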
05:24:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 157514 ']' 00:08:19.920 05:24:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 157514 00:08:19.921 05:24:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 157514 ']' 00:08:19.921 05:24:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 157514 00:08:19.921 05:24:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:08:19.921 05:24:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:19.921 05:24:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 157514 00:08:19.921 05:24:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:19.921 05:24:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:19.921 05:24:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 157514' 00:08:19.921 killing process with pid 157514 00:08:19.921 05:24:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 157514 00:08:19.921 05:24:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 157514 00:08:20.180 05:24:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:20.180 05:24:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:20.180 05:24:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:20.180 05:24:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:08:20.180 05:24:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:08:20.180 05:24:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:20.180 05:24:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:08:20.180 05:24:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:20.180 05:24:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:20.180 05:24:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:20.180 05:24:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:20.180 05:24:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:22.087 05:24:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:22.087 00:08:22.087 real 0m41.597s 00:08:22.087 user 1m4.426s 00:08:22.087 sys 0m9.947s 00:08:22.087 05:24:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:22.087 05:24:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:22.087 ************************************ 00:08:22.087 END TEST nvmf_lvs_grow 00:08:22.087 ************************************ 00:08:22.087 05:24:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:22.087 05:24:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:22.087 05:24:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:22.087 05:24:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:22.347 ************************************ 00:08:22.347 START TEST nvmf_bdev_io_wait 00:08:22.347 ************************************ 00:08:22.347 05:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:22.347 * Looking for test storage... 00:08:22.347 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:22.347 05:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:22.347 05:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:08:22.347 05:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:22.347 05:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:22.347 05:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:22.347 05:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:22.347 05:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:22.347 05:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:08:22.347 05:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:08:22.347 05:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:08:22.347 05:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:08:22.347 05:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:08:22.347 05:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:08:22.347 05:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:08:22.347 05:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:22.347 05:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:08:22.347 05:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:08:22.347 05:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:22.347 05:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:22.348 05:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:08:22.348 05:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:08:22.348 05:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:22.348 05:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:08:22.348 05:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:08:22.348 05:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:08:22.348 05:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:08:22.348 05:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:22.348 05:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:08:22.348 05:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:08:22.348 05:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:22.348 05:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:22.348 05:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:08:22.348 05:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:22.348 05:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:22.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.348 --rc genhtml_branch_coverage=1 00:08:22.348 --rc genhtml_function_coverage=1 00:08:22.348 --rc genhtml_legend=1 00:08:22.348 --rc geninfo_all_blocks=1 00:08:22.348 --rc geninfo_unexecuted_blocks=1 00:08:22.348 00:08:22.348 ' 00:08:22.348 05:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:22.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.348 --rc genhtml_branch_coverage=1 00:08:22.348 --rc genhtml_function_coverage=1 00:08:22.348 --rc genhtml_legend=1 00:08:22.348 --rc geninfo_all_blocks=1 00:08:22.348 --rc geninfo_unexecuted_blocks=1 00:08:22.348 00:08:22.348 ' 00:08:22.348 05:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:22.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.348 --rc genhtml_branch_coverage=1 00:08:22.348 --rc genhtml_function_coverage=1 00:08:22.348 --rc genhtml_legend=1 00:08:22.348 --rc geninfo_all_blocks=1 00:08:22.348 --rc geninfo_unexecuted_blocks=1 00:08:22.348 00:08:22.348 ' 00:08:22.348 05:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:22.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.348 --rc genhtml_branch_coverage=1 00:08:22.348 --rc genhtml_function_coverage=1 00:08:22.348 --rc genhtml_legend=1 00:08:22.348 --rc geninfo_all_blocks=1 00:08:22.348 --rc geninfo_unexecuted_blocks=1 00:08:22.348 00:08:22.348 ' 00:08:22.348 05:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:22.348 05:24:22 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:08:22.348 05:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:22.348 05:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:22.348 05:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:22.348 05:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:22.348 05:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:22.348 05:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:22.348 05:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:22.348 05:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:22.348 05:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:22.348 05:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:22.348 05:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:08:22.348 05:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:08:22.348 05:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:22.348 05:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:22.348 05:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:22.348 05:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:22.348 05:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:22.348 05:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:08:22.348 05:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:22.348 05:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:22.348 05:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:22.348 05:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.348 05:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.348 05:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.348 05:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:08:22.348 05:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.348 05:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:08:22.348 05:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:22.348 05:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:22.348 05:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:22.348 05:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:22.348 05:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:22.348 05:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:22.348 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:22.348 05:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:22.348 05:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:22.348 05:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:22.348 05:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:22.348 05:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:08:22.348 05:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:08:22.348 05:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:22.348 05:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:22.348 05:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:22.348 05:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:22.348 05:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:22.348 05:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:22.348 05:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:22.348 05:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:22.348 05:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:22.348 05:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:22.349 05:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:08:22.349 05:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:28.926 05:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:28.926 05:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:08:28.926 05:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:28.926 05:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:28.926 05:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:28.926 05:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:28.926 05:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:28.926 05:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:08:28.926 05:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:28.926 05:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:08:28.926 05:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:08:28.926 05:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:08:28.926 05:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:08:28.926 05:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:08:28.926 05:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:08:28.926 05:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:28.926 05:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:28.926 05:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:28.926 05:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:28.926 05:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:28.926 05:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:28.926 05:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:28.926 05:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:28.926 05:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:28.926 05:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:28.926 05:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:28.926 05:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:28.926 05:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:28.926 05:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:28.926 05:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:28.926 05:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:28.926 05:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:28.926 05:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:28.926 05:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:28.926 05:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:28.926 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:28.926 05:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:28.926 05:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:28.926 05:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:28.926 05:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:28.926 05:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:28.926 05:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:28.926 05:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:28.926 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:28.926 05:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:28.926 05:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:28.926 05:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:28.926 05:24:27 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:28.926 05:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:28.926 05:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:28.926 05:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:28.926 05:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:28.926 05:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:28.926 05:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:28.926 05:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:28.926 05:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:28.926 05:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:28.926 05:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:28.926 05:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:28.926 05:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:28.926 Found net devices under 0000:af:00.0: cvl_0_0 00:08:28.926 05:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:28.926 05:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:28.926 05:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:28.926 05:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:28.926 05:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:28.926 05:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:28.926 05:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:28.926 05:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:28.926 05:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:28.926 Found net devices under 0000:af:00.1: cvl_0_1 00:08:28.926 05:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:28.926 05:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:28.926 05:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:08:28.926 05:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:28.926 05:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:28.926 05:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:28.926 05:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:28.926 05:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:28.926 05:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:28.926 05:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:28.926 05:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:28.927 05:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:28.927 05:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:28.927 05:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:28.927 05:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:28.927 05:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:28.927 05:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:28.927 05:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:28.927 05:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:28.927 05:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:28.927 05:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:28.927 05:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:28.927 05:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:28.927 05:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:28.927 05:24:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:28.927 05:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:28.927 05:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:28.927 05:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:28.927 05:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:28.927 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:28.927 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.305 ms 00:08:28.927 00:08:28.927 --- 10.0.0.2 ping statistics --- 00:08:28.927 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:28.927 rtt min/avg/max/mdev = 0.305/0.305/0.305/0.000 ms 00:08:28.927 05:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:28.927 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:28.927 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.075 ms 00:08:28.927 00:08:28.927 --- 10.0.0.1 ping statistics --- 00:08:28.927 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:28.927 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:08:28.927 05:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:28.927 05:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:08:28.927 05:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:28.927 05:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:28.927 05:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:28.927 05:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:28.927 05:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:28.927 05:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:28.927 05:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:28.927 05:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:08:28.927 05:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:28.927 05:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:28.927 05:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:28.927 05:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=161652 00:08:28.927 05:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 161652 00:08:28.927 05:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:08:28.927 05:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 161652 ']' 00:08:28.927 05:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:28.927 05:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:28.927 05:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:28.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:28.927 05:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:28.927 05:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:28.927 [2024-12-13 05:24:28.201777] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:08:28.927 [2024-12-13 05:24:28.201822] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:28.927 [2024-12-13 05:24:28.280912] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:28.927 [2024-12-13 05:24:28.305044] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:28.927 [2024-12-13 05:24:28.305081] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:28.927 [2024-12-13 05:24:28.305087] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:28.927 [2024-12-13 05:24:28.305093] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:28.927 [2024-12-13 05:24:28.305098] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:28.927 [2024-12-13 05:24:28.306531] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:08:28.927 [2024-12-13 05:24:28.306641] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:08:28.927 [2024-12-13 05:24:28.306747] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.927 [2024-12-13 05:24:28.306748] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:08:28.927 05:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:28.927 05:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:08:28.927 05:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:28.927 05:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:28.927 05:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:28.927 05:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:28.927 05:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:08:28.927 05:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.927 05:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:28.927 05:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.927 05:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:08:28.927 05:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.927 05:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:28.927 05:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.927 05:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:28.927 05:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.927 05:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- 
# set +x 00:08:28.927 [2024-12-13 05:24:28.458491] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:28.927 05:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.927 05:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:28.927 05:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.927 05:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:28.927 Malloc0 00:08:28.927 05:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.927 05:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:28.927 05:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.927 05:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:28.927 05:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.927 05:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:28.927 05:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.927 05:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:28.927 05:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.927 05:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:28.927 05:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.927 05:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:28.927 [2024-12-13 05:24:28.505617] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:28.927 05:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.928 05:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=161728 00:08:28.928 05:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:08:28.928 05:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:08:28.928 05:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=161730 00:08:28.928 05:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:28.928 05:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:28.928 05:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:28.928 05:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:28.928 { 00:08:28.928 "params": { 
00:08:28.928 "name": "Nvme$subsystem", 00:08:28.928 "trtype": "$TEST_TRANSPORT", 00:08:28.928 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:28.928 "adrfam": "ipv4", 00:08:28.928 "trsvcid": "$NVMF_PORT", 00:08:28.928 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:28.928 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:28.928 "hdgst": ${hdgst:-false}, 00:08:28.928 "ddgst": ${ddgst:-false} 00:08:28.928 }, 00:08:28.928 "method": "bdev_nvme_attach_controller" 00:08:28.928 } 00:08:28.928 EOF 00:08:28.928 )") 00:08:28.928 05:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:08:28.928 05:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=161732 00:08:28.928 05:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:08:28.928 05:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:28.928 05:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:28.928 05:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:28.928 05:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:08:28.928 05:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:28.928 { 00:08:28.928 "params": { 00:08:28.928 "name": "Nvme$subsystem", 00:08:28.928 "trtype": "$TEST_TRANSPORT", 00:08:28.928 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:28.928 "adrfam": "ipv4", 00:08:28.928 "trsvcid": "$NVMF_PORT", 00:08:28.928 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:28.928 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:28.928 "hdgst": ${hdgst:-false}, 00:08:28.928 "ddgst": ${ddgst:-false} 00:08:28.928 }, 00:08:28.928 "method": "bdev_nvme_attach_controller" 00:08:28.928 } 00:08:28.928 EOF 00:08:28.928 )") 00:08:28.928 05:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=161735 00:08:28.928 05:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:08:28.928 05:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:08:28.928 05:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:28.928 05:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:28.928 05:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:28.928 05:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:08:28.928 05:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:08:28.928 05:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:28.928 05:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:28.928 { 00:08:28.928 "params": { 
00:08:28.928 "name": "Nvme$subsystem", 00:08:28.928 "trtype": "$TEST_TRANSPORT", 00:08:28.928 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:28.928 "adrfam": "ipv4", 00:08:28.928 "trsvcid": "$NVMF_PORT", 00:08:28.928 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:28.928 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:28.928 "hdgst": ${hdgst:-false}, 00:08:28.928 "ddgst": ${ddgst:-false} 00:08:28.928 }, 00:08:28.928 "method": "bdev_nvme_attach_controller" 00:08:28.928 } 00:08:28.928 EOF 00:08:28.928 )") 00:08:28.928 05:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:28.928 05:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:28.928 05:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:28.928 05:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:28.928 05:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:28.928 { 00:08:28.928 "params": { 00:08:28.928 "name": "Nvme$subsystem", 00:08:28.928 "trtype": "$TEST_TRANSPORT", 00:08:28.928 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:28.928 "adrfam": "ipv4", 00:08:28.928 "trsvcid": "$NVMF_PORT", 00:08:28.928 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:28.928 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:28.928 "hdgst": ${hdgst:-false}, 00:08:28.928 "ddgst": ${ddgst:-false} 00:08:28.928 }, 00:08:28.928 "method": "bdev_nvme_attach_controller" 00:08:28.928 } 00:08:28.928 EOF 00:08:28.928 )") 00:08:28.928 05:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:28.928 05:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 161728 00:08:28.928 05:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:28.928 05:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:28.928 05:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:28.928 05:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:28.928 05:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:28.928 05:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:28.928 "params": { 00:08:28.928 "name": "Nvme1", 00:08:28.928 "trtype": "tcp", 00:08:28.928 "traddr": "10.0.0.2", 00:08:28.928 "adrfam": "ipv4", 00:08:28.928 "trsvcid": "4420", 00:08:28.928 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:28.928 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:28.928 "hdgst": false, 00:08:28.928 "ddgst": false 00:08:28.928 }, 00:08:28.928 "method": "bdev_nvme_attach_controller" 00:08:28.928 }' 00:08:28.928 05:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:08:28.928 05:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:28.928 05:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:28.928 "params": { 00:08:28.928 "name": "Nvme1", 00:08:28.928 "trtype": "tcp", 00:08:28.928 "traddr": "10.0.0.2", 00:08:28.928 "adrfam": "ipv4", 00:08:28.928 "trsvcid": "4420", 00:08:28.928 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:28.928 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:28.928 "hdgst": false, 00:08:28.928 "ddgst": false 00:08:28.928 }, 00:08:28.928 "method": "bdev_nvme_attach_controller" 00:08:28.928 }' 00:08:28.928 05:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:28.928 05:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:28.928 "params": { 00:08:28.928 "name": "Nvme1", 00:08:28.928 "trtype": "tcp", 00:08:28.928 "traddr": "10.0.0.2", 00:08:28.928 "adrfam": "ipv4", 00:08:28.928 "trsvcid": "4420", 00:08:28.928 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:28.928 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:28.928 "hdgst": false, 00:08:28.928 "ddgst": false 00:08:28.928 }, 00:08:28.928 "method": "bdev_nvme_attach_controller" 00:08:28.928 }' 00:08:28.928 05:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:28.928 05:24:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:28.928 "params": { 00:08:28.928 "name": "Nvme1", 00:08:28.928 "trtype": "tcp", 00:08:28.928 "traddr": "10.0.0.2", 00:08:28.928 "adrfam": "ipv4", 00:08:28.928 "trsvcid": "4420", 00:08:28.928 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:28.928 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:28.928 "hdgst": false, 00:08:28.928 "ddgst": false 00:08:28.928 }, 00:08:28.928 "method": "bdev_nvme_attach_controller" 00:08:28.928 }' 00:08:28.928 [2024-12-13 05:24:28.558141] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:08:28.928 [2024-12-13 05:24:28.558189] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:08:28.928 [2024-12-13 05:24:28.558725] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:08:28.928 [2024-12-13 05:24:28.558766] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:08:28.928 [2024-12-13 05:24:28.559041] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:08:28.928 [2024-12-13 05:24:28.559076] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:08:28.928 [2024-12-13 05:24:28.561863] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:08:28.928 [2024-12-13 05:24:28.561907] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:08:28.928 [2024-12-13 05:24:28.745451] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.928 [2024-12-13 05:24:28.762561] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:08:28.928 [2024-12-13 05:24:28.838881] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.929 [2024-12-13 05:24:28.856313] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 7 00:08:29.188 [2024-12-13 05:24:28.952143] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:29.188 [2024-12-13 05:24:28.971913] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:08:29.188 [2024-12-13 05:24:29.001656] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:29.188 [2024-12-13 05:24:29.017899] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:08:29.188 Running I/O for 1 seconds... 00:08:29.188 Running I/O for 1 seconds... 00:08:29.188 Running I/O for 1 seconds... 00:08:29.448 Running I/O for 1 seconds... 00:08:30.387 11999.00 IOPS, 46.87 MiB/s 00:08:30.387 Latency(us) 00:08:30.387 [2024-12-13T04:24:30.402Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:30.387 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:08:30.387 Nvme1n1 : 1.01 12040.03 47.03 0.00 0.00 10590.89 6366.35 15166.90 00:08:30.387 [2024-12-13T04:24:30.402Z] =================================================================================================================== 00:08:30.387 [2024-12-13T04:24:30.402Z] Total : 12040.03 47.03 0.00 0.00 10590.89 6366.35 15166.90 00:08:30.387 240424.00 IOPS, 939.16 MiB/s 00:08:30.387 Latency(us) 00:08:30.387 [2024-12-13T04:24:30.402Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:30.387 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:08:30.387 Nvme1n1 : 1.00 240063.74 937.75 0.00 0.00 530.67 222.35 1490.16 00:08:30.387 [2024-12-13T04:24:30.402Z] =================================================================================================================== 00:08:30.387 [2024-12-13T04:24:30.402Z] Total : 240063.74 937.75 0.00 0.00 530.67 222.35 1490.16 00:08:30.387 10074.00 IOPS, 39.35 MiB/s 00:08:30.387 Latency(us) 00:08:30.387 [2024-12-13T04:24:30.402Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:30.387 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:08:30.387 Nvme1n1 : 1.01 10142.83 39.62 0.00 0.00 12576.44 5118.05 19848.05 00:08:30.387 [2024-12-13T04:24:30.402Z] =================================================================================================================== 00:08:30.387 [2024-12-13T04:24:30.402Z] Total : 10142.83 39.62 0.00 0.00 12576.44 5118.05 19848.05 00:08:30.387 11087.00 IOPS, 43.31 MiB/s 00:08:30.387 Latency(us) 00:08:30.387 [2024-12-13T04:24:30.402Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:30.387 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:08:30.387 Nvme1n1 : 1.01 11167.25 43.62 0.00 0.00 11432.81 3464.05 22594.32 00:08:30.387 [2024-12-13T04:24:30.402Z] 
=================================================================================================================== 00:08:30.387 [2024-12-13T04:24:30.402Z] Total : 11167.25 43.62 0.00 0.00 11432.81 3464.05 22594.32 00:08:30.387 05:24:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 161730 00:08:30.387 05:24:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 161732 00:08:30.387 05:24:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 161735 00:08:30.387 05:24:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:30.387 05:24:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.387 05:24:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:30.646 05:24:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.646 05:24:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:08:30.646 05:24:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:08:30.646 05:24:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:30.646 05:24:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:08:30.646 05:24:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:30.646 05:24:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:08:30.646 05:24:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:30.646 05:24:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:30.646 rmmod nvme_tcp 00:08:30.646 rmmod nvme_fabrics 00:08:30.646 rmmod nvme_keyring 00:08:30.646 05:24:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:30.646 05:24:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:08:30.646 05:24:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:08:30.646 05:24:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 161652 ']' 00:08:30.647 05:24:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 161652 00:08:30.647 05:24:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 161652 ']' 00:08:30.647 05:24:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 161652 00:08:30.647 05:24:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:08:30.647 05:24:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:30.647 05:24:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 161652 00:08:30.647 05:24:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:30.647 05:24:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:30.647 05:24:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 161652' 00:08:30.647 killing process with pid 161652 00:08:30.647 05:24:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 161652 00:08:30.647 05:24:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 161652 00:08:30.647 05:24:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:30.647 05:24:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:30.647 05:24:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:30.647 05:24:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:08:30.647 05:24:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:08:30.647 05:24:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:30.647 05:24:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:08:30.906 05:24:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:30.906 05:24:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:30.906 05:24:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:30.906 05:24:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:30.906 05:24:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:32.814 05:24:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:32.814 00:08:32.814 real 0m10.626s 00:08:32.814 user 0m16.046s 00:08:32.814 sys 0m6.102s 00:08:32.814 05:24:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:32.814 05:24:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:32.814 ************************************ 00:08:32.814 END TEST nvmf_bdev_io_wait 00:08:32.814 ************************************ 00:08:32.814 05:24:32 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:32.814 05:24:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:32.814 05:24:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:32.814 05:24:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:32.814 ************************************ 00:08:32.814 START TEST nvmf_queue_depth 00:08:32.814 ************************************ 00:08:32.814 05:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:33.088 * Looking for test storage... 
00:08:33.088 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:33.088 05:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:33.088 05:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:08:33.088 05:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:33.088 05:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:33.088 05:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:33.088 05:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:33.088 05:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:33.088 05:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:08:33.088 05:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:08:33.088 05:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:08:33.088 05:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:08:33.088 05:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:08:33.088 05:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:08:33.088 05:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:08:33.088 05:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:33.088 05:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:08:33.088 05:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:08:33.088 05:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:33.088 05:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:33.088 05:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:08:33.088 05:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:08:33.088 05:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:33.088 05:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:08:33.088 05:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:08:33.088 05:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:08:33.088 05:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:08:33.088 05:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:33.088 05:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:08:33.088 05:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:08:33.088 05:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:33.088 05:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:33.088 05:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:08:33.088 05:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:33.088 05:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:33.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:33.088 --rc genhtml_branch_coverage=1 00:08:33.088 --rc genhtml_function_coverage=1 00:08:33.088 --rc genhtml_legend=1 00:08:33.088 --rc geninfo_all_blocks=1 00:08:33.088 --rc geninfo_unexecuted_blocks=1 00:08:33.088 00:08:33.088 ' 00:08:33.088 05:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:33.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:33.088 --rc genhtml_branch_coverage=1 00:08:33.088 --rc genhtml_function_coverage=1 00:08:33.088 --rc genhtml_legend=1 00:08:33.088 --rc geninfo_all_blocks=1 00:08:33.088 --rc geninfo_unexecuted_blocks=1 00:08:33.088 00:08:33.088 ' 00:08:33.088 05:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:33.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:33.088 --rc genhtml_branch_coverage=1 00:08:33.088 --rc genhtml_function_coverage=1 00:08:33.088 --rc genhtml_legend=1 00:08:33.088 --rc geninfo_all_blocks=1 00:08:33.088 --rc geninfo_unexecuted_blocks=1 00:08:33.088 00:08:33.088 ' 00:08:33.088 05:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:33.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:33.088 --rc genhtml_branch_coverage=1 00:08:33.089 --rc genhtml_function_coverage=1 00:08:33.089 --rc genhtml_legend=1 00:08:33.089 --rc geninfo_all_blocks=1 00:08:33.089 --rc geninfo_unexecuted_blocks=1 00:08:33.089 00:08:33.089 ' 00:08:33.089 05:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:33.089 05:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@7 -- # uname -s 00:08:33.089 05:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:33.089 05:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:33.089 05:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:33.089 05:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:33.089 05:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:33.089 05:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:33.089 05:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:33.089 05:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:33.089 05:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:33.089 05:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:33.089 05:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:08:33.089 05:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:08:33.089 05:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:33.089 05:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:33.089 05:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:33.089 05:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:33.089 05:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:33.089 05:24:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:08:33.089 05:24:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:33.089 05:24:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:33.089 05:24:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:33.089 05:24:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.089 05:24:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.089 05:24:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.089 05:24:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:08:33.089 05:24:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.089 05:24:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:08:33.089 05:24:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:33.089 05:24:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:33.089 05:24:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:33.089 05:24:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:33.089 05:24:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:33.089 05:24:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:33.089 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:33.089 05:24:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:33.089 05:24:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:33.089 05:24:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:33.089 05:24:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:08:33.089 05:24:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:08:33.089 05:24:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:33.089 05:24:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:08:33.089 05:24:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:33.089 05:24:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:33.089 05:24:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:33.089 05:24:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:33.089 05:24:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:33.089 05:24:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:33.089 05:24:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:33.089 05:24:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:33.089 05:24:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:33.089 05:24:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:33.089 05:24:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:08:33.089 05:24:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:39.670 05:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:39.670 05:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:08:39.670 05:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:39.670 05:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:39.670 05:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:39.670 05:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:39.670 05:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:39.670 05:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:08:39.670 05:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:39.670 05:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:08:39.670 05:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:08:39.670 05:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:08:39.670 05:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:08:39.670 05:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:08:39.670 05:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:08:39.670 05:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:39.670 05:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:39.670 05:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:39.670 05:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:39.670 05:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:39.670 05:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:39.670 05:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:39.670 05:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:39.670 05:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:39.670 05:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:39.670 05:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:39.670 05:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:39.670 05:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:39.670 05:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:39.670 05:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:39.670 05:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:39.670 05:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:39.670 05:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:39.670 05:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:39.670 05:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:39.670 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:39.670 05:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:39.670 05:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:39.670 05:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:39.670 05:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:39.670 05:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:39.670 05:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:39.670 05:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:39.670 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:39.670 05:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:39.670 05:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:39.670 05:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:39.670 05:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:39.670 05:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:39.670 05:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:39.670 05:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:39.670 05:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:39.670 05:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:39.670 05:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:39.670 05:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:39.670 05:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:39.670 05:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:39.670 05:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:39.670 05:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:39.670 05:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:39.670 Found net devices under 0000:af:00.0: cvl_0_0 00:08:39.670 05:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:39.670 05:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:39.670 05:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:39.670 05:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:39.670 05:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:39.670 05:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:39.670 05:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:39.670 05:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:39.670 05:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:39.670 Found net devices under 0000:af:00.1: cvl_0_1 00:08:39.670 05:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:39.670 05:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:39.670 05:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:08:39.670 05:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:39.670 05:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:39.670 05:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:39.671 05:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:39.671 05:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:39.671 05:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:39.671 05:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:39.671 05:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:39.671 05:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:39.671 05:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:39.671 05:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:39.671 05:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:39.671 05:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:39.671 05:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:39.671 05:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:39.671 05:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:39.671 05:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:39.671 05:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:39.671 05:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:39.671 05:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:39.671 05:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:39.671 05:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:39.671 05:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:39.671 05:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:39.671 05:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:39.671 05:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:39.671 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:39.671 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.448 ms 00:08:39.671 00:08:39.671 --- 10.0.0.2 ping statistics --- 00:08:39.671 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:39.671 rtt min/avg/max/mdev = 0.448/0.448/0.448/0.000 ms 00:08:39.671 05:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:39.671 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:39.671 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.072 ms 00:08:39.671 00:08:39.671 --- 10.0.0.1 ping statistics --- 00:08:39.671 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:39.671 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:08:39.671 05:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:39.671 05:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:08:39.671 05:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:39.671 05:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:39.671 05:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:39.671 05:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:39.671 05:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:39.671 05:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:39.671 05:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:39.671 05:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:08:39.671 05:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:39.671 05:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:39.671 05:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:39.671 05:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=165460 00:08:39.671 05:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:39.671 05:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 165460 00:08:39.671 05:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 165460 ']' 00:08:39.671 05:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:39.671 05:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:39.671 05:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:39.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:39.671 05:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:39.671 05:24:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:39.671 [2024-12-13 05:24:38.995894] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
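The nvmftestinit sequence traced above does four things: it sources nvmf/common.sh (the "[: : integer expression expected" message earlier in this trace is bash evaluating '[' '' -eq 1 ']' at common.sh line 33 — the -eq operator needs integers and the variable under test expanded empty; a guarded expansion such as [ "${VAR:-0}" -eq 1 ], with VAR standing in for whichever variable that line tests, would avoid it), detects the two Intel E810 ports (0x8086:0x159b) and their net devices cvl_0_0/cvl_0_1, isolates the target-side port in its own network namespace, and verifies reachability with ping in both directions. A minimal sketch of the topology being built, assuming root privileges and using the interface names and addresses reported in this log:

# Sketch of the namespace layout nvmftestinit builds above.
# cvl_0_0 = target-side port, moved into namespace cvl_0_0_ns_spdk (10.0.0.2)
# cvl_0_1 = initiator-side port, left in the root namespace (10.0.0.1)
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# open the NVMe/TCP listener port; tagging the rule with an SPDK_NVMF comment
# lets the teardown later strip it via iptables-save | grep -v SPDK_NVMF
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
ping -c 1 10.0.0.2                                # root namespace -> target namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target namespace -> root namespace

With both pings answered, nvmf_tgt is started inside the namespace (startup banner above, EAL parameters below), so the target listens on 10.0.0.2 while the initiator connects from 10.0.0.1 in the root namespace.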
00:08:39.671 [2024-12-13 05:24:38.995943] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:39.671 [2024-12-13 05:24:39.076022] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:39.671 [2024-12-13 05:24:39.097773] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:39.671 [2024-12-13 05:24:39.097807] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:39.671 [2024-12-13 05:24:39.097814] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:39.671 [2024-12-13 05:24:39.097820] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:39.671 [2024-12-13 05:24:39.097824] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:39.671 [2024-12-13 05:24:39.098315] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:08:39.671 05:24:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:39.671 05:24:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:08:39.671 05:24:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:39.671 05:24:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:39.671 05:24:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:39.671 05:24:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:39.671 05:24:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:39.671 05:24:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.671 05:24:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:39.671 [2024-12-13 05:24:39.240902] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:39.671 05:24:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.671 05:24:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:39.671 05:24:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.671 05:24:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:39.671 Malloc0 00:08:39.671 05:24:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.671 05:24:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:39.671 05:24:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.671 05:24:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:39.671 05:24:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.671 05:24:39 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:39.671 05:24:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.671 05:24:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:39.671 05:24:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.671 05:24:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:39.671 05:24:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.671 05:24:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:39.671 [2024-12-13 05:24:39.290831] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:39.671 05:24:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.671 05:24:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=165663 00:08:39.671 05:24:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:08:39.671 05:24:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:39.671 05:24:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 165663 /var/tmp/bdevperf.sock 00:08:39.671 05:24:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 165663 ']' 00:08:39.671 05:24:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:39.671 05:24:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:39.671 05:24:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:39.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:39.671 05:24:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:39.671 05:24:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:39.671 [2024-12-13 05:24:39.342779] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
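Before the I/O run, queue_depth.sh provisions the target over the RPC socket: a TCP transport (with the -o and -u 8192 options seen in the trace), a 64 MB malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 (-a allows any host, -s sets the serial), the namespace, and a listener on 10.0.0.2:4420; bdevperf is then launched with queue depth 1024 (-q), 4096-byte I/Os (-o), a verify workload (-w) and a 10-second run (-t). A condensed replay of the same sequence, assuming scripts/rpc.py is invoked directly (the test's rpc_cmd helper forwards to it; the Jenkins workspace prefix is dropped here):

# Hedged replay of the provisioning traced above.
RPC="scripts/rpc.py"                                    # target listens on /var/tmp/spdk.sock
$RPC nvmf_create_transport -t tcp -o -u 8192            # TCP transport, options as traced
$RPC bdev_malloc_create 64 512 -b Malloc0               # 64 MB RAM-backed bdev, 512 B blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# initiator side: bdevperf runs in the root namespace against its own RPC socket
build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The attach shows up below at target/queue_depth.sh@34, followed by the NVMe0n1 bdev name and the perform_tests dispatch (bdevperf banner above, EAL parameters and results below).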
00:08:39.672 [2024-12-13 05:24:39.342819] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid165663 ]
00:08:39.672 [2024-12-13 05:24:39.420012] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:39.672 [2024-12-13 05:24:39.442814] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:08:39.672 05:24:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:39.672 05:24:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0
00:08:39.672 05:24:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:08:39.672 05:24:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:39.672 05:24:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:08:39.672 NVMe0n1
00:08:39.672 05:24:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:39.672 05:24:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:08:39.931 Running I/O for 10 seconds...
00:08:41.808 12251.00 IOPS, 47.86 MiB/s
[2024-12-13T04:24:42.761Z] 12368.50 IOPS, 48.31 MiB/s
[2024-12-13T04:24:44.140Z] 12584.33 IOPS, 49.16 MiB/s
[2024-12-13T04:24:45.077Z] 12541.75 IOPS, 48.99 MiB/s
[2024-12-13T04:24:46.022Z] 12639.40 IOPS, 49.37 MiB/s
[2024-12-13T04:24:46.959Z] 12619.00 IOPS, 49.29 MiB/s
[2024-12-13T04:24:47.897Z] 12628.00 IOPS, 49.33 MiB/s
[2024-12-13T04:24:48.835Z] 12650.00 IOPS, 49.41 MiB/s
[2024-12-13T04:24:49.773Z] 12678.89 IOPS, 49.53 MiB/s
[2024-12-13T04:24:50.033Z] 12678.30 IOPS, 49.52 MiB/s
00:08:50.018 Latency(us)
00:08:50.018 [2024-12-13T04:24:50.033Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:50.018 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096)
00:08:50.018 Verification LBA range: start 0x0 length 0x4000
00:08:50.018 NVMe0n1 : 10.05 12708.99 49.64 0.00 0.00 80324.58 18724.57 50181.85
00:08:50.018 [2024-12-13T04:24:50.033Z] ===================================================================================================================
00:08:50.018 [2024-12-13T04:24:50.033Z] Total : 12708.99 49.64 0.00 0.00 80324.58 18724.57 50181.85
00:08:50.018 {
00:08:50.018 "results": [
00:08:50.018 {
00:08:50.018 "job": "NVMe0n1",
00:08:50.018 "core_mask": "0x1",
00:08:50.018 "workload": "verify",
00:08:50.018 "status": "finished",
00:08:50.018 "verify_range": {
00:08:50.018 "start": 0,
00:08:50.018 "length": 16384
00:08:50.018 },
00:08:50.018 "queue_depth": 1024,
00:08:50.018 "io_size": 4096,
00:08:50.018 "runtime": 10.053045,
00:08:50.018 "iops": 12708.985188069882,
00:08:50.018 "mibps": 49.64447339089798,
00:08:50.018 "io_failed": 0,
00:08:50.018 "io_timeout": 0,
00:08:50.018 "avg_latency_us": 80324.57850256648,
00:08:50.018 "min_latency_us": 18724.571428571428,
00:08:50.018 "max_latency_us": 50181.85142857143
00:08:50.018 }
00:08:50.018 ],
00:08:50.018 "core_count": 1
00:08:50.018 }
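The summary above is internally consistent and worth sanity-checking, since queue depth is the point of this test: with 4096-byte I/Os, 12708.99 IOPS corresponds to 12708.99 × 4096 / 2^20 ≈ 49.64 MiB/s (the reported mibps), and by Little's law the average number of I/Os in flight is IOPS × average latency ≈ 12708.99 × 0.080325 s ≈ 1021, i.e. the -q 1024 queue stayed essentially full for the whole run. A quick check using the exact values from the JSON block:

# Consistency check of the bdevperf results above (values copied from the JSON):
awk 'BEGIN { printf "%.2f MiB/s\n", 12708.985188069882 * 4096 / (1024 * 1024) }'             # -> 49.64
awk 'BEGIN { printf "%.1f IOs in flight\n", 12708.985188069882 * 80324.57850256648 / 1e6 }'  # -> ~1020.8

The bdevperf process (pid 165663) is then torn down below.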
00:08:50.018 05:24:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 165663
00:08:50.018 05:24:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 165663 ']'
00:08:50.018 05:24:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 165663
00:08:50.018 05:24:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname
00:08:50.018 05:24:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:08:50.018 05:24:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 165663
00:08:50.018 05:24:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:08:50.018 05:24:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:08:50.018 05:24:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 165663'
00:08:50.018 killing process with pid 165663
00:08:50.018 05:24:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 165663
00:08:50.018 Received shutdown signal, test time was about 10.000000 seconds
00:08:50.018
00:08:50.018 Latency(us)
00:08:50.018 [2024-12-13T04:24:50.033Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:50.018 [2024-12-13T04:24:50.033Z] ===================================================================================================================
00:08:50.018 [2024-12-13T04:24:50.033Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:08:50.018 05:24:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 165663
00:08:50.278 05:24:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT
00:08:50.278 05:24:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini
00:08:50.278 05:24:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup
00:08:50.278 05:24:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync
00:08:50.278 05:24:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:08:50.278 05:24:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e
00:08:50.278 05:24:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20}
00:08:50.278 05:24:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:08:50.278 rmmod nvme_tcp
00:08:50.278 rmmod nvme_fabrics
00:08:50.278 rmmod nvme_keyring
00:08:50.278 05:24:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:08:50.278 05:24:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e
00:08:50.278 05:24:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0
00:08:50.278 05:24:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 165460 ']'
00:08:50.278 05:24:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 165460
00:08:50.278 05:24:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 165460 ']'
00:08:50.278 05:24:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth --
common/autotest_common.sh@958 -- # kill -0 165460 00:08:50.278 05:24:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:08:50.278 05:24:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:50.278 05:24:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 165460 00:08:50.278 05:24:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:50.278 05:24:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:50.278 05:24:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 165460' 00:08:50.278 killing process with pid 165460 00:08:50.278 05:24:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 165460 00:08:50.278 05:24:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 165460 00:08:50.538 05:24:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:50.538 05:24:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:50.538 05:24:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:50.538 05:24:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:08:50.538 05:24:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:08:50.538 05:24:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:50.538 05:24:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:08:50.538 05:24:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:50.538 05:24:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:50.538 05:24:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:50.538 05:24:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:50.538 05:24:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:52.446 05:24:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:52.446 00:08:52.446 real 0m19.604s 00:08:52.446 user 0m22.979s 00:08:52.446 sys 0m5.981s 00:08:52.446 05:24:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:52.446 05:24:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:52.446 ************************************ 00:08:52.446 END TEST nvmf_queue_depth 00:08:52.446 ************************************ 00:08:52.446 05:24:52 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:52.446 05:24:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:52.446 05:24:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:52.446 05:24:52 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@10 -- # set +x 00:08:52.706 ************************************ 00:08:52.706 START TEST nvmf_target_multipath 00:08:52.706 ************************************ 00:08:52.706 05:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:52.706 * Looking for test storage... 00:08:52.706 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:52.706 05:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:52.706 05:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:08:52.706 05:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:52.706 05:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:52.706 05:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:52.706 05:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:52.706 05:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:52.706 05:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:08:52.706 05:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:08:52.706 05:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:08:52.706 05:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:08:52.706 05:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:08:52.706 05:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:08:52.706 05:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:08:52.706 05:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:52.706 05:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:08:52.706 05:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:08:52.706 05:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:52.706 05:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:52.706 05:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:08:52.706 05:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:08:52.706 05:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:52.706 05:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:08:52.706 05:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:08:52.706 05:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:08:52.706 05:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:08:52.706 05:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:52.706 05:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:08:52.706 05:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:08:52.706 05:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:52.706 05:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:52.706 05:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:08:52.706 05:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:52.706 05:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:52.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:52.706 --rc genhtml_branch_coverage=1 00:08:52.706 --rc genhtml_function_coverage=1 00:08:52.706 --rc genhtml_legend=1 00:08:52.706 --rc geninfo_all_blocks=1 00:08:52.706 --rc geninfo_unexecuted_blocks=1 00:08:52.706 00:08:52.706 ' 00:08:52.706 05:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:52.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:52.706 --rc genhtml_branch_coverage=1 00:08:52.706 --rc genhtml_function_coverage=1 00:08:52.706 --rc genhtml_legend=1 00:08:52.706 --rc geninfo_all_blocks=1 00:08:52.706 --rc geninfo_unexecuted_blocks=1 00:08:52.706 00:08:52.706 ' 00:08:52.706 05:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:52.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:52.706 --rc genhtml_branch_coverage=1 00:08:52.706 --rc genhtml_function_coverage=1 00:08:52.706 --rc genhtml_legend=1 00:08:52.706 --rc geninfo_all_blocks=1 00:08:52.706 --rc geninfo_unexecuted_blocks=1 00:08:52.706 00:08:52.706 ' 00:08:52.706 05:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:52.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:52.706 --rc genhtml_branch_coverage=1 00:08:52.706 --rc genhtml_function_coverage=1 00:08:52.706 --rc genhtml_legend=1 00:08:52.706 --rc geninfo_all_blocks=1 00:08:52.706 --rc geninfo_unexecuted_blocks=1 00:08:52.706 00:08:52.706 ' 00:08:52.706 05:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:52.706 05:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:08:52.706 05:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:52.706 05:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:52.706 05:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:52.706 05:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:52.706 05:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:52.706 05:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:52.706 05:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:52.706 05:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:52.706 05:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:52.706 05:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:52.706 05:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:08:52.706 05:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:08:52.706 05:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:52.706 05:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:52.706 05:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:52.706 05:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:52.706 05:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:52.706 05:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:08:52.706 05:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:52.707 05:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:52.707 05:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:52.707 05:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.707 05:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.707 05:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.707 05:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:08:52.707 05:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.707 05:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:08:52.707 05:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:52.707 05:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:52.707 05:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:52.707 05:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:52.707 05:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:52.707 05:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:52.707 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:52.707 05:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:52.707 05:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:52.707 05:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:52.707 05:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:52.707 05:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:52.707 05:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:08:52.707 05:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:52.707 05:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:08:52.707 05:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:52.707 05:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:52.707 05:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:52.707 05:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:52.707 05:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:52.707 05:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:52.707 05:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:52.707 05:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:52.707 05:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:52.707 05:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:52.707 05:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:08:52.707 05:24:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:59.286 05:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:59.286 05:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:08:59.286 05:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:59.286 05:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:59.286 05:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:59.286 05:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:59.286 05:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:59.286 05:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # 
net_devs=() 00:08:59.286 05:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:59.286 05:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:08:59.286 05:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:08:59.286 05:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:08:59.286 05:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:08:59.286 05:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:08:59.286 05:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:08:59.286 05:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:59.286 05:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:59.286 05:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:59.286 05:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:59.286 05:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:59.286 05:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:59.286 05:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:59.286 05:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:59.286 05:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:59.286 05:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:59.286 05:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:59.286 05:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:59.286 05:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:59.286 05:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:59.286 05:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:59.286 05:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:59.286 05:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:59.286 05:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:59.286 05:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:59.286 05:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:59.286 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:59.286 05:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:59.286 05:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:59.286 05:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:59.286 05:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:59.286 05:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:59.286 05:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:59.286 05:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:59.286 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:59.287 05:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:59.287 05:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:59.287 05:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:59.287 05:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:59.287 05:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:59.287 05:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:59.287 05:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:59.287 05:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:59.287 05:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:59.287 05:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:59.287 05:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:59.287 05:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:59.287 05:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:59.287 05:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:59.287 05:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:59.287 05:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:59.287 Found net devices under 0000:af:00.0: cvl_0_0 00:08:59.287 05:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:59.287 05:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:59.287 05:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:59.287 05:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:59.287 05:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:59.287 05:24:58 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:59.287 05:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:59.287 05:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:59.287 05:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:59.287 Found net devices under 0000:af:00.1: cvl_0_1 00:08:59.287 05:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:59.287 05:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:59.287 05:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:08:59.287 05:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:59.287 05:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:59.287 05:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:59.287 05:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:59.287 05:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:59.287 05:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:59.287 05:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:59.287 05:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:59.287 05:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:59.287 05:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:59.287 05:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:59.287 05:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:59.287 05:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:59.287 05:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:59.287 05:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:59.287 05:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:59.287 05:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:59.287 05:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:59.287 05:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:59.287 05:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:59.287 05:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip 
link set cvl_0_1 up 00:08:59.287 05:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:59.287 05:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:59.287 05:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:59.287 05:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:59.287 05:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:59.287 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:59.287 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.332 ms 00:08:59.287 00:08:59.287 --- 10.0.0.2 ping statistics --- 00:08:59.287 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:59.287 rtt min/avg/max/mdev = 0.332/0.332/0.332/0.000 ms 00:08:59.287 05:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:59.287 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:59.287 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms 00:08:59.287 00:08:59.287 --- 10.0.0.1 ping statistics --- 00:08:59.287 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:59.287 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:08:59.287 05:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:59.287 05:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:08:59.287 05:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:59.287 05:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:59.287 05:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:59.287 05:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:59.287 05:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:59.287 05:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:59.287 05:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:59.287 05:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:08:59.287 05:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:08:59.287 only one NIC for nvmf test 00:08:59.287 05:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:08:59.287 05:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:59.287 05:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:59.287 05:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:59.287 05:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 
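For reference, a condensed sketch of the topology nvmf_tcp_init traces above (common.sh@250-291): with two ports on one host, the target-side port is moved into a private network namespace so the machine can act as both initiator and target over real NICs. Interface names, addresses, and the SPDK_NVMF iptables tag are taken from this log; error handling and the full ipts wrapper are omitted.

    ip netns add cvl_0_0_ns_spdk                  # target lives in its own netns
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP port; the comment tag lets cleanup strip the rule later
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment SPDK_NVMF
    ping -c 1 10.0.0.2                            # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target ns -> root ns

Separately, the "[: : integer expression expected" warning from common.sh line 33, traced at the top of this section as '[' '' -eq 1 ']', comes from applying an arithmetic test to a variable that expands empty. The test exits nonzero, so the branch is skipped as intended, but the noise is avoidable with a defaulted expansion. A sketch, with an illustrative flag name since the trace does not show which variable was empty:

    # before: [ "$SPDK_TEST_FOO" -eq 1 ] warns whenever the flag is unset
    if [[ "${SPDK_TEST_FOO:-0}" -eq 1 ]]; then
        echo "flag set"
    fi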
00:08:59.287 05:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:59.287 05:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:59.287 rmmod nvme_tcp 00:08:59.287 rmmod nvme_fabrics 00:08:59.287 rmmod nvme_keyring 00:08:59.287 05:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:59.287 05:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:59.287 05:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:08:59.287 05:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:08:59.287 05:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:59.287 05:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:59.287 05:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:59.287 05:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:59.287 05:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:08:59.287 05:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:59.287 05:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:08:59.287 05:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:59.287 05:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:59.287 05:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:59.287 05:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:59.287 05:24:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:01.194 05:25:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:01.194 05:25:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:09:01.194 05:25:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:09:01.194 05:25:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:01.194 05:25:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:01.194 05:25:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:01.194 05:25:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:09:01.194 05:25:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:01.194 05:25:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:01.194 05:25:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:01.194 05:25:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:01.194 05:25:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@129 -- # return 0 00:09:01.194 05:25:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:09:01.194 05:25:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:01.194 05:25:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:01.194 05:25:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:01.194 05:25:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:01.194 05:25:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:01.194 05:25:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:09:01.194 05:25:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:09:01.194 05:25:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:01.194 05:25:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:01.194 05:25:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:01.194 05:25:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:01.194 05:25:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:01.194 05:25:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:01.194 00:09:01.194 real 0m8.312s 00:09:01.194 user 0m1.870s 00:09:01.194 sys 0m4.429s 00:09:01.194 05:25:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:01.194 05:25:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:01.194 ************************************ 00:09:01.194 END TEST nvmf_target_multipath 00:09:01.194 ************************************ 00:09:01.195 05:25:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:01.195 05:25:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:01.195 05:25:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:01.195 05:25:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:01.195 ************************************ 00:09:01.195 START TEST nvmf_zcopy 00:09:01.195 ************************************ 00:09:01.195 05:25:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:01.195 * Looking for test storage... 
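The matching teardown, traced twice above (once when the multipath test bails out with "only one NIC for nvmf test", once again via the EXIT trap), uses the tagged rule in reverse. A condensed sketch; the unload retry's exit condition is not visible in this log, so the break is an assumption, and the namespace removal happens inside _remove_spdk_ns, whose trace is redirected away:

    set +e
    for i in {1..20}; do
        # nvme-tcp drags nvme_fabrics/nvme_keyring out with it (see rmmod lines)
        modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
    done
    set -e
    # restore the firewall minus the SPDK-tagged rules inserted during setup
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    ip netns delete cvl_0_0_ns_spdk   # assumption: done by _remove_spdk_ns
    ip -4 addr flush cvl_0_1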
00:09:01.195 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:01.195 05:25:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:01.195 05:25:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:09:01.195 05:25:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:01.195 05:25:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:01.195 05:25:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:01.195 05:25:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:01.195 05:25:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:01.195 05:25:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:09:01.195 05:25:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:09:01.195 05:25:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:09:01.195 05:25:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:09:01.195 05:25:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:09:01.195 05:25:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:09:01.195 05:25:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:09:01.195 05:25:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:01.195 05:25:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:09:01.195 05:25:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:09:01.195 05:25:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:01.195 05:25:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:01.195 05:25:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:09:01.195 05:25:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:09:01.195 05:25:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:01.195 05:25:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:09:01.195 05:25:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:09:01.195 05:25:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:09:01.195 05:25:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:09:01.195 05:25:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:01.195 05:25:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:09:01.195 05:25:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:09:01.195 05:25:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:01.195 05:25:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:01.195 05:25:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:09:01.195 05:25:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:01.195 05:25:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:01.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:01.195 --rc genhtml_branch_coverage=1 00:09:01.195 --rc genhtml_function_coverage=1 00:09:01.195 --rc genhtml_legend=1 00:09:01.195 --rc geninfo_all_blocks=1 00:09:01.195 --rc geninfo_unexecuted_blocks=1 00:09:01.195 00:09:01.195 ' 00:09:01.195 05:25:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:01.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:01.195 --rc genhtml_branch_coverage=1 00:09:01.195 --rc genhtml_function_coverage=1 00:09:01.195 --rc genhtml_legend=1 00:09:01.195 --rc geninfo_all_blocks=1 00:09:01.195 --rc geninfo_unexecuted_blocks=1 00:09:01.195 00:09:01.195 ' 00:09:01.195 05:25:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:01.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:01.195 --rc genhtml_branch_coverage=1 00:09:01.195 --rc genhtml_function_coverage=1 00:09:01.195 --rc genhtml_legend=1 00:09:01.195 --rc geninfo_all_blocks=1 00:09:01.195 --rc geninfo_unexecuted_blocks=1 00:09:01.195 00:09:01.195 ' 00:09:01.195 05:25:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:01.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:01.195 --rc genhtml_branch_coverage=1 00:09:01.195 --rc genhtml_function_coverage=1 00:09:01.195 --rc genhtml_legend=1 00:09:01.195 --rc geninfo_all_blocks=1 00:09:01.195 --rc geninfo_unexecuted_blocks=1 00:09:01.195 00:09:01.195 ' 00:09:01.195 05:25:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:01.195 05:25:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:09:01.195 05:25:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:09:01.195 05:25:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:01.195 05:25:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:01.195 05:25:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:01.195 05:25:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:01.195 05:25:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:01.195 05:25:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:01.195 05:25:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:01.195 05:25:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:01.195 05:25:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:01.195 05:25:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:09:01.195 05:25:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:09:01.195 05:25:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:01.195 05:25:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:01.195 05:25:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:01.195 05:25:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:01.195 05:25:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:01.195 05:25:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:09:01.195 05:25:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:01.195 05:25:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:01.195 05:25:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:01.195 05:25:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.195 05:25:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.195 05:25:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.195 05:25:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:09:01.195 05:25:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.195 05:25:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:09:01.195 05:25:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:01.195 05:25:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:01.196 05:25:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:01.196 05:25:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:01.196 05:25:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:01.196 05:25:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:01.196 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:01.196 05:25:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:01.196 05:25:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:01.196 05:25:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:01.196 05:25:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:09:01.196 05:25:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:01.196 05:25:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:09:01.196 05:25:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:01.196 05:25:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:01.196 05:25:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:01.196 05:25:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:01.196 05:25:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:01.196 05:25:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:01.196 05:25:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:01.196 05:25:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:01.196 05:25:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:09:01.196 05:25:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:07.772 05:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:07.772 05:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:09:07.772 05:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:07.772 05:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:07.772 05:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:07.772 05:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:07.772 05:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:07.772 05:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:09:07.772 05:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:07.772 05:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:09:07.772 05:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:09:07.772 05:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:09:07.772 05:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:09:07.772 05:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:09:07.772 05:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:09:07.772 05:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:07.772 05:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:07.772 05:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:07.772 05:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:07.772 05:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:07.772 05:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:07.772 05:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:07.772 05:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:07.772 05:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:07.772 05:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:07.772 05:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:07.772 05:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:07.772 05:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:07.772 05:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:07.772 05:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:07.772 05:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:07.772 05:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:07.772 05:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:07.772 05:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:07.772 05:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:07.772 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:07.772 05:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:07.772 05:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:07.772 05:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:07.772 05:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:07.772 05:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:07.772 05:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:07.772 05:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:07.772 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:07.772 05:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:07.772 05:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:07.772 05:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:07.772 05:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:07.772 05:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:07.772 05:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:07.772 05:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:07.772 05:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:07.772 05:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:07.772 05:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:07.772 05:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:07.772 05:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:07.772 05:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:07.772 05:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:07.772 05:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:07.772 05:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:07.772 Found net devices under 0000:af:00.0: cvl_0_0 00:09:07.772 05:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:07.772 05:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:07.772 05:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:07.772 05:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:07.772 05:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:07.772 05:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:07.772 05:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:07.772 05:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:07.772 05:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:07.772 Found net devices under 0000:af:00.1: cvl_0_1 00:09:07.772 05:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:07.772 05:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:07.772 05:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:09:07.772 05:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:07.772 05:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:07.772 05:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:07.772 05:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:07.772 05:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:07.772 05:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:07.773 05:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:07.773 05:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:07.773 05:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:07.773 05:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:07.773 05:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:07.773 05:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:09:07.773 05:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:07.773 05:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:07.773 05:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:07.773 05:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:07.773 05:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:07.773 05:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:07.773 05:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:07.773 05:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:07.773 05:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:07.773 05:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:07.773 05:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:07.773 05:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:07.773 05:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:07.773 05:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:07.773 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:07.773 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.368 ms 00:09:07.773 00:09:07.773 --- 10.0.0.2 ping statistics --- 00:09:07.773 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:07.773 rtt min/avg/max/mdev = 0.368/0.368/0.368/0.000 ms 00:09:07.773 05:25:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:07.773 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:07.773 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.176 ms 00:09:07.773 00:09:07.773 --- 10.0.0.1 ping statistics --- 00:09:07.773 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:07.773 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:09:07.773 05:25:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:07.773 05:25:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:09:07.773 05:25:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:07.773 05:25:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:07.773 05:25:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:07.773 05:25:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:07.773 05:25:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:07.773 05:25:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:07.773 05:25:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:07.773 05:25:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:09:07.773 05:25:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:07.773 05:25:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:07.773 05:25:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:07.773 05:25:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=174433 00:09:07.773 05:25:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 174433 00:09:07.773 05:25:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:07.773 05:25:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 174433 ']' 00:09:07.773 05:25:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:07.773 05:25:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:07.773 05:25:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:07.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:07.773 05:25:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:07.773 05:25:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:07.773 [2024-12-13 05:25:07.098980] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
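Once networking is up, nvmfappstart launches the target inside the namespace and blocks until its RPC socket answers. A minimal sketch of the sequence traced above (the binary path is shortened from the full workspace path in this log; per the message above, waitforlisten polls the UNIX socket /var/tmp/spdk.sock):

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    waitforlisten "$nvmfpid"   # returns once the app listens on /var/tmp/spdk.sock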
00:09:07.773 [2024-12-13 05:25:07.099031] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:07.773 [2024-12-13 05:25:07.176680] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:07.773 [2024-12-13 05:25:07.197534] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:07.773 [2024-12-13 05:25:07.197567] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:07.773 [2024-12-13 05:25:07.197574] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:07.773 [2024-12-13 05:25:07.197580] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:07.773 [2024-12-13 05:25:07.197585] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:07.773 [2024-12-13 05:25:07.198070] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:09:07.773 05:25:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:07.773 05:25:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:09:07.773 05:25:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:07.773 05:25:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:07.773 05:25:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:07.773 05:25:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:07.773 05:25:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:09:07.773 05:25:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:09:07.773 05:25:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.773 05:25:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:07.773 [2024-12-13 05:25:07.327486] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:07.773 05:25:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.773 05:25:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:07.773 05:25:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.773 05:25:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:07.773 05:25:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.773 05:25:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:07.773 05:25:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.773 05:25:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:07.773 [2024-12-13 05:25:07.347664] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:09:07.773 05:25:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.773 05:25:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:07.773 05:25:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.773 05:25:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:07.773 05:25:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.773 05:25:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:09:07.773 05:25:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.773 05:25:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:07.773 malloc0 00:09:07.773 05:25:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.773 05:25:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:09:07.773 05:25:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.773 05:25:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:07.773 05:25:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.773 05:25:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:09:07.773 05:25:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:09:07.773 05:25:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:09:07.773 05:25:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:09:07.773 05:25:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:07.773 05:25:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:07.773 { 00:09:07.773 "params": { 00:09:07.773 "name": "Nvme$subsystem", 00:09:07.773 "trtype": "$TEST_TRANSPORT", 00:09:07.773 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:07.773 "adrfam": "ipv4", 00:09:07.773 "trsvcid": "$NVMF_PORT", 00:09:07.773 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:07.773 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:07.773 "hdgst": ${hdgst:-false}, 00:09:07.773 "ddgst": ${ddgst:-false} 00:09:07.773 }, 00:09:07.773 "method": "bdev_nvme_attach_controller" 00:09:07.773 } 00:09:07.773 EOF 00:09:07.773 )") 00:09:07.774 05:25:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:09:07.774 05:25:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
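With the app listening, the zcopy target is assembled entirely over RPC; rpc_cmd in the trace is a thin wrapper around scripts/rpc.py. The sequence above, condensed with arguments copied from the trace:

    rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy   # in-capsule data size 0, zero-copy on
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001 -m 10                    # any host, serial number, max 10 namespaces
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    rpc.py bdev_malloc_create 32 4096 -b malloc0          # 32 MiB RAM bdev, 4096-byte blocks
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

bdevperf then connects as the initiator using the JSON that gen_nvmf_target_json emits, whose heredoc is traced just below.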
00:09:07.774 05:25:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:09:07.774 05:25:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:07.774 "params": { 00:09:07.774 "name": "Nvme1", 00:09:07.774 "trtype": "tcp", 00:09:07.774 "traddr": "10.0.0.2", 00:09:07.774 "adrfam": "ipv4", 00:09:07.774 "trsvcid": "4420", 00:09:07.774 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:07.774 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:07.774 "hdgst": false, 00:09:07.774 "ddgst": false 00:09:07.774 }, 00:09:07.774 "method": "bdev_nvme_attach_controller" 00:09:07.774 }' 00:09:07.774 [2024-12-13 05:25:07.431241] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:09:07.774 [2024-12-13 05:25:07.431281] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid174456 ] 00:09:07.774 [2024-12-13 05:25:07.503711] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:07.774 [2024-12-13 05:25:07.525939] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:07.774 Running I/O for 10 seconds... 00:09:10.091 8746.00 IOPS, 68.33 MiB/s [2024-12-13T04:25:11.044Z] 8831.50 IOPS, 69.00 MiB/s [2024-12-13T04:25:11.981Z] 8798.67 IOPS, 68.74 MiB/s [2024-12-13T04:25:12.918Z] 8816.00 IOPS, 68.88 MiB/s [2024-12-13T04:25:13.855Z] 8825.40 IOPS, 68.95 MiB/s [2024-12-13T04:25:14.791Z] 8832.83 IOPS, 69.01 MiB/s [2024-12-13T04:25:16.168Z] 8836.43 IOPS, 69.03 MiB/s [2024-12-13T04:25:17.105Z] 8840.75 IOPS, 69.07 MiB/s [2024-12-13T04:25:18.044Z] 8842.89 IOPS, 69.09 MiB/s [2024-12-13T04:25:18.044Z] 8845.30 IOPS, 69.10 MiB/s 00:09:18.029 Latency(us) 00:09:18.029 [2024-12-13T04:25:18.044Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:18.029 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:09:18.029 Verification LBA range: start 0x0 length 0x1000 00:09:18.029 Nvme1n1 : 10.01 8849.52 69.14 0.00 0.00 14422.83 1895.86 22594.32 00:09:18.029 [2024-12-13T04:25:18.044Z] =================================================================================================================== 00:09:18.029 [2024-12-13T04:25:18.044Z] Total : 8849.52 69.14 0.00 0.00 14422.83 1895.86 22594.32 00:09:18.029 05:25:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=176237 00:09:18.029 05:25:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:09:18.029 05:25:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:18.029 05:25:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:09:18.029 05:25:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:09:18.029 05:25:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:09:18.029 05:25:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:09:18.029 05:25:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:18.029 05:25:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:18.029 { 00:09:18.029 "params": { 00:09:18.029 "name": 
"Nvme$subsystem", 00:09:18.029 "trtype": "$TEST_TRANSPORT", 00:09:18.029 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:18.029 "adrfam": "ipv4", 00:09:18.029 "trsvcid": "$NVMF_PORT", 00:09:18.029 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:18.029 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:18.029 "hdgst": ${hdgst:-false}, 00:09:18.029 "ddgst": ${ddgst:-false} 00:09:18.029 }, 00:09:18.029 "method": "bdev_nvme_attach_controller" 00:09:18.029 } 00:09:18.029 EOF 00:09:18.029 )") 00:09:18.029 05:25:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:09:18.029 [2024-12-13 05:25:17.959003] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.029 [2024-12-13 05:25:17.959037] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.029 05:25:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:09:18.029 05:25:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:09:18.029 05:25:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:18.029 "params": { 00:09:18.029 "name": "Nvme1", 00:09:18.029 "trtype": "tcp", 00:09:18.029 "traddr": "10.0.0.2", 00:09:18.029 "adrfam": "ipv4", 00:09:18.029 "trsvcid": "4420", 00:09:18.029 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:18.029 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:18.029 "hdgst": false, 00:09:18.029 "ddgst": false 00:09:18.029 }, 00:09:18.029 "method": "bdev_nvme_attach_controller" 00:09:18.029 }' 00:09:18.029 [2024-12-13 05:25:17.971001] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.029 [2024-12-13 05:25:17.971013] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.029 [2024-12-13 05:25:17.983033] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.029 [2024-12-13 05:25:17.983044] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.029 [2024-12-13 05:25:17.995062] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.029 [2024-12-13 05:25:17.995072] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.029 [2024-12-13 05:25:17.997000] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:09:18.029 [2024-12-13 05:25:17.997000] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization...
00:09:18.029 [2024-12-13 05:25:17.997038] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid176237 ]
[... subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use / nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace: this pair recurs roughly every 12 ms from 05:25:17.971 through 05:25:18.404, interleaved with the two notices below; duplicate entries elided ...]
00:09:18.289 [2024-12-13 05:25:18.071725] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:18.289 [2024-12-13 05:25:18.094276] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
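The error pair introduced above dominates the remainder of this section. It looks alarming but appears to be expected noise from this test: while bdevperf drives I/O, the harness keeps issuing namespace-management RPCs against a subsystem whose NSID 1 is already attached, and the target rejects every attempt. Purely as an illustration of what provokes this pair (the bdev name Malloc0 and the use of scripts/rpc.py are assumptions, not taken from this log):

# Re-adding NSID 1 on a subsystem that already exposes it is rejected with
# "Requested NSID 1 already in use" followed by "Unable to add namespace".
scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 Malloc0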
00:09:18.549 Running I/O for 5 seconds...
[... the same subsystem.c:2130 / nvmf_rpc.c:1520 error pair keeps repeating roughly every 13 ms for the rest of the run (05:25:18.420 through 05:25:21.692); only the per-second throughput samples are kept below ...]
00:09:19.588 16955.00 IOPS, 132.46 MiB/s [2024-12-13T04:25:19.603Z]
00:09:20.624 16981.50 IOPS, 132.67 MiB/s [2024-12-13T04:25:20.639Z]
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.403 [2024-12-13 05:25:21.307845] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.403 [2024-12-13 05:25:21.307863] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.403 [2024-12-13 05:25:21.321827] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.403 [2024-12-13 05:25:21.321844] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.403 [2024-12-13 05:25:21.335671] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.403 [2024-12-13 05:25:21.335689] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.403 [2024-12-13 05:25:21.349280] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.403 [2024-12-13 05:25:21.349297] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.403 [2024-12-13 05:25:21.362973] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.403 [2024-12-13 05:25:21.362990] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.403 [2024-12-13 05:25:21.376792] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.403 [2024-12-13 05:25:21.376810] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.403 [2024-12-13 05:25:21.390678] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.403 [2024-12-13 05:25:21.390695] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.403 [2024-12-13 05:25:21.404205] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.403 [2024-12-13 05:25:21.404223] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.403 16991.00 IOPS, 132.74 MiB/s [2024-12-13T04:25:21.418Z] [2024-12-13 05:25:21.417483] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.403 [2024-12-13 05:25:21.417502] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.663 [2024-12-13 05:25:21.431081] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.663 [2024-12-13 05:25:21.431099] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.663 [2024-12-13 05:25:21.444949] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.663 [2024-12-13 05:25:21.444969] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.663 [2024-12-13 05:25:21.458510] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.663 [2024-12-13 05:25:21.458528] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.663 [2024-12-13 05:25:21.472072] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.663 [2024-12-13 05:25:21.472090] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.663 [2024-12-13 05:25:21.485787] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.663 [2024-12-13 05:25:21.485805] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.663 [2024-12-13 
05:25:21.499305] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.663 [2024-12-13 05:25:21.499323] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.663 [2024-12-13 05:25:21.512952] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.663 [2024-12-13 05:25:21.512970] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.663 [2024-12-13 05:25:21.526898] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.663 [2024-12-13 05:25:21.526917] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.663 [2024-12-13 05:25:21.540915] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.663 [2024-12-13 05:25:21.540933] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.663 [2024-12-13 05:25:21.554355] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.663 [2024-12-13 05:25:21.554374] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.663 [2024-12-13 05:25:21.568201] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.664 [2024-12-13 05:25:21.568220] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.664 [2024-12-13 05:25:21.582053] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.664 [2024-12-13 05:25:21.582071] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.664 [2024-12-13 05:25:21.595652] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.664 [2024-12-13 05:25:21.595671] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.664 [2024-12-13 05:25:21.609671] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.664 [2024-12-13 05:25:21.609695] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.664 [2024-12-13 05:25:21.623084] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.664 [2024-12-13 05:25:21.623102] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.664 [2024-12-13 05:25:21.636978] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.664 [2024-12-13 05:25:21.636997] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.664 [2024-12-13 05:25:21.650708] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.664 [2024-12-13 05:25:21.650726] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.664 [2024-12-13 05:25:21.664708] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.664 [2024-12-13 05:25:21.664726] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.664 [2024-12-13 05:25:21.678225] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.664 [2024-12-13 05:25:21.678243] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.923 [2024-12-13 05:25:21.692515] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.924 [2024-12-13 05:25:21.692535] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.924 [2024-12-13 05:25:21.706424] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.924 [2024-12-13 05:25:21.706443] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.924 [2024-12-13 05:25:21.719839] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.924 [2024-12-13 05:25:21.719859] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.924 [2024-12-13 05:25:21.733289] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.924 [2024-12-13 05:25:21.733309] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.924 [2024-12-13 05:25:21.746785] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.924 [2024-12-13 05:25:21.746804] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.924 [2024-12-13 05:25:21.760244] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.924 [2024-12-13 05:25:21.760263] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.924 [2024-12-13 05:25:21.774082] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.924 [2024-12-13 05:25:21.774100] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.924 [2024-12-13 05:25:21.787885] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.924 [2024-12-13 05:25:21.787903] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.924 [2024-12-13 05:25:21.801430] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.924 [2024-12-13 05:25:21.801455] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.924 [2024-12-13 05:25:21.815184] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.924 [2024-12-13 05:25:21.815202] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.924 [2024-12-13 05:25:21.828610] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.924 [2024-12-13 05:25:21.828629] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.924 [2024-12-13 05:25:21.843094] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.924 [2024-12-13 05:25:21.843114] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.924 [2024-12-13 05:25:21.858329] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.924 [2024-12-13 05:25:21.858347] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.924 [2024-12-13 05:25:21.872421] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.924 [2024-12-13 05:25:21.872444] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.924 [2024-12-13 05:25:21.886413] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.924 [2024-12-13 05:25:21.886432] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.924 [2024-12-13 05:25:21.899996] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.924 [2024-12-13 05:25:21.900017] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.924 [2024-12-13 05:25:21.913576] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.924 [2024-12-13 05:25:21.913594] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.924 [2024-12-13 05:25:21.927683] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.924 [2024-12-13 05:25:21.927701] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.183 [2024-12-13 05:25:21.941381] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.183 [2024-12-13 05:25:21.941399] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.183 [2024-12-13 05:25:21.955280] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.183 [2024-12-13 05:25:21.955298] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.183 [2024-12-13 05:25:21.968821] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.183 [2024-12-13 05:25:21.968839] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.184 [2024-12-13 05:25:21.982901] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.184 [2024-12-13 05:25:21.982919] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.184 [2024-12-13 05:25:21.996991] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.184 [2024-12-13 05:25:21.997008] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.184 [2024-12-13 05:25:22.010175] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.184 [2024-12-13 05:25:22.010193] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.184 [2024-12-13 05:25:22.024145] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.184 [2024-12-13 05:25:22.024164] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.184 [2024-12-13 05:25:22.037801] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.184 [2024-12-13 05:25:22.037819] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.184 [2024-12-13 05:25:22.051556] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.184 [2024-12-13 05:25:22.051574] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.184 [2024-12-13 05:25:22.065320] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.184 [2024-12-13 05:25:22.065338] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.184 [2024-12-13 05:25:22.078852] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.184 [2024-12-13 05:25:22.078871] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.184 [2024-12-13 05:25:22.092483] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.184 [2024-12-13 05:25:22.092504] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.184 [2024-12-13 05:25:22.106193] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.184 [2024-12-13 05:25:22.106211] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.184 [2024-12-13 05:25:22.119736] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.184 [2024-12-13 05:25:22.119756] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.184 [2024-12-13 05:25:22.133663] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.184 [2024-12-13 05:25:22.133685] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.184 [2024-12-13 05:25:22.147178] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.184 [2024-12-13 05:25:22.147195] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.184 [2024-12-13 05:25:22.161192] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.184 [2024-12-13 05:25:22.161210] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.184 [2024-12-13 05:25:22.174908] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.184 [2024-12-13 05:25:22.174926] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.184 [2024-12-13 05:25:22.188960] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.184 [2024-12-13 05:25:22.188978] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.444 [2024-12-13 05:25:22.202518] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.444 [2024-12-13 05:25:22.202536] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.444 [2024-12-13 05:25:22.216748] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.444 [2024-12-13 05:25:22.216766] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.444 [2024-12-13 05:25:22.230532] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.444 [2024-12-13 05:25:22.230551] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.444 [2024-12-13 05:25:22.244039] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.444 [2024-12-13 05:25:22.244056] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.444 [2024-12-13 05:25:22.257714] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.444 [2024-12-13 05:25:22.257731] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.444 [2024-12-13 05:25:22.271079] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.444 [2024-12-13 05:25:22.271096] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.444 [2024-12-13 05:25:22.284870] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.444 [2024-12-13 05:25:22.284888] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.444 [2024-12-13 05:25:22.298359] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.444 [2024-12-13 05:25:22.298376] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.444 [2024-12-13 05:25:22.312131] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.444 [2024-12-13 05:25:22.312149] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.444 [2024-12-13 05:25:22.325810] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.444 [2024-12-13 05:25:22.325828] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.444 [2024-12-13 05:25:22.339617] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.444 [2024-12-13 05:25:22.339635] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.444 [2024-12-13 05:25:22.354061] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.444 [2024-12-13 05:25:22.354078] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.444 [2024-12-13 05:25:22.369490] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.444 [2024-12-13 05:25:22.369507] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.444 [2024-12-13 05:25:22.383655] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.444 [2024-12-13 05:25:22.383672] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.444 [2024-12-13 05:25:22.397346] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.444 [2024-12-13 05:25:22.397369] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.444 [2024-12-13 05:25:22.411236] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.444 [2024-12-13 05:25:22.411255] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.444 17012.00 IOPS, 132.91 MiB/s [2024-12-13T04:25:22.459Z] [2024-12-13 05:25:22.424992] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.444 [2024-12-13 05:25:22.425010] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.444 [2024-12-13 05:25:22.438630] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.444 [2024-12-13 05:25:22.438648] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.444 [2024-12-13 05:25:22.452559] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.444 [2024-12-13 05:25:22.452576] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.703 [2024-12-13 05:25:22.466191] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.703 [2024-12-13 05:25:22.466208] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.703 [2024-12-13 05:25:22.480132] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.703 [2024-12-13 05:25:22.480150] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.703 [2024-12-13 05:25:22.493935] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:09:22.703 [2024-12-13 05:25:22.493953] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.703 [2024-12-13 05:25:22.507999] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.703 [2024-12-13 05:25:22.508017] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.703 [2024-12-13 05:25:22.521833] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.703 [2024-12-13 05:25:22.521850] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.703 [2024-12-13 05:25:22.535514] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.703 [2024-12-13 05:25:22.535532] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.703 [2024-12-13 05:25:22.548847] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.703 [2024-12-13 05:25:22.548866] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.704 [2024-12-13 05:25:22.562413] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.704 [2024-12-13 05:25:22.562431] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.704 [2024-12-13 05:25:22.575916] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.704 [2024-12-13 05:25:22.575935] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.704 [2024-12-13 05:25:22.589410] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.704 [2024-12-13 05:25:22.589428] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.704 [2024-12-13 05:25:22.603166] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.704 [2024-12-13 05:25:22.603183] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.704 [2024-12-13 05:25:22.616776] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.704 [2024-12-13 05:25:22.616794] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.704 [2024-12-13 05:25:22.630614] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.704 [2024-12-13 05:25:22.630632] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.704 [2024-12-13 05:25:22.644272] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.704 [2024-12-13 05:25:22.644289] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.704 [2024-12-13 05:25:22.657930] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.704 [2024-12-13 05:25:22.657948] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.704 [2024-12-13 05:25:22.671531] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.704 [2024-12-13 05:25:22.671548] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.704 [2024-12-13 05:25:22.684740] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.704 [2024-12-13 05:25:22.684758] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.704 [2024-12-13 05:25:22.698410] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.704 [2024-12-13 05:25:22.698427] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.704 [2024-12-13 05:25:22.712197] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.704 [2024-12-13 05:25:22.712214] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.963 [2024-12-13 05:25:22.725760] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.963 [2024-12-13 05:25:22.725777] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.963 [2024-12-13 05:25:22.739729] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.963 [2024-12-13 05:25:22.739747] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.963 [2024-12-13 05:25:22.753492] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.963 [2024-12-13 05:25:22.753510] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.963 [2024-12-13 05:25:22.767344] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.963 [2024-12-13 05:25:22.767361] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.963 [2024-12-13 05:25:22.781156] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.963 [2024-12-13 05:25:22.781173] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.963 [2024-12-13 05:25:22.795148] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.963 [2024-12-13 05:25:22.795165] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.963 [2024-12-13 05:25:22.808648] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.963 [2024-12-13 05:25:22.808665] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.963 [2024-12-13 05:25:22.822334] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.963 [2024-12-13 05:25:22.822351] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.963 [2024-12-13 05:25:22.836193] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.963 [2024-12-13 05:25:22.836211] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.963 [2024-12-13 05:25:22.850268] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.963 [2024-12-13 05:25:22.850285] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.963 [2024-12-13 05:25:22.863907] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.963 [2024-12-13 05:25:22.863924] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.963 [2024-12-13 05:25:22.877538] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.963 [2024-12-13 05:25:22.877556] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.963 [2024-12-13 05:25:22.891254] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.963 [2024-12-13 05:25:22.891273] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.963 [2024-12-13 05:25:22.904735] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.963 [2024-12-13 05:25:22.904753] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.963 [2024-12-13 05:25:22.918315] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.963 [2024-12-13 05:25:22.918334] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.963 [2024-12-13 05:25:22.931994] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.963 [2024-12-13 05:25:22.932013] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.963 [2024-12-13 05:25:22.945559] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.963 [2024-12-13 05:25:22.945577] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.963 [2024-12-13 05:25:22.959024] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.963 [2024-12-13 05:25:22.959042] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.963 [2024-12-13 05:25:22.972837] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.963 [2024-12-13 05:25:22.972855] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.223 [2024-12-13 05:25:22.986382] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.223 [2024-12-13 05:25:22.986400] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.223 [2024-12-13 05:25:22.999982] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.223 [2024-12-13 05:25:23.000002] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.223 [2024-12-13 05:25:23.013380] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.223 [2024-12-13 05:25:23.013399] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.223 [2024-12-13 05:25:23.026914] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.223 [2024-12-13 05:25:23.026933] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.223 [2024-12-13 05:25:23.040841] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.223 [2024-12-13 05:25:23.040860] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.223 [2024-12-13 05:25:23.054202] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.223 [2024-12-13 05:25:23.054221] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.223 [2024-12-13 05:25:23.067622] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.223 [2024-12-13 05:25:23.067639] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.223 [2024-12-13 05:25:23.081095] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.223 [2024-12-13 05:25:23.081113] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.223 [2024-12-13 05:25:23.094265] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.223 [2024-12-13 05:25:23.094284] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.223 [2024-12-13 05:25:23.108469] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.223 [2024-12-13 05:25:23.108488] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.223 [2024-12-13 05:25:23.118937] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.223 [2024-12-13 05:25:23.118956] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.223 [2024-12-13 05:25:23.133554] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.223 [2024-12-13 05:25:23.133571] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.223 [2024-12-13 05:25:23.147198] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.223 [2024-12-13 05:25:23.147216] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.223 [2024-12-13 05:25:23.160890] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.223 [2024-12-13 05:25:23.160907] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.223 [2024-12-13 05:25:23.174871] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.223 [2024-12-13 05:25:23.174889] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.223 [2024-12-13 05:25:23.188309] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.223 [2024-12-13 05:25:23.188327] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.223 [2024-12-13 05:25:23.201867] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.223 [2024-12-13 05:25:23.201885] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.223 [2024-12-13 05:25:23.215759] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.223 [2024-12-13 05:25:23.215778] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.223 [2024-12-13 05:25:23.225201] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.223 [2024-12-13 05:25:23.225219] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.482 [2024-12-13 05:25:23.239221] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.482 [2024-12-13 05:25:23.239242] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.482 [2024-12-13 05:25:23.252795] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.482 [2024-12-13 05:25:23.252814] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.482 [2024-12-13 05:25:23.265762] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.482 [2024-12-13 05:25:23.265779] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.482 [2024-12-13 05:25:23.280001] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.482 [2024-12-13 05:25:23.280018] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.482 [2024-12-13 05:25:23.293840] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.482 [2024-12-13 05:25:23.293858] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.482 [2024-12-13 05:25:23.307247] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.483 [2024-12-13 05:25:23.307265] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.483 [2024-12-13 05:25:23.320858] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.483 [2024-12-13 05:25:23.320876] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.483 [2024-12-13 05:25:23.334464] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.483 [2024-12-13 05:25:23.334482] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.483 [2024-12-13 05:25:23.347872] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.483 [2024-12-13 05:25:23.347890] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.483 [2024-12-13 05:25:23.361625] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.483 [2024-12-13 05:25:23.361642] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.483 [2024-12-13 05:25:23.375041] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.483 [2024-12-13 05:25:23.375059] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.483 [2024-12-13 05:25:23.388837] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.483 [2024-12-13 05:25:23.388854] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.483 [2024-12-13 05:25:23.402246] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.483 [2024-12-13 05:25:23.402263] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.483 [2024-12-13 05:25:23.416009] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.483 [2024-12-13 05:25:23.416032] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.483 17052.40 IOPS, 133.22 MiB/s [2024-12-13T04:25:23.498Z] [2024-12-13 05:25:23.428210] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.483 [2024-12-13 05:25:23.428228] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.483 00:09:23.483 Latency(us) 00:09:23.483 [2024-12-13T04:25:23.498Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:23.483 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:09:23.483 Nvme1n1 : 5.01 17055.35 133.24 0.00 0.00 7497.18 3682.50 17101.78 00:09:23.483 [2024-12-13T04:25:23.498Z] =================================================================================================================== 00:09:23.483 [2024-12-13T04:25:23.498Z] Total : 17055.35 133.24 0.00 0.00 7497.18 3682.50 17101.78 00:09:23.483 [2024-12-13 05:25:23.438171] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.483 [2024-12-13 
05:25:23.438186] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.483 [2024-12-13 05:25:23.450204] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.483 [2024-12-13 05:25:23.450217] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.483 [2024-12-13 05:25:23.462246] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.483 [2024-12-13 05:25:23.462270] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.483 [2024-12-13 05:25:23.474272] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.483 [2024-12-13 05:25:23.474286] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.483 [2024-12-13 05:25:23.486303] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.483 [2024-12-13 05:25:23.486316] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.742 [2024-12-13 05:25:23.498331] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.742 [2024-12-13 05:25:23.498343] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.742 [2024-12-13 05:25:23.510366] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.742 [2024-12-13 05:25:23.510381] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.742 [2024-12-13 05:25:23.522397] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.742 [2024-12-13 05:25:23.522409] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.742 [2024-12-13 05:25:23.534428] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.742 [2024-12-13 05:25:23.534441] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.742 [2024-12-13 05:25:23.546463] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.742 [2024-12-13 05:25:23.546472] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.742 [2024-12-13 05:25:23.558494] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.742 [2024-12-13 05:25:23.558505] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.742 [2024-12-13 05:25:23.570521] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.742 [2024-12-13 05:25:23.570531] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.742 [2024-12-13 05:25:23.582551] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.742 [2024-12-13 05:25:23.582560] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.742 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (176237) - No such process 00:09:23.742 05:25:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 176237 00:09:23.742 05:25:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:23.742 05:25:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.742 05:25:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
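The run of paired errors condensed above is the test driving a failure path on purpose: NSID 1 is already attached to nqn.2016-06.io.spdk:cnode1, and zcopy.sh keeps re-issuing nvmf_subsystem_add_ns for the same NSID while I/O is in flight, so subsystem.c rejects every attempt and the RPC layer reports it. A minimal sketch of the conflicting call outside the harness, assuming a running target with its default RPC socket (rpc_cmd in the trace is the harness wrapper around scripts/rpc.py; the malloc0 bdev is the one this run uses):

  # first attach claims NSID 1 on the subsystem
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  # repeating the call with the same -n 1 fails with the pair seen above:
  #   subsystem.c: Requested NSID 1 already in use
  #   nvmf_rpc.c:  Unable to add namespace
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1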
common/autotest_common.sh@10 -- # set +x 00:09:23.742 05:25:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.742 05:25:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:23.742 05:25:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.742 05:25:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:23.742 delay0 00:09:23.742 05:25:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.742 05:25:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:09:23.742 05:25:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.742 05:25:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:23.742 05:25:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.742 05:25:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:09:23.742 [2024-12-13 05:25:23.732384] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:30.311 Initializing NVMe Controllers 00:09:30.311 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:30.311 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:30.311 Initialization complete. Launching workers. 
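Restated outside the harness, the step traced above builds a delay bdev on top of malloc0, exposes it as NSID 1, and points the abort example at it; the 1000000 arguments are microseconds, giving each read and write a one-second average and tail latency so the queue stays full of I/O the abort tool can target. A sketch with the same arguments as the trace (paths relative to the spdk checkout):

  # delay bdev: -r/-t = avg/p99 read latency, -w/-n = avg/p99 write latency (us)
  scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
  # 5 s run, queue depth 64, 50/50 random read/write, abort over NVMe/TCP
  build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'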
00:09:30.311 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 1898 00:09:30.311 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 2185, failed to submit 33 00:09:30.311 success 1999, unsuccessful 186, failed 0 00:09:30.311 05:25:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:09:30.311 05:25:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:09:30.311 05:25:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:30.311 05:25:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:09:30.311 05:25:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:30.311 05:25:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:09:30.311 05:25:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:30.311 05:25:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:30.311 rmmod nvme_tcp 00:09:30.311 rmmod nvme_fabrics 00:09:30.311 rmmod nvme_keyring 00:09:30.311 05:25:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:30.311 05:25:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:09:30.311 05:25:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:09:30.311 05:25:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 174433 ']' 00:09:30.311 05:25:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 174433 00:09:30.311 05:25:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 174433 ']' 00:09:30.311 05:25:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 174433 00:09:30.311 05:25:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:09:30.311 05:25:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:30.311 05:25:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 174433 00:09:30.311 05:25:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:30.311 05:25:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:30.311 05:25:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 174433' 00:09:30.311 killing process with pid 174433 00:09:30.311 05:25:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 174433 00:09:30.311 05:25:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 174433 00:09:30.571 05:25:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:30.571 05:25:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:30.571 05:25:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:30.571 05:25:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:09:30.571 05:25:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:09:30.571 05:25:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:30.571 05:25:30 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:09:30.571 05:25:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:30.571 05:25:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:30.571 05:25:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:30.571 05:25:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:30.571 05:25:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:32.478 05:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:32.478 00:09:32.478 real 0m31.541s 00:09:32.478 user 0m43.542s 00:09:32.478 sys 0m9.894s 00:09:32.478 05:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:32.478 05:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:32.478 ************************************ 00:09:32.478 END TEST nvmf_zcopy 00:09:32.478 ************************************ 00:09:32.478 05:25:32 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:32.478 05:25:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:32.478 05:25:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:32.478 05:25:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:32.478 ************************************ 00:09:32.478 START TEST nvmf_nmic 00:09:32.478 ************************************ 00:09:32.478 05:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:32.738 * Looking for test storage... 
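The nvmftestfini teardown traced just above is easier to read in one place: it unloads the kernel NVMe-oF initiator modules, restores iptables minus the suite's own entries (any rule carrying the SPDK_NVMF marker the harness puts on what it adds), and flushes the test interface's addresses. A standalone sketch of those commands as they appear in the trace:

  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  # re-apply the saved ruleset without the SPDK_NVMF-tagged entries
  iptables-save | grep -v SPDK_NVMF | iptables-restore
  ip -4 addr flush cvl_0_1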
00:09:32.738 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:32.738 05:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:32.738 05:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:09:32.738 05:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:32.738 05:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:32.738 05:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:32.738 05:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:32.738 05:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:32.738 05:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:09:32.738 05:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:09:32.738 05:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:09:32.738 05:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:09:32.738 05:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:09:32.738 05:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:09:32.738 05:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:09:32.738 05:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:32.738 05:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:09:32.738 05:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:09:32.738 05:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:32.738 05:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:32.738 05:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:09:32.738 05:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:09:32.738 05:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:32.738 05:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:09:32.738 05:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:09:32.738 05:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:09:32.738 05:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:09:32.738 05:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:32.738 05:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:09:32.738 05:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:09:32.738 05:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:32.738 05:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:32.738 05:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:09:32.738 05:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:32.738 05:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:32.738 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:32.738 --rc genhtml_branch_coverage=1 00:09:32.738 --rc genhtml_function_coverage=1 00:09:32.738 --rc genhtml_legend=1 00:09:32.738 --rc geninfo_all_blocks=1 00:09:32.738 --rc geninfo_unexecuted_blocks=1 00:09:32.738 00:09:32.738 ' 00:09:32.738 05:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:32.738 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:32.738 --rc genhtml_branch_coverage=1 00:09:32.738 --rc genhtml_function_coverage=1 00:09:32.738 --rc genhtml_legend=1 00:09:32.738 --rc geninfo_all_blocks=1 00:09:32.738 --rc geninfo_unexecuted_blocks=1 00:09:32.738 00:09:32.738 ' 00:09:32.738 05:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:32.738 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:32.738 --rc genhtml_branch_coverage=1 00:09:32.738 --rc genhtml_function_coverage=1 00:09:32.738 --rc genhtml_legend=1 00:09:32.738 --rc geninfo_all_blocks=1 00:09:32.738 --rc geninfo_unexecuted_blocks=1 00:09:32.738 00:09:32.738 ' 00:09:32.738 05:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:32.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:32.739 --rc genhtml_branch_coverage=1 00:09:32.739 --rc genhtml_function_coverage=1 00:09:32.739 --rc genhtml_legend=1 00:09:32.739 --rc geninfo_all_blocks=1 00:09:32.739 --rc geninfo_unexecuted_blocks=1 00:09:32.739 00:09:32.739 ' 00:09:32.739 05:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:32.739 05:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:09:32.739 05:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
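The lcov probe traced above walks the harness's version comparison: lt 1.15 2 splits both versions on ., -, and : into arrays and compares them field by field, which is why 1.15 sorts below 2 even though a plain string compare would not agree. A condensed sketch of that idiom (structure inferred from the trace, not a verbatim copy of scripts/common.sh):

  lt() { # lt A B -> returns 0 when version A < version B
    local IFS=.-: i
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
      # missing fields compare as 0, so 1.15 vs 1.15.2 works
      ((${a[i]:-0} < ${b[i]:-0})) && return 0
      ((${a[i]:-0} > ${b[i]:-0})) && return 1
    done
    return 1   # equal versions are not "less than"
  }

  lt 1.15 2 && echo "lcov is older than 2.x"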
00:09:32.739 05:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:32.739 05:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:32.739 05:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:32.739 05:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:32.739 05:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:32.739 05:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:32.739 05:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:32.739 05:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:32.739 05:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:32.739 05:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:09:32.739 05:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:09:32.739 05:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:32.739 05:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:32.739 05:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:32.739 05:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:32.739 05:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:32.739 05:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:09:32.739 05:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:32.739 05:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:32.739 05:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:32.739 05:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:32.739 05:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:32.739 05:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:32.739 05:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:09:32.739 05:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:32.739 05:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:09:32.739 05:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:32.739 05:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:32.739 05:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:32.739 05:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:32.739 05:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:32.739 05:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:32.739 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:32.739 05:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:32.739 05:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:32.739 05:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:32.739 05:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:32.739 05:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:32.739 05:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:09:32.739 
05:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:32.739 05:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:32.739 05:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:32.739 05:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:32.739 05:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:32.739 05:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:32.739 05:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:32.739 05:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:32.739 05:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:32.739 05:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:32.739 05:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:09:32.739 05:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:39.316 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:39.316 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:09:39.316 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:39.316 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:39.316 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:39.316 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:39.316 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:39.316 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:09:39.316 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:39.316 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:09:39.316 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:09:39.316 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:09:39.316 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:09:39.316 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:09:39.316 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:09:39.316 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:39.316 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:39.316 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:39.316 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:39.316 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:39.316 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:39.316 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:39.316 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:39.316 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:39.316 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:39.316 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:39.316 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:39.316 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:39.316 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:39.316 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:39.316 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:39.316 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:39.316 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:39.316 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:39.316 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:39.316 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:39.316 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:39.316 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:39.316 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:39.316 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:39.316 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:39.316 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:39.316 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:39.316 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:39.316 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:39.316 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:39.316 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:39.316 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:39.316 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:39.316 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:39.316 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:39.316 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:39.316 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:39.316 05:25:38 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:39.316 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:39.316 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:39.316 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:39.316 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:39.316 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:39.316 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:39.316 Found net devices under 0000:af:00.0: cvl_0_0 00:09:39.316 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:39.316 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:39.316 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:39.316 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:39.316 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:39.316 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:39.316 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:39.316 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:39.316 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:39.316 Found net devices under 0000:af:00.1: cvl_0_1 00:09:39.316 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:39.317 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:39.317 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:09:39.317 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:39.317 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:39.317 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:39.317 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:39.317 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:39.317 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:39.317 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:39.317 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:39.317 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:39.317 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:39.317 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:39.317 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:39.317 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:39.317 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:39.317 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:39.317 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:39.317 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:39.317 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:39.317 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:39.317 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:39.317 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:39.317 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:39.317 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:39.317 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:39.317 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:39.317 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:39.317 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:39.317 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.376 ms 00:09:39.317 00:09:39.317 --- 10.0.0.2 ping statistics --- 00:09:39.317 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:39.317 rtt min/avg/max/mdev = 0.376/0.376/0.376/0.000 ms 00:09:39.317 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:39.317 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:39.317 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.196 ms 00:09:39.317 00:09:39.317 --- 10.0.0.1 ping statistics --- 00:09:39.317 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:39.317 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:09:39.317 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:39.317 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:09:39.317 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:39.317 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:39.317 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:39.317 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:39.317 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:39.317 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:39.317 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:39.317 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:09:39.317 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:39.317 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:39.317 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:39.317 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=181722 00:09:39.317 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 181722 00:09:39.317 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:39.317 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 181722 ']' 00:09:39.317 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:39.317 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:39.317 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:39.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:39.317 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:39.317 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:39.317 [2024-12-13 05:25:38.679306] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
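What the block above just did, condensed into a runnable sketch: discover the netdev behind each PCI port, move the target port into its own network namespace, address both sides, open the NVMe/TCP listener port, and verify reachability in both directions before launching the target inside the namespace. Interface names match this run (cvl_0_0/cvl_0_1 on the Intel E810 ports); paths are abbreviated, and root is required.

# netdev discovery: the kernel publishes each port's interface name
# under /sys/bus/pci/devices/<bdf>/net/, which is how cvl_0_0 was found
pci=0000:af:00.0
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
echo "Found net devices under $pci: ${pci_net_devs[@]##*/}"

NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"            # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1        # initiator side stays in the default netns
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up

# open the listener port; the comment tag lets teardown strip the rule later
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

ping -c 1 10.0.0.2                         # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1     # target -> initiator

# the target app then runs inside the namespace, as traced above
ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &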
00:09:39.317 [2024-12-13 05:25:38.679351] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:39.317 [2024-12-13 05:25:38.758628] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:39.317 [2024-12-13 05:25:38.783150] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:39.317 [2024-12-13 05:25:38.783187] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:39.317 [2024-12-13 05:25:38.783196] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:39.317 [2024-12-13 05:25:38.783201] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:39.317 [2024-12-13 05:25:38.783207] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:39.317 [2024-12-13 05:25:38.784669] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:09:39.317 [2024-12-13 05:25:38.784775] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:09:39.317 [2024-12-13 05:25:38.784858] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:39.317 [2024-12-13 05:25:38.784859] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:09:39.317 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:39.317 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:09:39.317 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:39.317 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:39.317 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:39.317 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:39.317 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:39.317 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.317 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:39.317 [2024-12-13 05:25:38.917629] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:39.317 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.317 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:39.317 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.317 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:39.317 Malloc0 00:09:39.317 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.317 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:39.317 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.317 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@10 -- # set +x 00:09:39.317 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.317 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:39.317 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.317 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:39.317 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.317 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:39.317 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.317 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:39.317 [2024-12-13 05:25:38.980736] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:39.317 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.317 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:09:39.317 test case1: single bdev can't be used in multiple subsystems 00:09:39.317 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:09:39.317 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.317 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:39.317 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.317 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:09:39.317 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.317 05:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:39.318 05:25:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.318 05:25:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:09:39.318 05:25:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:09:39.318 05:25:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.318 05:25:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:39.318 [2024-12-13 05:25:39.008682] bdev.c:8538:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:09:39.318 [2024-12-13 05:25:39.008701] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:09:39.318 [2024-12-13 05:25:39.008708] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.318 request: 00:09:39.318 { 00:09:39.318 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:39.318 "namespace": { 00:09:39.318 "bdev_name": "Malloc0", 00:09:39.318 "no_auto_visible": false, 
00:09:39.318 "hide_metadata": false 00:09:39.318 }, 00:09:39.318 "method": "nvmf_subsystem_add_ns", 00:09:39.318 "req_id": 1 00:09:39.318 } 00:09:39.318 Got JSON-RPC error response 00:09:39.318 response: 00:09:39.318 { 00:09:39.318 "code": -32602, 00:09:39.318 "message": "Invalid parameters" 00:09:39.318 } 00:09:39.318 05:25:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:39.318 05:25:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:09:39.318 05:25:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:09:39.318 05:25:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:09:39.318 Adding namespace failed - expected result. 00:09:39.318 05:25:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:09:39.318 test case2: host connect to nvmf target in multiple paths 00:09:39.318 05:25:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:09:39.318 05:25:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.318 05:25:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:39.318 [2024-12-13 05:25:39.020805] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:09:39.318 05:25:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.318 05:25:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:40.251 05:25:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:09:41.624 05:25:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:09:41.624 05:25:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:09:41.624 05:25:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:41.624 05:25:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:09:41.624 05:25:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:09:43.523 05:25:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:43.523 05:25:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:43.523 05:25:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:43.523 05:25:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:09:43.523 05:25:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:43.523 05:25:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:09:43.524 05:25:43 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:43.524 [global] 00:09:43.524 thread=1 00:09:43.524 invalidate=1 00:09:43.524 rw=write 00:09:43.524 time_based=1 00:09:43.524 runtime=1 00:09:43.524 ioengine=libaio 00:09:43.524 direct=1 00:09:43.524 bs=4096 00:09:43.524 iodepth=1 00:09:43.524 norandommap=0 00:09:43.524 numjobs=1 00:09:43.524 00:09:43.524 verify_dump=1 00:09:43.524 verify_backlog=512 00:09:43.524 verify_state_save=0 00:09:43.524 do_verify=1 00:09:43.524 verify=crc32c-intel 00:09:43.524 [job0] 00:09:43.524 filename=/dev/nvme0n1 00:09:43.524 Could not set queue depth (nvme0n1) 00:09:44.089 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:44.089 fio-3.35 00:09:44.089 Starting 1 thread 00:09:45.462 00:09:45.462 job0: (groupid=0, jobs=1): err= 0: pid=182776: Fri Dec 13 05:25:45 2024 00:09:45.462 read: IOPS=22, BW=89.8KiB/s (92.0kB/s)(92.0KiB/1024msec) 00:09:45.462 slat (nsec): min=9421, max=26254, avg=21451.61, stdev=2829.75 00:09:45.462 clat (usec): min=40788, max=41069, avg=40958.12, stdev=59.61 00:09:45.462 lat (usec): min=40797, max=41094, avg=40979.57, stdev=61.57 00:09:45.462 clat percentiles (usec): 00:09:45.462 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:09:45.462 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:45.462 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:45.462 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:45.462 | 99.99th=[41157] 00:09:45.462 write: IOPS=500, BW=2000KiB/s (2048kB/s)(2048KiB/1024msec); 0 zone resets 00:09:45.462 slat (nsec): min=9874, max=49907, avg=11066.83, stdev=2341.76 00:09:45.462 clat (usec): min=119, max=304, avg=143.67, stdev=20.04 00:09:45.462 lat (usec): min=129, max=354, avg=154.73, stdev=21.00 00:09:45.462 clat percentiles (usec): 00:09:45.462 | 1.00th=[ 122], 5.00th=[ 124], 10.00th=[ 125], 20.00th=[ 127], 00:09:45.462 | 30.00th=[ 129], 40.00th=[ 133], 50.00th=[ 137], 60.00th=[ 143], 00:09:45.462 | 70.00th=[ 159], 80.00th=[ 165], 90.00th=[ 172], 95.00th=[ 176], 00:09:45.462 | 99.00th=[ 184], 99.50th=[ 190], 99.90th=[ 306], 99.95th=[ 306], 00:09:45.462 | 99.99th=[ 306] 00:09:45.462 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:09:45.462 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:45.462 lat (usec) : 250=95.51%, 500=0.19% 00:09:45.462 lat (msec) : 50=4.30% 00:09:45.462 cpu : usr=0.59%, sys=0.68%, ctx=535, majf=0, minf=1 00:09:45.462 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:45.462 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:45.462 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:45.462 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:45.462 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:45.462 00:09:45.462 Run status group 0 (all jobs): 00:09:45.462 READ: bw=89.8KiB/s (92.0kB/s), 89.8KiB/s-89.8KiB/s (92.0kB/s-92.0kB/s), io=92.0KiB (94.2kB), run=1024-1024msec 00:09:45.462 WRITE: bw=2000KiB/s (2048kB/s), 2000KiB/s-2000KiB/s (2048kB/s-2048kB/s), io=2048KiB (2097kB), run=1024-1024msec 00:09:45.462 00:09:45.462 Disk stats (read/write): 00:09:45.462 nvme0n1: ios=69/512, merge=0/0, ticks=1120/65, in_queue=1185, util=95.49% 00:09:45.462 05:25:45 
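For reference, the job file that fio-wrapper generated for the run above, reassembled from the [global]/[job0] lines echoed into the log (values verbatim from this run):

# invoked as: scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v
[global]
thread=1
invalidate=1
rw=write
time_based=1
runtime=1
ioengine=libaio
direct=1
bs=4096
iodepth=1
norandommap=0
numjobs=1
verify_dump=1
verify_backlog=512
verify_state_save=0
do_verify=1
verify=crc32c-intel

[job0]
filename=/dev/nvme0n1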
nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:45.462 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:45.462 05:25:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:45.462 05:25:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:09:45.462 05:25:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:45.462 05:25:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:45.462 05:25:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:45.463 05:25:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:45.463 05:25:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:09:45.463 05:25:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:09:45.463 05:25:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:09:45.463 05:25:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:45.463 05:25:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:09:45.463 05:25:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:45.463 05:25:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:09:45.463 05:25:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:45.463 05:25:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:45.463 rmmod nvme_tcp 00:09:45.463 rmmod nvme_fabrics 00:09:45.463 rmmod nvme_keyring 00:09:45.463 05:25:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:45.463 05:25:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:09:45.463 05:25:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:09:45.463 05:25:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 181722 ']' 00:09:45.463 05:25:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 181722 00:09:45.463 05:25:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 181722 ']' 00:09:45.463 05:25:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 181722 00:09:45.463 05:25:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:09:45.463 05:25:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:45.463 05:25:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 181722 00:09:45.463 05:25:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:45.463 05:25:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:45.463 05:25:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 181722' 00:09:45.463 killing process with pid 181722 00:09:45.463 05:25:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 181722 00:09:45.463 05:25:45 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 181722 00:09:45.722 05:25:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:45.722 05:25:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:45.722 05:25:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:45.722 05:25:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:09:45.722 05:25:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:09:45.722 05:25:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:45.722 05:25:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:09:45.722 05:25:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:45.722 05:25:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:45.722 05:25:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:45.722 05:25:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:45.722 05:25:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:47.630 05:25:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:47.630 00:09:47.630 real 0m15.136s 00:09:47.630 user 0m34.613s 00:09:47.630 sys 0m5.444s 00:09:47.630 05:25:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:47.630 05:25:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:47.630 ************************************ 00:09:47.630 END TEST nvmf_nmic 00:09:47.630 ************************************ 00:09:47.889 05:25:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:47.889 05:25:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:47.889 05:25:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:47.889 05:25:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:47.889 ************************************ 00:09:47.889 START TEST nvmf_fio_target 00:09:47.889 ************************************ 00:09:47.889 05:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:47.889 * Looking for test storage... 
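Condensing the teardown just traced: disconnect the fabric controllers, unload the NVMe/TCP modules, kill the target, strip only the SPDK-tagged firewall rules, and undo the namespace plumbing. A sketch only; the real nvmftestfini retries module removal and handles errors, and the namespace removal happens inside _remove_spdk_ns with tracing disabled.

nvme disconnect -n nqn.2016-06.io.spdk:cnode1   # drops both paths: "disconnected 2 controller(s)"
sync
modprobe -v -r nvme-tcp                 # rmmod output shows nvme_fabrics and nvme_keyring go too
kill "$nvmfpid" && wait "$nvmfpid"      # killprocess on the nvmf_tgt pid
iptables-save | grep -v SPDK_NVMF | iptables-restore   # keep every rule we did not tag
ip netns delete cvl_0_0_ns_spdk         # assumed: what _remove_spdk_ns amounts to here
ip -4 addr flush cvl_0_1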
00:09:47.889 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:47.889 05:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:47.889 05:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:09:47.889 05:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:47.889 05:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:47.889 05:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:47.889 05:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:47.889 05:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:47.889 05:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:09:47.889 05:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:09:47.889 05:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:09:47.889 05:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:09:47.889 05:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:09:47.889 05:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:09:47.889 05:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:09:47.889 05:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:47.889 05:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:09:47.889 05:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:09:47.889 05:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:47.889 05:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:47.889 05:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:09:47.889 05:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:09:47.889 05:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:47.889 05:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:09:47.889 05:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:09:47.889 05:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:09:47.889 05:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:09:47.889 05:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:47.889 05:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:09:47.889 05:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:09:47.889 05:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:47.889 05:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:47.889 05:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:09:47.889 05:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:47.889 05:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:47.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.889 --rc genhtml_branch_coverage=1 00:09:47.889 --rc genhtml_function_coverage=1 00:09:47.889 --rc genhtml_legend=1 00:09:47.889 --rc geninfo_all_blocks=1 00:09:47.889 --rc geninfo_unexecuted_blocks=1 00:09:47.889 00:09:47.889 ' 00:09:47.889 05:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:47.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.889 --rc genhtml_branch_coverage=1 00:09:47.889 --rc genhtml_function_coverage=1 00:09:47.889 --rc genhtml_legend=1 00:09:47.889 --rc geninfo_all_blocks=1 00:09:47.889 --rc geninfo_unexecuted_blocks=1 00:09:47.889 00:09:47.889 ' 00:09:47.889 05:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:47.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.889 --rc genhtml_branch_coverage=1 00:09:47.889 --rc genhtml_function_coverage=1 00:09:47.889 --rc genhtml_legend=1 00:09:47.889 --rc geninfo_all_blocks=1 00:09:47.889 --rc geninfo_unexecuted_blocks=1 00:09:47.889 00:09:47.889 ' 00:09:47.889 05:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:47.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.889 --rc genhtml_branch_coverage=1 00:09:47.889 --rc genhtml_function_coverage=1 00:09:47.889 --rc genhtml_legend=1 00:09:47.889 --rc geninfo_all_blocks=1 00:09:47.889 --rc geninfo_unexecuted_blocks=1 00:09:47.889 00:09:47.889 ' 00:09:47.889 05:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:47.889 05:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # 
uname -s 00:09:47.889 05:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:47.889 05:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:47.889 05:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:47.889 05:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:47.889 05:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:47.889 05:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:47.889 05:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:47.889 05:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:47.889 05:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:47.889 05:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:48.149 05:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:09:48.149 05:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:09:48.149 05:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:48.149 05:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:48.149 05:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:48.149 05:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:48.149 05:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:48.149 05:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:09:48.149 05:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:48.149 05:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:48.149 05:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:48.149 05:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:48.149 05:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:48.149 05:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:48.149 05:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:09:48.149 05:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:48.149 05:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:09:48.149 05:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:48.149 05:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:48.149 05:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:48.149 05:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:48.149 05:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:48.149 05:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:48.149 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:48.149 05:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:48.149 05:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:48.149 05:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:48.149 05:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:48.149 05:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:48.149 05:25:47 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:48.149 05:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:09:48.149 05:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:48.149 05:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:48.149 05:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:48.149 05:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:48.149 05:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:48.149 05:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:48.149 05:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:48.149 05:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:48.150 05:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:48.150 05:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:48.150 05:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:09:48.150 05:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:54.726 05:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:54.726 05:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:09:54.726 05:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:54.726 05:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:54.726 05:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:54.726 05:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:54.726 05:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:54.726 05:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:09:54.726 05:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:54.726 05:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:09:54.726 05:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:09:54.726 05:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:09:54.726 05:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:09:54.726 05:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:09:54.726 05:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:09:54.726 05:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:54.726 05:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:54.726 05:25:53 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:54.726 05:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:54.726 05:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:54.726 05:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:54.726 05:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:54.726 05:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:54.726 05:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:54.726 05:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:54.726 05:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:54.726 05:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:54.726 05:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:54.726 05:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:54.726 05:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:54.726 05:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:54.726 05:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:54.726 05:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:54.726 05:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:54.726 05:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:54.726 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:54.726 05:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:54.726 05:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:54.726 05:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:54.726 05:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:54.726 05:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:54.726 05:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:54.726 05:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:54.726 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:54.726 05:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:54.726 05:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:54.726 05:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:54.726 05:25:53 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:54.726 05:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:54.726 05:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:54.726 05:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:54.726 05:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:54.726 05:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:54.726 05:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:54.726 05:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:54.726 05:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:54.726 05:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:54.726 05:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:54.726 05:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:54.726 05:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:54.726 Found net devices under 0000:af:00.0: cvl_0_0 00:09:54.726 05:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:54.726 05:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:54.726 05:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:54.726 05:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:54.727 05:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:54.727 05:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:54.727 05:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:54.727 05:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:54.727 05:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:54.727 Found net devices under 0000:af:00.1: cvl_0_1 00:09:54.727 05:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:54.727 05:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:54.727 05:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:09:54.727 05:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:54.727 05:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:54.727 05:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:54.727 05:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:54.727 05:25:53 
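The device-discovery pass that just completed walks the supported Intel E810/X722 and Mellanox PCI IDs, then resolves each matching function to its kernel net device by globbing sysfs. A minimal standalone sketch of that resolution step, using one PCI address from this run as the example:

  #!/usr/bin/env bash
  # Resolve a PCI network function to its kernel net device name(s),
  # mirroring the sysfs glob used by nvmf/common.sh above.
  pci=0000:af:00.0                           # example address from this log
  pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
  pci_net_devs=("${pci_net_devs[@]##*/}")    # keep only the device names
  echo "Found net devices under $pci: ${pci_net_devs[*]}"

With two ice ports found (cvl_0_0 and cvl_0_1), the log continues into the TCP network bring-up.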
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:54.727 05:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:54.727 05:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:54.727 05:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:54.727 05:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:54.727 05:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:54.727 05:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:54.727 05:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:54.727 05:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:54.727 05:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:54.727 05:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:54.727 05:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:54.727 05:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:54.727 05:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:54.727 05:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:54.727 05:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:54.727 05:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:54.727 05:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:54.727 05:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:54.727 05:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:54.727 05:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:54.727 05:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:54.727 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:54.727 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.381 ms 00:09:54.727 00:09:54.727 --- 10.0.0.2 ping statistics --- 00:09:54.727 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:54.727 rtt min/avg/max/mdev = 0.381/0.381/0.381/0.000 ms 00:09:54.727 05:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:54.727 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:54.727 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:09:54.727 00:09:54.727 --- 10.0.0.1 ping statistics --- 00:09:54.727 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:54.727 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:09:54.727 05:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:54.727 05:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:09:54.727 05:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:54.727 05:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:54.727 05:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:54.727 05:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:54.727 05:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:54.727 05:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:54.727 05:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:54.727 05:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:09:54.727 05:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:54.727 05:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:54.727 05:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:54.727 05:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=186472 00:09:54.727 05:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 186472 00:09:54.727 05:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:54.727 05:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 186472 ']' 00:09:54.727 05:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:54.727 05:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:54.727 05:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:54.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:54.727 05:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:54.727 05:25:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:54.727 [2024-12-13 05:25:53.929872] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
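Before the target application starts, nvmf_tcp_init splits the two-port NIC across network namespaces: the target port (cvl_0_0, 10.0.0.2) moves into cvl_0_0_ns_spdk while the initiator port (cvl_0_1, 10.0.0.1) stays in the root namespace, an iptables rule opens the NVMe/TCP port, and a ping in each direction confirms reachability (0.381 ms and 0.200 ms round trips above). A condensed sketch of that sequence, with the interface names from this machine:

  # Target side lives in its own namespace; initiator stays in the root ns.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # NVMe/TCP
  ping -c 1 10.0.0.2                                  # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator

nvmf_tgt is then launched inside the namespace (ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0xF), which is the DPDK/EAL initialization the next log records show.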
00:09:54.727 [2024-12-13 05:25:53.929919] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:54.727 [2024-12-13 05:25:54.006615] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:54.727 [2024-12-13 05:25:54.029857] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:54.727 [2024-12-13 05:25:54.029893] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:54.727 [2024-12-13 05:25:54.029900] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:54.727 [2024-12-13 05:25:54.029905] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:54.727 [2024-12-13 05:25:54.029911] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:54.727 [2024-12-13 05:25:54.031358] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:09:54.727 [2024-12-13 05:25:54.031483] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:09:54.728 [2024-12-13 05:25:54.031537] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:54.728 [2024-12-13 05:25:54.031538] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:09:54.728 05:25:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:54.728 05:25:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:09:54.728 05:25:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:54.728 05:25:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:54.728 05:25:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:54.728 05:25:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:54.728 05:25:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:54.728 [2024-12-13 05:25:54.320885] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:54.728 05:25:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:54.728 05:25:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:09:54.728 05:25:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:54.987 05:25:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:09:54.987 05:25:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:55.248 05:25:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:09:55.248 05:25:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:55.248 05:25:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:09:55.248 05:25:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:09:55.508 05:25:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:55.767 05:25:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:09:55.767 05:25:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:56.026 05:25:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:09:56.026 05:25:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:56.285 05:25:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:09:56.285 05:25:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:09:56.285 05:25:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:56.544 05:25:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:56.544 05:25:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:56.803 05:25:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:56.803 05:25:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:57.061 05:25:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:57.061 [2024-12-13 05:25:56.986310] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:57.061 05:25:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:09:57.320 05:25:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:09:57.579 05:25:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:58.956 05:25:58 
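The provisioning that fio.sh performs in the steps above reduces to one rpc.py sequence: create the TCP transport, create seven 64 MiB malloc bdevs, layer a RAID-0 and a concat bdev over five of them, expose Malloc0, Malloc1, raid0 and concat0 as namespaces of a single subsystem, add a TCP listener, and connect from the initiator side. A condensed sketch of the same commands (rpc.py talks to the target over its Unix socket, so it needs no netns prefix even though nvmf_tgt itself runs inside the namespace):

  rpc=./scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  for i in $(seq 7); do $rpc bdev_malloc_create 64 512; done    # Malloc0..Malloc6
  $rpc bdev_raid_create -n raid0   -z 64 -r 0      -b 'Malloc2 Malloc3'
  $rpc bdev_raid_create -n concat0 -z 64 -r concat -b 'Malloc4 Malloc5 Malloc6'
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  for b in Malloc0 Malloc1 raid0 concat0; do
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$b"
  done
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 \
      --hostid=80b56b8f-cbc7-e911-906e-0017a4403562

The four namespaces then surface on the initiator as /dev/nvme0n1 through /dev/nvme0n4, which is why the waitforserial step that follows expects a device count of 4.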
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:09:58.956 05:25:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:09:58.956 05:25:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:58.956 05:25:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:09:58.956 05:25:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:09:58.956 05:25:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:10:00.859 05:26:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:00.859 05:26:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:00.859 05:26:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:00.859 05:26:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:10:00.859 05:26:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:00.859 05:26:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:10:00.859 05:26:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:00.859 [global] 00:10:00.859 thread=1 00:10:00.859 invalidate=1 00:10:00.859 rw=write 00:10:00.859 time_based=1 00:10:00.859 runtime=1 00:10:00.859 ioengine=libaio 00:10:00.859 direct=1 00:10:00.859 bs=4096 00:10:00.859 iodepth=1 00:10:00.859 norandommap=0 00:10:00.859 numjobs=1 00:10:00.859 00:10:00.859 verify_dump=1 00:10:00.859 verify_backlog=512 00:10:00.859 verify_state_save=0 00:10:00.859 do_verify=1 00:10:00.859 verify=crc32c-intel 00:10:00.859 [job0] 00:10:00.859 filename=/dev/nvme0n1 00:10:00.859 [job1] 00:10:00.859 filename=/dev/nvme0n2 00:10:00.859 [job2] 00:10:00.859 filename=/dev/nvme0n3 00:10:00.859 [job3] 00:10:00.859 filename=/dev/nvme0n4 00:10:00.859 Could not set queue depth (nvme0n1) 00:10:00.859 Could not set queue depth (nvme0n2) 00:10:00.859 Could not set queue depth (nvme0n3) 00:10:00.859 Could not set queue depth (nvme0n4) 00:10:01.117 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:01.117 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:01.117 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:01.117 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:01.117 fio-3.35 00:10:01.117 Starting 4 threads 00:10:02.506 00:10:02.507 job0: (groupid=0, jobs=1): err= 0: pid=187794: Fri Dec 13 05:26:02 2024 00:10:02.507 read: IOPS=22, BW=88.8KiB/s (90.9kB/s)(92.0KiB/1036msec) 00:10:02.507 slat (nsec): min=9413, max=24097, avg=22571.61, stdev=2939.54 00:10:02.507 clat (usec): min=40859, max=42198, avg=41104.63, stdev=382.09 00:10:02.507 lat (usec): min=40882, max=42220, avg=41127.20, stdev=380.51 00:10:02.507 clat percentiles (usec): 00:10:02.507 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 
20.00th=[41157], 00:10:02.507 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:02.507 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:10:02.507 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:02.507 | 99.99th=[42206] 00:10:02.507 write: IOPS=494, BW=1977KiB/s (2024kB/s)(2048KiB/1036msec); 0 zone resets 00:10:02.507 slat (nsec): min=10010, max=51229, avg=11275.86, stdev=2350.87 00:10:02.507 clat (usec): min=120, max=320, avg=162.08, stdev=21.53 00:10:02.507 lat (usec): min=130, max=331, avg=173.36, stdev=22.31 00:10:02.507 clat percentiles (usec): 00:10:02.507 | 1.00th=[ 127], 5.00th=[ 133], 10.00th=[ 139], 20.00th=[ 145], 00:10:02.507 | 30.00th=[ 151], 40.00th=[ 155], 50.00th=[ 159], 60.00th=[ 167], 00:10:02.507 | 70.00th=[ 174], 80.00th=[ 178], 90.00th=[ 186], 95.00th=[ 194], 00:10:02.507 | 99.00th=[ 227], 99.50th=[ 277], 99.90th=[ 322], 99.95th=[ 322], 00:10:02.507 | 99.99th=[ 322] 00:10:02.507 bw ( KiB/s): min= 4096, max= 4096, per=26.05%, avg=4096.00, stdev= 0.00, samples=1 00:10:02.507 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:02.507 lat (usec) : 250=95.14%, 500=0.56% 00:10:02.507 lat (msec) : 50=4.30% 00:10:02.507 cpu : usr=0.48%, sys=0.39%, ctx=537, majf=0, minf=1 00:10:02.507 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:02.507 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:02.507 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:02.507 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:02.507 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:02.507 job1: (groupid=0, jobs=1): err= 0: pid=187795: Fri Dec 13 05:26:02 2024 00:10:02.507 read: IOPS=22, BW=88.7KiB/s (90.8kB/s)(92.0KiB/1037msec) 00:10:02.507 slat (nsec): min=10045, max=24693, avg=21426.04, stdev=2590.87 00:10:02.507 clat (usec): min=40883, max=41222, avg=40976.59, stdev=67.87 00:10:02.507 lat (usec): min=40908, max=41232, avg=40998.02, stdev=65.68 00:10:02.507 clat percentiles (usec): 00:10:02.507 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:10:02.507 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:02.507 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:02.507 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:02.507 | 99.99th=[41157] 00:10:02.507 write: IOPS=493, BW=1975KiB/s (2022kB/s)(2048KiB/1037msec); 0 zone resets 00:10:02.507 slat (nsec): min=10599, max=39387, avg=13354.44, stdev=2341.74 00:10:02.507 clat (usec): min=134, max=278, avg=167.01, stdev=16.68 00:10:02.507 lat (usec): min=145, max=318, avg=180.37, stdev=17.53 00:10:02.507 clat percentiles (usec): 00:10:02.507 | 1.00th=[ 141], 5.00th=[ 147], 10.00th=[ 151], 20.00th=[ 153], 00:10:02.507 | 30.00th=[ 157], 40.00th=[ 161], 50.00th=[ 165], 60.00th=[ 169], 00:10:02.507 | 70.00th=[ 174], 80.00th=[ 180], 90.00th=[ 186], 95.00th=[ 194], 00:10:02.507 | 99.00th=[ 231], 99.50th=[ 235], 99.90th=[ 281], 99.95th=[ 281], 00:10:02.507 | 99.99th=[ 281] 00:10:02.507 bw ( KiB/s): min= 4096, max= 4096, per=26.05%, avg=4096.00, stdev= 0.00, samples=1 00:10:02.507 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:02.507 lat (usec) : 250=95.51%, 500=0.19% 00:10:02.507 lat (msec) : 50=4.30% 00:10:02.507 cpu : usr=0.48%, sys=0.87%, ctx=536, majf=0, minf=1 00:10:02.507 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 
32=0.0%, >=64=0.0% 00:10:02.507 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:02.507 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:02.507 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:02.507 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:02.507 job2: (groupid=0, jobs=1): err= 0: pid=187796: Fri Dec 13 05:26:02 2024 00:10:02.507 read: IOPS=2516, BW=9.83MiB/s (10.3MB/s)(9.84MiB/1001msec) 00:10:02.507 slat (nsec): min=6671, max=21179, avg=7489.27, stdev=664.24 00:10:02.507 clat (usec): min=168, max=330, avg=222.07, stdev=25.64 00:10:02.507 lat (usec): min=175, max=338, avg=229.56, stdev=25.67 00:10:02.507 clat percentiles (usec): 00:10:02.507 | 1.00th=[ 178], 5.00th=[ 186], 10.00th=[ 192], 20.00th=[ 198], 00:10:02.507 | 30.00th=[ 204], 40.00th=[ 208], 50.00th=[ 217], 60.00th=[ 235], 00:10:02.507 | 70.00th=[ 243], 80.00th=[ 249], 90.00th=[ 255], 95.00th=[ 262], 00:10:02.507 | 99.00th=[ 269], 99.50th=[ 273], 99.90th=[ 314], 99.95th=[ 318], 00:10:02.507 | 99.99th=[ 330] 00:10:02.507 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:10:02.507 slat (nsec): min=9316, max=43836, avg=10820.12, stdev=1573.49 00:10:02.507 clat (usec): min=107, max=374, avg=149.43, stdev=28.71 00:10:02.507 lat (usec): min=120, max=385, avg=160.25, stdev=29.28 00:10:02.507 clat percentiles (usec): 00:10:02.507 | 1.00th=[ 117], 5.00th=[ 122], 10.00th=[ 124], 20.00th=[ 128], 00:10:02.507 | 30.00th=[ 131], 40.00th=[ 135], 50.00th=[ 137], 60.00th=[ 143], 00:10:02.507 | 70.00th=[ 163], 80.00th=[ 178], 90.00th=[ 190], 95.00th=[ 200], 00:10:02.507 | 99.00th=[ 253], 99.50th=[ 269], 99.90th=[ 285], 99.95th=[ 322], 00:10:02.507 | 99.99th=[ 375] 00:10:02.507 bw ( KiB/s): min=10776, max=10776, per=68.53%, avg=10776.00, stdev= 0.00, samples=1 00:10:02.507 iops : min= 2694, max= 2694, avg=2694.00, stdev= 0.00, samples=1 00:10:02.507 lat (usec) : 250=90.55%, 500=9.45% 00:10:02.507 cpu : usr=1.90%, sys=5.50%, ctx=5079, majf=0, minf=1 00:10:02.507 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:02.507 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:02.507 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:02.507 issued rwts: total=2519,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:02.507 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:02.507 job3: (groupid=0, jobs=1): err= 0: pid=187797: Fri Dec 13 05:26:02 2024 00:10:02.507 read: IOPS=142, BW=572KiB/s (586kB/s)(596KiB/1042msec) 00:10:02.507 slat (nsec): min=6829, max=25485, avg=9962.70, stdev=5483.41 00:10:02.507 clat (usec): min=192, max=42026, avg=6291.37, stdev=14581.52 00:10:02.507 lat (usec): min=202, max=42048, avg=6301.34, stdev=14586.48 00:10:02.507 clat percentiles (usec): 00:10:02.507 | 1.00th=[ 200], 5.00th=[ 210], 10.00th=[ 221], 20.00th=[ 233], 00:10:02.507 | 30.00th=[ 239], 40.00th=[ 243], 50.00th=[ 247], 60.00th=[ 251], 00:10:02.507 | 70.00th=[ 258], 80.00th=[ 269], 90.00th=[41157], 95.00th=[41157], 00:10:02.507 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:02.507 | 99.99th=[42206] 00:10:02.507 write: IOPS=491, BW=1965KiB/s (2013kB/s)(2048KiB/1042msec); 0 zone resets 00:10:02.507 slat (nsec): min=10250, max=45069, avg=12416.97, stdev=4056.50 00:10:02.507 clat (usec): min=116, max=2560, avg=183.67, stdev=108.15 00:10:02.507 lat (usec): min=127, max=2570, avg=196.08, stdev=108.40 00:10:02.507 clat percentiles 
(usec): 00:10:02.507 | 1.00th=[ 133], 5.00th=[ 141], 10.00th=[ 149], 20.00th=[ 163], 00:10:02.507 | 30.00th=[ 169], 40.00th=[ 174], 50.00th=[ 178], 60.00th=[ 184], 00:10:02.507 | 70.00th=[ 188], 80.00th=[ 194], 90.00th=[ 204], 95.00th=[ 215], 00:10:02.507 | 99.00th=[ 243], 99.50th=[ 318], 99.90th=[ 2573], 99.95th=[ 2573], 00:10:02.507 | 99.99th=[ 2573] 00:10:02.507 bw ( KiB/s): min= 4096, max= 4096, per=26.05%, avg=4096.00, stdev= 0.00, samples=1 00:10:02.507 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:02.507 lat (usec) : 250=90.17%, 500=6.35% 00:10:02.507 lat (msec) : 4=0.15%, 50=3.33% 00:10:02.507 cpu : usr=0.67%, sys=0.38%, ctx=661, majf=0, minf=1 00:10:02.507 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:02.507 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:02.507 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:02.507 issued rwts: total=149,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:02.507 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:02.507 00:10:02.507 Run status group 0 (all jobs): 00:10:02.507 READ: bw=10.2MiB/s (10.7MB/s), 88.7KiB/s-9.83MiB/s (90.8kB/s-10.3MB/s), io=10.6MiB (11.1MB), run=1001-1042msec 00:10:02.507 WRITE: bw=15.4MiB/s (16.1MB/s), 1965KiB/s-9.99MiB/s (2013kB/s-10.5MB/s), io=16.0MiB (16.8MB), run=1001-1042msec 00:10:02.507 00:10:02.507 Disk stats (read/write): 00:10:02.507 nvme0n1: ios=44/512, merge=0/0, ticks=1726/84, in_queue=1810, util=98.00% 00:10:02.507 nvme0n2: ios=42/512, merge=0/0, ticks=1722/82, in_queue=1804, util=98.37% 00:10:02.507 nvme0n3: ios=2071/2218, merge=0/0, ticks=575/329, in_queue=904, util=91.02% 00:10:02.507 nvme0n4: ios=144/512, merge=0/0, ticks=730/88, in_queue=818, util=89.70% 00:10:02.507 05:26:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:02.507 [global] 00:10:02.507 thread=1 00:10:02.507 invalidate=1 00:10:02.507 rw=randwrite 00:10:02.507 time_based=1 00:10:02.507 runtime=1 00:10:02.507 ioengine=libaio 00:10:02.507 direct=1 00:10:02.507 bs=4096 00:10:02.507 iodepth=1 00:10:02.507 norandommap=0 00:10:02.507 numjobs=1 00:10:02.507 00:10:02.507 verify_dump=1 00:10:02.507 verify_backlog=512 00:10:02.507 verify_state_save=0 00:10:02.507 do_verify=1 00:10:02.507 verify=crc32c-intel 00:10:02.507 [job0] 00:10:02.507 filename=/dev/nvme0n1 00:10:02.507 [job1] 00:10:02.507 filename=/dev/nvme0n2 00:10:02.507 [job2] 00:10:02.507 filename=/dev/nvme0n3 00:10:02.507 [job3] 00:10:02.507 filename=/dev/nvme0n4 00:10:02.507 Could not set queue depth (nvme0n1) 00:10:02.507 Could not set queue depth (nvme0n2) 00:10:02.507 Could not set queue depth (nvme0n3) 00:10:02.507 Could not set queue depth (nvme0n4) 00:10:02.780 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:02.780 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:02.780 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:02.780 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:02.780 fio-3.35 00:10:02.780 Starting 4 threads 00:10:04.159 00:10:04.159 job0: (groupid=0, jobs=1): err= 0: pid=188166: Fri Dec 13 05:26:03 2024 00:10:04.159 read: IOPS=2045, BW=8184KiB/s 
(8380kB/s)(8192KiB/1001msec) 00:10:04.159 slat (nsec): min=6431, max=33856, avg=7681.06, stdev=1373.14 00:10:04.159 clat (usec): min=168, max=40964, avg=275.63, stdev=1472.47 00:10:04.159 lat (usec): min=176, max=40986, avg=283.31, stdev=1472.88 00:10:04.159 clat percentiles (usec): 00:10:04.159 | 1.00th=[ 182], 5.00th=[ 192], 10.00th=[ 196], 20.00th=[ 200], 00:10:04.159 | 30.00th=[ 206], 40.00th=[ 210], 50.00th=[ 215], 60.00th=[ 219], 00:10:04.159 | 70.00th=[ 225], 80.00th=[ 233], 90.00th=[ 253], 95.00th=[ 277], 00:10:04.159 | 99.00th=[ 297], 99.50th=[ 310], 99.90th=[34341], 99.95th=[40633], 00:10:04.159 | 99.99th=[41157] 00:10:04.159 write: IOPS=2382, BW=9530KiB/s (9759kB/s)(9540KiB/1001msec); 0 zone resets 00:10:04.159 slat (nsec): min=9478, max=53617, avg=10917.61, stdev=1565.59 00:10:04.159 clat (usec): min=113, max=355, avg=160.79, stdev=28.11 00:10:04.159 lat (usec): min=123, max=390, avg=171.71, stdev=28.45 00:10:04.159 clat percentiles (usec): 00:10:04.159 | 1.00th=[ 121], 5.00th=[ 128], 10.00th=[ 133], 20.00th=[ 137], 00:10:04.159 | 30.00th=[ 143], 40.00th=[ 147], 50.00th=[ 153], 60.00th=[ 161], 00:10:04.159 | 70.00th=[ 176], 80.00th=[ 188], 90.00th=[ 198], 95.00th=[ 204], 00:10:04.159 | 99.00th=[ 255], 99.50th=[ 265], 99.90th=[ 285], 99.95th=[ 338], 00:10:04.159 | 99.99th=[ 355] 00:10:04.159 bw ( KiB/s): min= 8192, max= 8192, per=34.52%, avg=8192.00, stdev= 0.00, samples=1 00:10:04.159 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:04.159 lat (usec) : 250=94.27%, 500=5.66% 00:10:04.159 lat (msec) : 50=0.07% 00:10:04.159 cpu : usr=2.80%, sys=3.80%, ctx=4435, majf=0, minf=1 00:10:04.159 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:04.159 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:04.159 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:04.159 issued rwts: total=2048,2385,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:04.159 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:04.159 job1: (groupid=0, jobs=1): err= 0: pid=188167: Fri Dec 13 05:26:03 2024 00:10:04.159 read: IOPS=2424, BW=9698KiB/s (9931kB/s)(9708KiB/1001msec) 00:10:04.159 slat (nsec): min=6581, max=26587, avg=7314.43, stdev=688.13 00:10:04.159 clat (usec): min=160, max=514, avg=223.00, stdev=50.61 00:10:04.159 lat (usec): min=167, max=521, avg=230.31, stdev=50.60 00:10:04.159 clat percentiles (usec): 00:10:04.159 | 1.00th=[ 169], 5.00th=[ 178], 10.00th=[ 182], 20.00th=[ 188], 00:10:04.159 | 30.00th=[ 194], 40.00th=[ 200], 50.00th=[ 206], 60.00th=[ 223], 00:10:04.159 | 70.00th=[ 241], 80.00th=[ 249], 90.00th=[ 260], 95.00th=[ 289], 00:10:04.159 | 99.00th=[ 441], 99.50th=[ 490], 99.90th=[ 506], 99.95th=[ 510], 00:10:04.159 | 99.99th=[ 515] 00:10:04.159 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:10:04.159 slat (usec): min=9, max=12038, avg=15.07, stdev=237.74 00:10:04.159 clat (usec): min=105, max=431, avg=152.95, stdev=28.43 00:10:04.159 lat (usec): min=115, max=12460, avg=168.02, stdev=244.67 00:10:04.159 clat percentiles (usec): 00:10:04.159 | 1.00th=[ 118], 5.00th=[ 126], 10.00th=[ 129], 20.00th=[ 135], 00:10:04.159 | 30.00th=[ 139], 40.00th=[ 143], 50.00th=[ 145], 60.00th=[ 149], 00:10:04.159 | 70.00th=[ 157], 80.00th=[ 172], 90.00th=[ 190], 95.00th=[ 202], 00:10:04.159 | 99.00th=[ 255], 99.50th=[ 310], 99.90th=[ 408], 99.95th=[ 420], 00:10:04.159 | 99.99th=[ 433] 00:10:04.159 bw ( KiB/s): min=12288, max=12288, per=51.77%, avg=12288.00, stdev= 0.00, 
samples=1 00:10:04.159 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:04.159 lat (usec) : 250=89.99%, 500=9.89%, 750=0.12% 00:10:04.159 cpu : usr=2.30%, sys=4.80%, ctx=4989, majf=0, minf=1 00:10:04.159 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:04.159 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:04.159 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:04.159 issued rwts: total=2427,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:04.159 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:04.159 job2: (groupid=0, jobs=1): err= 0: pid=188168: Fri Dec 13 05:26:03 2024 00:10:04.159 read: IOPS=32, BW=131KiB/s (134kB/s)(132KiB/1006msec) 00:10:04.159 slat (nsec): min=7367, max=24992, avg=17717.33, stdev=6684.38 00:10:04.159 clat (usec): min=240, max=41976, avg=27558.73, stdev=19578.32 00:10:04.159 lat (usec): min=248, max=41998, avg=27576.45, stdev=19583.98 00:10:04.159 clat percentiles (usec): 00:10:04.159 | 1.00th=[ 241], 5.00th=[ 255], 10.00th=[ 260], 20.00th=[ 293], 00:10:04.159 | 30.00th=[ 375], 40.00th=[40633], 50.00th=[41157], 60.00th=[41157], 00:10:04.159 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:10:04.159 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:04.159 | 99.99th=[42206] 00:10:04.159 write: IOPS=508, BW=2036KiB/s (2085kB/s)(2048KiB/1006msec); 0 zone resets 00:10:04.159 slat (nsec): min=9457, max=45354, avg=11840.76, stdev=2079.76 00:10:04.159 clat (usec): min=140, max=304, avg=171.60, stdev=14.20 00:10:04.159 lat (usec): min=151, max=349, avg=183.44, stdev=14.86 00:10:04.159 clat percentiles (usec): 00:10:04.159 | 1.00th=[ 151], 5.00th=[ 155], 10.00th=[ 157], 20.00th=[ 161], 00:10:04.159 | 30.00th=[ 165], 40.00th=[ 167], 50.00th=[ 172], 60.00th=[ 174], 00:10:04.159 | 70.00th=[ 178], 80.00th=[ 182], 90.00th=[ 188], 95.00th=[ 194], 00:10:04.159 | 99.00th=[ 210], 99.50th=[ 215], 99.90th=[ 306], 99.95th=[ 306], 00:10:04.159 | 99.99th=[ 306] 00:10:04.159 bw ( KiB/s): min= 4096, max= 4096, per=17.26%, avg=4096.00, stdev= 0.00, samples=1 00:10:04.159 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:04.159 lat (usec) : 250=93.76%, 500=2.20% 00:10:04.159 lat (msec) : 50=4.04% 00:10:04.159 cpu : usr=0.40%, sys=0.60%, ctx=546, majf=0, minf=1 00:10:04.159 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:04.159 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:04.159 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:04.159 issued rwts: total=33,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:04.159 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:04.159 job3: (groupid=0, jobs=1): err= 0: pid=188169: Fri Dec 13 05:26:03 2024 00:10:04.159 read: IOPS=21, BW=87.5KiB/s (89.6kB/s)(88.0KiB/1006msec) 00:10:04.159 slat (nsec): min=10329, max=27844, avg=19668.45, stdev=4925.58 00:10:04.159 clat (usec): min=40774, max=41138, avg=40963.74, stdev=76.92 00:10:04.159 lat (usec): min=40784, max=41157, avg=40983.41, stdev=76.19 00:10:04.159 clat percentiles (usec): 00:10:04.159 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:10:04.159 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:04.159 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:04.159 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:04.159 | 
99.99th=[41157] 00:10:04.159 write: IOPS=508, BW=2036KiB/s (2085kB/s)(2048KiB/1006msec); 0 zone resets 00:10:04.159 slat (nsec): min=11121, max=52933, avg=13615.36, stdev=3278.65 00:10:04.159 clat (usec): min=140, max=458, avg=186.27, stdev=30.40 00:10:04.159 lat (usec): min=151, max=474, avg=199.88, stdev=31.09 00:10:04.159 clat percentiles (usec): 00:10:04.159 | 1.00th=[ 149], 5.00th=[ 155], 10.00th=[ 163], 20.00th=[ 169], 00:10:04.159 | 30.00th=[ 174], 40.00th=[ 178], 50.00th=[ 182], 60.00th=[ 186], 00:10:04.159 | 70.00th=[ 192], 80.00th=[ 198], 90.00th=[ 210], 95.00th=[ 225], 00:10:04.159 | 99.00th=[ 318], 99.50th=[ 400], 99.90th=[ 457], 99.95th=[ 457], 00:10:04.159 | 99.99th=[ 457] 00:10:04.159 bw ( KiB/s): min= 4096, max= 4096, per=17.26%, avg=4096.00, stdev= 0.00, samples=1 00:10:04.159 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:04.159 lat (usec) : 250=94.01%, 500=1.87% 00:10:04.159 lat (msec) : 50=4.12% 00:10:04.159 cpu : usr=0.30%, sys=1.19%, ctx=535, majf=0, minf=1 00:10:04.159 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:04.159 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:04.159 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:04.159 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:04.159 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:04.159 00:10:04.159 Run status group 0 (all jobs): 00:10:04.159 READ: bw=17.6MiB/s (18.4MB/s), 87.5KiB/s-9698KiB/s (89.6kB/s-9931kB/s), io=17.7MiB (18.6MB), run=1001-1006msec 00:10:04.159 WRITE: bw=23.2MiB/s (24.3MB/s), 2036KiB/s-9.99MiB/s (2085kB/s-10.5MB/s), io=23.3MiB (24.4MB), run=1001-1006msec 00:10:04.159 00:10:04.159 Disk stats (read/write): 00:10:04.159 nvme0n1: ios=1674/2048, merge=0/0, ticks=1337/322, in_queue=1659, util=86.07% 00:10:04.159 nvme0n2: ios=2096/2253, merge=0/0, ticks=698/352, in_queue=1050, util=91.06% 00:10:04.159 nvme0n3: ios=86/512, merge=0/0, ticks=819/86, in_queue=905, util=94.81% 00:10:04.159 nvme0n4: ios=44/512, merge=0/0, ticks=1642/86, in_queue=1728, util=94.46% 00:10:04.159 05:26:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:04.159 [global] 00:10:04.159 thread=1 00:10:04.159 invalidate=1 00:10:04.159 rw=write 00:10:04.159 time_based=1 00:10:04.159 runtime=1 00:10:04.159 ioengine=libaio 00:10:04.159 direct=1 00:10:04.159 bs=4096 00:10:04.159 iodepth=128 00:10:04.159 norandommap=0 00:10:04.159 numjobs=1 00:10:04.159 00:10:04.159 verify_dump=1 00:10:04.159 verify_backlog=512 00:10:04.159 verify_state_save=0 00:10:04.159 do_verify=1 00:10:04.159 verify=crc32c-intel 00:10:04.159 [job0] 00:10:04.159 filename=/dev/nvme0n1 00:10:04.159 [job1] 00:10:04.159 filename=/dev/nvme0n2 00:10:04.159 [job2] 00:10:04.159 filename=/dev/nvme0n3 00:10:04.159 [job3] 00:10:04.159 filename=/dev/nvme0n4 00:10:04.159 Could not set queue depth (nvme0n1) 00:10:04.159 Could not set queue depth (nvme0n2) 00:10:04.159 Could not set queue depth (nvme0n3) 00:10:04.159 Could not set queue depth (nvme0n4) 00:10:04.159 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:04.159 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:04.160 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 
00:10:04.160 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:04.160 fio-3.35 00:10:04.160 Starting 4 threads 00:10:05.539 00:10:05.539 job0: (groupid=0, jobs=1): err= 0: pid=188549: Fri Dec 13 05:26:05 2024 00:10:05.539 read: IOPS=4430, BW=17.3MiB/s (18.1MB/s)(17.4MiB/1008msec) 00:10:05.539 slat (nsec): min=1247, max=12561k, avg=106798.71, stdev=727608.83 00:10:05.539 clat (usec): min=3252, max=38908, avg=12879.48, stdev=4508.59 00:10:05.539 lat (usec): min=5379, max=38917, avg=12986.28, stdev=4566.86 00:10:05.539 clat percentiles (usec): 00:10:05.539 | 1.00th=[ 6915], 5.00th=[ 8291], 10.00th=[ 9241], 20.00th=[10028], 00:10:05.539 | 30.00th=[10683], 40.00th=[11076], 50.00th=[11469], 60.00th=[12649], 00:10:05.539 | 70.00th=[13435], 80.00th=[14877], 90.00th=[17433], 95.00th=[22414], 00:10:05.539 | 99.00th=[29492], 99.50th=[33817], 99.90th=[39060], 99.95th=[39060], 00:10:05.539 | 99.99th=[39060] 00:10:05.539 write: IOPS=4571, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1008msec); 0 zone resets 00:10:05.539 slat (usec): min=2, max=10155, avg=105.95, stdev=515.50 00:10:05.539 clat (usec): min=1448, max=42131, avg=15255.50, stdev=7447.56 00:10:05.539 lat (usec): min=1467, max=42138, avg=15361.45, stdev=7495.96 00:10:05.539 clat percentiles (usec): 00:10:05.539 | 1.00th=[ 4359], 5.00th=[ 6390], 10.00th=[ 8291], 20.00th=[ 9372], 00:10:05.539 | 30.00th=[ 9896], 40.00th=[10159], 50.00th=[12387], 60.00th=[15533], 00:10:05.539 | 70.00th=[20317], 80.00th=[21890], 90.00th=[24773], 95.00th=[28967], 00:10:05.539 | 99.00th=[37487], 99.50th=[40633], 99.90th=[42206], 99.95th=[42206], 00:10:05.539 | 99.99th=[42206] 00:10:05.539 bw ( KiB/s): min=16520, max=20344, per=24.93%, avg=18432.00, stdev=2703.98, samples=2 00:10:05.539 iops : min= 4130, max= 5086, avg=4608.00, stdev=675.99, samples=2 00:10:05.539 lat (msec) : 2=0.02%, 4=0.23%, 10=26.74%, 20=53.98%, 50=19.03% 00:10:05.539 cpu : usr=3.08%, sys=5.26%, ctx=570, majf=0, minf=2 00:10:05.539 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:10:05.539 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:05.539 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:05.539 issued rwts: total=4466,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:05.539 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:05.539 job1: (groupid=0, jobs=1): err= 0: pid=188576: Fri Dec 13 05:26:05 2024 00:10:05.539 read: IOPS=6064, BW=23.7MiB/s (24.8MB/s)(23.8MiB/1004msec) 00:10:05.539 slat (nsec): min=1349, max=10040k, avg=88170.09, stdev=650979.63 00:10:05.539 clat (usec): min=2937, max=20656, avg=11135.78, stdev=2699.65 00:10:05.539 lat (usec): min=4024, max=25350, avg=11223.95, stdev=2751.87 00:10:05.539 clat percentiles (usec): 00:10:05.539 | 1.00th=[ 4555], 5.00th=[ 7570], 10.00th=[ 8717], 20.00th=[ 9503], 00:10:05.539 | 30.00th=[10028], 40.00th=[10159], 50.00th=[10290], 60.00th=[10814], 00:10:05.539 | 70.00th=[11338], 80.00th=[12649], 90.00th=[15401], 95.00th=[17171], 00:10:05.539 | 99.00th=[18744], 99.50th=[19006], 99.90th=[20055], 99.95th=[20579], 00:10:05.539 | 99.99th=[20579] 00:10:05.539 write: IOPS=6119, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1004msec); 0 zone resets 00:10:05.539 slat (usec): min=2, max=8594, avg=69.34, stdev=421.14 00:10:05.539 clat (usec): min=1490, max=20654, avg=9672.54, stdev=1938.79 00:10:05.539 lat (usec): min=1504, max=20658, avg=9741.88, stdev=1981.90 00:10:05.539 clat percentiles (usec): 00:10:05.539 | 1.00th=[ 3589], 
5.00th=[ 5473], 10.00th=[ 6915], 20.00th=[ 8717], 00:10:05.539 | 30.00th=[ 9634], 40.00th=[ 9896], 50.00th=[10159], 60.00th=[10290], 00:10:05.539 | 70.00th=[10421], 80.00th=[10683], 90.00th=[10814], 95.00th=[11207], 00:10:05.539 | 99.00th=[15533], 99.50th=[16909], 99.90th=[19268], 99.95th=[19268], 00:10:05.540 | 99.99th=[20579] 00:10:05.540 bw ( KiB/s): min=24576, max=24576, per=33.24%, avg=24576.00, stdev= 0.00, samples=2 00:10:05.540 iops : min= 6144, max= 6144, avg=6144.00, stdev= 0.00, samples=2 00:10:05.540 lat (msec) : 2=0.05%, 4=0.83%, 10=36.25%, 20=62.72%, 50=0.15% 00:10:05.540 cpu : usr=4.59%, sys=7.38%, ctx=638, majf=0, minf=1 00:10:05.540 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:10:05.540 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:05.540 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:05.540 issued rwts: total=6089,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:05.540 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:05.540 job2: (groupid=0, jobs=1): err= 0: pid=188595: Fri Dec 13 05:26:05 2024 00:10:05.540 read: IOPS=3047, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1008msec) 00:10:05.540 slat (nsec): min=1223, max=15017k, avg=142846.33, stdev=977875.74 00:10:05.540 clat (usec): min=5476, max=55191, avg=18928.60, stdev=8845.62 00:10:05.540 lat (usec): min=5482, max=55199, avg=19071.45, stdev=8907.89 00:10:05.540 clat percentiles (usec): 00:10:05.540 | 1.00th=[ 5604], 5.00th=[10159], 10.00th=[11731], 20.00th=[13304], 00:10:05.540 | 30.00th=[15401], 40.00th=[16319], 50.00th=[17171], 60.00th=[18220], 00:10:05.540 | 70.00th=[19268], 80.00th=[20841], 90.00th=[30802], 95.00th=[38536], 00:10:05.540 | 99.00th=[55313], 99.50th=[55313], 99.90th=[55313], 99.95th=[55313], 00:10:05.540 | 99.99th=[55313] 00:10:05.540 write: IOPS=3246, BW=12.7MiB/s (13.3MB/s)(12.8MiB/1008msec); 0 zone resets 00:10:05.540 slat (usec): min=2, max=21093, avg=152.71, stdev=1081.37 00:10:05.540 clat (usec): min=1524, max=67083, avg=21295.31, stdev=14024.22 00:10:05.540 lat (usec): min=1535, max=67114, avg=21448.03, stdev=14111.73 00:10:05.540 clat percentiles (usec): 00:10:05.540 | 1.00th=[ 3654], 5.00th=[ 8291], 10.00th=[ 9765], 20.00th=[10552], 00:10:05.540 | 30.00th=[11207], 40.00th=[11469], 50.00th=[13042], 60.00th=[20055], 00:10:05.540 | 70.00th=[26084], 80.00th=[35390], 90.00th=[44827], 95.00th=[50070], 00:10:05.540 | 99.00th=[58459], 99.50th=[60031], 99.90th=[60031], 99.95th=[60556], 00:10:05.540 | 99.99th=[66847] 00:10:05.540 bw ( KiB/s): min=10616, max=14536, per=17.01%, avg=12576.00, stdev=2771.86, samples=2 00:10:05.540 iops : min= 2654, max= 3634, avg=3144.00, stdev=692.96, samples=2 00:10:05.540 lat (msec) : 2=0.11%, 4=0.79%, 10=7.58%, 20=57.68%, 50=30.22% 00:10:05.540 lat (msec) : 100=3.63% 00:10:05.540 cpu : usr=2.18%, sys=4.47%, ctx=255, majf=0, minf=1 00:10:05.540 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:10:05.540 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:05.540 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:05.540 issued rwts: total=3072,3272,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:05.540 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:05.540 job3: (groupid=0, jobs=1): err= 0: pid=188601: Fri Dec 13 05:26:05 2024 00:10:05.540 read: IOPS=4209, BW=16.4MiB/s (17.2MB/s)(16.5MiB/1003msec) 00:10:05.540 slat (nsec): min=1358, max=30281k, avg=101195.80, stdev=748514.13 00:10:05.540 
clat (usec): min=1471, max=57232, avg=13312.95, stdev=7359.63 00:10:05.540 lat (usec): min=4988, max=57826, avg=13414.15, stdev=7390.25 00:10:05.540 clat percentiles (usec): 00:10:05.540 | 1.00th=[ 7439], 5.00th=[ 8586], 10.00th=[ 9896], 20.00th=[10683], 00:10:05.540 | 30.00th=[10945], 40.00th=[11207], 50.00th=[11338], 60.00th=[11469], 00:10:05.540 | 70.00th=[11731], 80.00th=[13173], 90.00th=[16057], 95.00th=[30540], 00:10:05.540 | 99.00th=[46400], 99.50th=[54789], 99.90th=[56361], 99.95th=[56361], 00:10:05.540 | 99.99th=[57410] 00:10:05.540 write: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec); 0 zone resets 00:10:05.540 slat (nsec): min=2000, max=23923k, avg=118138.00, stdev=825366.21 00:10:05.540 clat (usec): min=1592, max=81495, avg=15386.93, stdev=13226.92 00:10:05.540 lat (usec): min=1606, max=81501, avg=15505.07, stdev=13302.86 00:10:05.540 clat percentiles (usec): 00:10:05.540 | 1.00th=[ 4015], 5.00th=[ 7111], 10.00th=[ 8717], 20.00th=[10552], 00:10:05.540 | 30.00th=[10814], 40.00th=[10945], 50.00th=[11076], 60.00th=[11207], 00:10:05.540 | 70.00th=[11469], 80.00th=[13829], 90.00th=[31851], 95.00th=[43254], 00:10:05.540 | 99.00th=[79168], 99.50th=[80217], 99.90th=[81265], 99.95th=[81265], 00:10:05.540 | 99.99th=[81265] 00:10:05.540 bw ( KiB/s): min=16384, max=20464, per=24.92%, avg=18424.00, stdev=2885.00, samples=2 00:10:05.540 iops : min= 4096, max= 5116, avg=4606.00, stdev=721.25, samples=2 00:10:05.540 lat (msec) : 2=0.23%, 4=0.25%, 10=12.50%, 20=75.91%, 50=8.36% 00:10:05.540 lat (msec) : 100=2.75% 00:10:05.540 cpu : usr=2.69%, sys=5.69%, ctx=560, majf=0, minf=1 00:10:05.540 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:10:05.540 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:05.540 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:05.540 issued rwts: total=4222,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:05.540 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:05.540 00:10:05.540 Run status group 0 (all jobs): 00:10:05.540 READ: bw=69.2MiB/s (72.5MB/s), 11.9MiB/s-23.7MiB/s (12.5MB/s-24.8MB/s), io=69.7MiB (73.1MB), run=1003-1008msec 00:10:05.540 WRITE: bw=72.2MiB/s (75.7MB/s), 12.7MiB/s-23.9MiB/s (13.3MB/s-25.1MB/s), io=72.8MiB (76.3MB), run=1003-1008msec 00:10:05.540 00:10:05.540 Disk stats (read/write): 00:10:05.540 nvme0n1: ios=3634/4055, merge=0/0, ticks=41615/55179, in_queue=96794, util=85.17% 00:10:05.540 nvme0n2: ios=5099/5120, merge=0/0, ticks=54174/48363, in_queue=102537, util=97.03% 00:10:05.540 nvme0n3: ios=2636/3072, merge=0/0, ticks=27592/42807, in_queue=70399, util=97.05% 00:10:05.540 nvme0n4: ios=3431/3584, merge=0/0, ticks=24148/29358, in_queue=53506, util=96.91% 00:10:05.540 05:26:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:05.540 [global] 00:10:05.540 thread=1 00:10:05.540 invalidate=1 00:10:05.540 rw=randwrite 00:10:05.540 time_based=1 00:10:05.540 runtime=1 00:10:05.540 ioengine=libaio 00:10:05.540 direct=1 00:10:05.540 bs=4096 00:10:05.540 iodepth=128 00:10:05.540 norandommap=0 00:10:05.540 numjobs=1 00:10:05.540 00:10:05.540 verify_dump=1 00:10:05.540 verify_backlog=512 00:10:05.540 verify_state_save=0 00:10:05.540 do_verify=1 00:10:05.540 verify=crc32c-intel 00:10:05.540 [job0] 00:10:05.540 filename=/dev/nvme0n1 00:10:05.540 [job1] 00:10:05.540 filename=/dev/nvme0n2 00:10:05.540 [job2] 00:10:05.540 
filename=/dev/nvme0n3 00:10:05.540 [job3] 00:10:05.540 filename=/dev/nvme0n4 00:10:05.540 Could not set queue depth (nvme0n1) 00:10:05.540 Could not set queue depth (nvme0n2) 00:10:05.540 Could not set queue depth (nvme0n3) 00:10:05.540 Could not set queue depth (nvme0n4) 00:10:05.799 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:05.799 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:05.799 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:05.799 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:05.799 fio-3.35 00:10:05.799 Starting 4 threads 00:10:07.179 00:10:07.179 job0: (groupid=0, jobs=1): err= 0: pid=189028: Fri Dec 13 05:26:06 2024 00:10:07.179 read: IOPS=5267, BW=20.6MiB/s (21.6MB/s)(20.6MiB/1003msec) 00:10:07.179 slat (nsec): min=1616, max=5243.1k, avg=80790.38, stdev=456512.19 00:10:07.179 clat (usec): min=1164, max=63721, avg=10471.47, stdev=3203.87 00:10:07.179 lat (usec): min=3759, max=66475, avg=10552.26, stdev=3225.56 00:10:07.179 clat percentiles (usec): 00:10:07.179 | 1.00th=[ 6325], 5.00th=[ 7570], 10.00th=[ 8586], 20.00th=[ 9372], 00:10:07.179 | 30.00th=[ 9634], 40.00th=[ 9765], 50.00th=[10028], 60.00th=[10421], 00:10:07.179 | 70.00th=[11076], 80.00th=[11469], 90.00th=[12256], 95.00th=[13435], 00:10:07.179 | 99.00th=[17433], 99.50th=[25560], 99.90th=[63701], 99.95th=[63701], 00:10:07.179 | 99.99th=[63701] 00:10:07.179 write: IOPS=5615, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1003msec); 0 zone resets 00:10:07.179 slat (usec): min=2, max=10023, avg=95.98, stdev=564.26 00:10:07.179 clat (usec): min=5803, max=72129, avg=12700.12, stdev=10608.83 00:10:07.179 lat (usec): min=5816, max=72143, avg=12796.10, stdev=10676.64 00:10:07.179 clat percentiles (usec): 00:10:07.179 | 1.00th=[ 6521], 5.00th=[ 8356], 10.00th=[ 9372], 20.00th=[ 9634], 00:10:07.179 | 30.00th=[ 9896], 40.00th=[10028], 50.00th=[10028], 60.00th=[10290], 00:10:07.179 | 70.00th=[11076], 80.00th=[11469], 90.00th=[13173], 95.00th=[16188], 00:10:07.179 | 99.00th=[67634], 99.50th=[68682], 99.90th=[71828], 99.95th=[71828], 00:10:07.179 | 99.99th=[71828] 00:10:07.179 bw ( KiB/s): min=19192, max=25864, per=29.47%, avg=22528.00, stdev=4717.82, samples=2 00:10:07.179 iops : min= 4798, max= 6466, avg=5632.00, stdev=1179.45, samples=2 00:10:07.179 lat (msec) : 2=0.01%, 4=0.31%, 10=46.86%, 20=49.97%, 50=0.81% 00:10:07.179 lat (msec) : 100=2.04% 00:10:07.179 cpu : usr=3.99%, sys=7.09%, ctx=590, majf=0, minf=1 00:10:07.179 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:10:07.179 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:07.179 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:07.179 issued rwts: total=5283,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:07.179 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:07.179 job1: (groupid=0, jobs=1): err= 0: pid=189041: Fri Dec 13 05:26:06 2024 00:10:07.179 read: IOPS=4575, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1007msec) 00:10:07.179 slat (nsec): min=983, max=14650k, avg=109591.30, stdev=743405.32 00:10:07.179 clat (usec): min=1821, max=57044, avg=13639.16, stdev=6028.81 00:10:07.179 lat (usec): min=1828, max=57052, avg=13748.75, stdev=6088.40 00:10:07.179 clat percentiles (usec): 00:10:07.179 | 1.00th=[ 2704], 5.00th=[ 4424], 
10.00th=[ 8717], 20.00th=[10028], 00:10:07.179 | 30.00th=[11338], 40.00th=[11863], 50.00th=[12387], 60.00th=[13042], 00:10:07.179 | 70.00th=[14484], 80.00th=[16188], 90.00th=[20579], 95.00th=[25822], 00:10:07.179 | 99.00th=[36439], 99.50th=[40109], 99.90th=[53216], 99.95th=[53216], 00:10:07.179 | 99.99th=[56886] 00:10:07.179 write: IOPS=4678, BW=18.3MiB/s (19.2MB/s)(18.4MiB/1007msec); 0 zone resets 00:10:07.179 slat (nsec): min=1763, max=11023k, avg=96046.09, stdev=572520.84 00:10:07.179 clat (usec): min=2390, max=43838, avg=13731.14, stdev=6186.06 00:10:07.179 lat (usec): min=2396, max=43846, avg=13827.19, stdev=6226.23 00:10:07.179 clat percentiles (usec): 00:10:07.179 | 1.00th=[ 5211], 5.00th=[ 8029], 10.00th=[ 8586], 20.00th=[ 9503], 00:10:07.179 | 30.00th=[10159], 40.00th=[11207], 50.00th=[11600], 60.00th=[12780], 00:10:07.179 | 70.00th=[14746], 80.00th=[19006], 90.00th=[21103], 95.00th=[23200], 00:10:07.179 | 99.00th=[39060], 99.50th=[40633], 99.90th=[43779], 99.95th=[43779], 00:10:07.179 | 99.99th=[43779] 00:10:07.179 bw ( KiB/s): min=12288, max=24576, per=24.11%, avg=18432.00, stdev=8688.93, samples=2 00:10:07.179 iops : min= 3072, max= 6144, avg=4608.00, stdev=2172.23, samples=2 00:10:07.179 lat (msec) : 2=0.18%, 4=1.83%, 10=20.52%, 20=64.01%, 50=13.27% 00:10:07.179 lat (msec) : 100=0.18% 00:10:07.179 cpu : usr=3.08%, sys=4.17%, ctx=441, majf=0, minf=1 00:10:07.179 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:10:07.179 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:07.179 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:07.179 issued rwts: total=4608,4711,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:07.179 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:07.179 job2: (groupid=0, jobs=1): err= 0: pid=189059: Fri Dec 13 05:26:06 2024 00:10:07.179 read: IOPS=3559, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1007msec) 00:10:07.179 slat (nsec): min=1582, max=14905k, avg=129125.93, stdev=810619.87 00:10:07.179 clat (usec): min=6679, max=39359, avg=16737.51, stdev=5281.17 00:10:07.179 lat (usec): min=6686, max=39382, avg=16866.64, stdev=5345.97 00:10:07.179 clat percentiles (usec): 00:10:07.179 | 1.00th=[ 8291], 5.00th=[10421], 10.00th=[11076], 20.00th=[12256], 00:10:07.179 | 30.00th=[12911], 40.00th=[13566], 50.00th=[16057], 60.00th=[17695], 00:10:07.179 | 70.00th=[19268], 80.00th=[21365], 90.00th=[24249], 95.00th=[26608], 00:10:07.179 | 99.00th=[29754], 99.50th=[32637], 99.90th=[34341], 99.95th=[39060], 00:10:07.179 | 99.99th=[39584] 00:10:07.179 write: IOPS=3756, BW=14.7MiB/s (15.4MB/s)(14.8MiB/1007msec); 0 zone resets 00:10:07.179 slat (usec): min=2, max=23178, avg=134.93, stdev=759.89 00:10:07.179 clat (usec): min=5449, max=49047, avg=17829.93, stdev=8186.27 00:10:07.179 lat (usec): min=6753, max=49057, avg=17964.86, stdev=8248.76 00:10:07.179 clat percentiles (usec): 00:10:07.179 | 1.00th=[ 8029], 5.00th=[10552], 10.00th=[11076], 20.00th=[11600], 00:10:07.179 | 30.00th=[12518], 40.00th=[13042], 50.00th=[14353], 60.00th=[17171], 00:10:07.179 | 70.00th=[20841], 80.00th=[21627], 90.00th=[30278], 95.00th=[36963], 00:10:07.179 | 99.00th=[44827], 99.50th=[47449], 99.90th=[49021], 99.95th=[49021], 00:10:07.179 | 99.99th=[49021] 00:10:07.179 bw ( KiB/s): min= 9832, max=19408, per=19.12%, avg=14620.00, stdev=6771.25, samples=2 00:10:07.179 iops : min= 2458, max= 4852, avg=3655.00, stdev=1692.81, samples=2 00:10:07.179 lat (msec) : 10=3.87%, 20=64.91%, 50=31.22% 00:10:07.179 cpu : usr=3.38%, 
sys=5.47%, ctx=348, majf=0, minf=1 00:10:07.179 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:10:07.179 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:07.179 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:07.179 issued rwts: total=3584,3783,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:07.179 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:07.179 job3: (groupid=0, jobs=1): err= 0: pid=189065: Fri Dec 13 05:26:06 2024 00:10:07.179 read: IOPS=4818, BW=18.8MiB/s (19.7MB/s)(18.9MiB/1004msec) 00:10:07.179 slat (nsec): min=1015, max=14561k, avg=102832.49, stdev=681044.69 00:10:07.180 clat (usec): min=1209, max=41393, avg=13074.41, stdev=4299.14 00:10:07.180 lat (usec): min=3973, max=41420, avg=13177.24, stdev=4340.66 00:10:07.180 clat percentiles (usec): 00:10:07.180 | 1.00th=[ 6521], 5.00th=[ 8586], 10.00th=[ 9503], 20.00th=[10945], 00:10:07.180 | 30.00th=[11207], 40.00th=[11469], 50.00th=[11731], 60.00th=[12387], 00:10:07.180 | 70.00th=[13042], 80.00th=[14484], 90.00th=[17695], 95.00th=[23200], 00:10:07.180 | 99.00th=[28181], 99.50th=[28181], 99.90th=[29492], 99.95th=[32113], 00:10:07.180 | 99.99th=[41157] 00:10:07.180 write: IOPS=5099, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1004msec); 0 zone resets 00:10:07.180 slat (nsec): min=1924, max=5767.8k, avg=91836.69, stdev=451593.55 00:10:07.180 clat (usec): min=6466, max=33126, avg=12315.29, stdev=2932.23 00:10:07.180 lat (usec): min=6484, max=33137, avg=12407.13, stdev=2963.45 00:10:07.180 clat percentiles (usec): 00:10:07.180 | 1.00th=[ 7832], 5.00th=[ 9896], 10.00th=[10683], 20.00th=[11076], 00:10:07.180 | 30.00th=[11207], 40.00th=[11338], 50.00th=[11600], 60.00th=[11863], 00:10:07.180 | 70.00th=[12518], 80.00th=[13304], 90.00th=[13960], 95.00th=[16057], 00:10:07.180 | 99.00th=[30016], 99.50th=[31851], 99.90th=[32900], 99.95th=[33162], 00:10:07.180 | 99.99th=[33162] 00:10:07.180 bw ( KiB/s): min=17112, max=23848, per=26.79%, avg=20480.00, stdev=4763.07, samples=2 00:10:07.180 iops : min= 4278, max= 5962, avg=5120.00, stdev=1190.77, samples=2 00:10:07.180 lat (msec) : 2=0.01%, 4=0.02%, 10=8.68%, 20=85.90%, 50=5.39% 00:10:07.180 cpu : usr=4.59%, sys=6.08%, ctx=559, majf=0, minf=1 00:10:07.180 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:10:07.180 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:07.180 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:07.180 issued rwts: total=4838,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:07.180 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:07.180 00:10:07.180 Run status group 0 (all jobs): 00:10:07.180 READ: bw=71.0MiB/s (74.5MB/s), 13.9MiB/s-20.6MiB/s (14.6MB/s-21.6MB/s), io=71.5MiB (75.0MB), run=1003-1007msec 00:10:07.180 WRITE: bw=74.7MiB/s (78.3MB/s), 14.7MiB/s-21.9MiB/s (15.4MB/s-23.0MB/s), io=75.2MiB (78.8MB), run=1003-1007msec 00:10:07.180 00:10:07.180 Disk stats (read/write): 00:10:07.180 nvme0n1: ios=4398/4608, merge=0/0, ticks=22834/29507, in_queue=52341, util=98.00% 00:10:07.180 nvme0n2: ios=4111/4287, merge=0/0, ticks=30257/30686, in_queue=60943, util=87.01% 00:10:07.180 nvme0n3: ios=3072/3335, merge=0/0, ticks=24615/26957, in_queue=51572, util=88.96% 00:10:07.180 nvme0n4: ios=4146/4175, merge=0/0, ticks=26461/23605, in_queue=50066, util=90.77% 00:10:07.180 05:26:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:10:07.180 05:26:06 
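The randwrite pass above ends with per-job bandwidth, IOPS and completion-latency percentiles plus an aggregate run status line. For scripted regression checks the same numbers are easier to pull from fio's JSON output than to scrape from this text. A minimal sketch, assuming fio 3.x and jq are available (this run used the default human-readable format, and jobfile.fio is a placeholder for the generated job file):

  # Re-run with machine-readable output, then print each job's write
  # IOPS and p99 completion latency (fio reports clat in nanoseconds).
  fio --output-format=json jobfile.fio > result.json
  jq -r '.jobs[] | [.jobname, .write.iops, .write.clat_ns.percentile["99.000000"]] | @tsv' result.json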
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=189169 00:10:07.180 05:26:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:07.180 05:26:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:10:07.180 [global] 00:10:07.180 thread=1 00:10:07.180 invalidate=1 00:10:07.180 rw=read 00:10:07.180 time_based=1 00:10:07.180 runtime=10 00:10:07.180 ioengine=libaio 00:10:07.180 direct=1 00:10:07.180 bs=4096 00:10:07.180 iodepth=1 00:10:07.180 norandommap=1 00:10:07.180 numjobs=1 00:10:07.180 00:10:07.180 [job0] 00:10:07.180 filename=/dev/nvme0n1 00:10:07.180 [job1] 00:10:07.180 filename=/dev/nvme0n2 00:10:07.180 [job2] 00:10:07.180 filename=/dev/nvme0n3 00:10:07.180 [job3] 00:10:07.180 filename=/dev/nvme0n4 00:10:07.180 Could not set queue depth (nvme0n1) 00:10:07.180 Could not set queue depth (nvme0n2) 00:10:07.180 Could not set queue depth (nvme0n3) 00:10:07.180 Could not set queue depth (nvme0n4) 00:10:07.438 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:07.438 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:07.438 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:07.438 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:07.438 fio-3.35 00:10:07.438 Starting 4 threads 00:10:09.973 05:26:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:10.232 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=34275328, buflen=4096 00:10:10.232 fio: pid=189489, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:10.232 05:26:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:10.491 05:26:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:10.491 05:26:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:10.491 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=299008, buflen=4096 00:10:10.491 fio: pid=189488, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:10.750 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=46047232, buflen=4096 00:10:10.750 fio: pid=189486, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:10.750 05:26:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:10.750 05:26:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:10:11.010 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=52555776, buflen=4096 00:10:11.010 fio: pid=189487, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:11.010 05:26:10 
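This is the hotplug phase of the test: the four 10-second read jobs are still running when the script deletes the RAID and malloc bdevs underneath them over RPC, so the fio io_u error lines reporting Operation not supported (err=95, EOPNOTSUPP) are the intended outcome, not a failure. The traced deletions gathered into one sequence, path and bdev names exactly as logged (Malloc2 through Malloc6 are deleted in the lines that follow):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # Pull the backing devices out from under the in-flight reads.
  $RPC bdev_raid_delete concat0
  $RPC bdev_raid_delete raid0
  for malloc_bdev in Malloc{0..6}; do     # all seven appear in the trace
      $RPC bdev_malloc_delete "$malloc_bdev"
  done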
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:11.010 05:26:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:11.010 00:10:11.010 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=189486: Fri Dec 13 05:26:10 2024 00:10:11.010 read: IOPS=3657, BW=14.3MiB/s (15.0MB/s)(43.9MiB/3074msec) 00:10:11.010 slat (usec): min=4, max=14618, avg= 9.20, stdev=159.81 00:10:11.010 clat (usec): min=165, max=41910, avg=262.20, stdev=863.20 00:10:11.010 lat (usec): min=171, max=41933, avg=271.40, stdev=878.37 00:10:11.010 clat percentiles (usec): 00:10:11.010 | 1.00th=[ 186], 5.00th=[ 196], 10.00th=[ 204], 20.00th=[ 217], 00:10:11.010 | 30.00th=[ 229], 40.00th=[ 237], 50.00th=[ 245], 60.00th=[ 249], 00:10:11.010 | 70.00th=[ 255], 80.00th=[ 262], 90.00th=[ 273], 95.00th=[ 289], 00:10:11.010 | 99.00th=[ 400], 99.50th=[ 420], 99.90th=[ 494], 99.95th=[ 1369], 00:10:11.010 | 99.99th=[41157] 00:10:11.010 bw ( KiB/s): min=10256, max=15832, per=37.21%, avg=14679.67, stdev=2172.34, samples=6 00:10:11.010 iops : min= 2564, max= 3958, avg=3669.83, stdev=543.05, samples=6 00:10:11.010 lat (usec) : 250=61.42%, 500=38.50%, 750=0.02% 00:10:11.010 lat (msec) : 2=0.02%, 50=0.04% 00:10:11.010 cpu : usr=0.91%, sys=3.35%, ctx=11246, majf=0, minf=1 00:10:11.010 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:11.010 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:11.010 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:11.010 issued rwts: total=11243,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:11.010 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:11.010 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=189487: Fri Dec 13 05:26:10 2024 00:10:11.010 read: IOPS=3892, BW=15.2MiB/s (15.9MB/s)(50.1MiB/3297msec) 00:10:11.010 slat (usec): min=3, max=15643, avg=10.29, stdev=253.02 00:10:11.010 clat (usec): min=163, max=9963, avg=244.27, stdev=100.89 00:10:11.010 lat (usec): min=167, max=16013, avg=254.56, stdev=275.35 00:10:11.010 clat percentiles (usec): 00:10:11.010 | 1.00th=[ 184], 5.00th=[ 196], 10.00th=[ 204], 20.00th=[ 219], 00:10:11.010 | 30.00th=[ 231], 40.00th=[ 239], 50.00th=[ 245], 60.00th=[ 251], 00:10:11.010 | 70.00th=[ 258], 80.00th=[ 262], 90.00th=[ 273], 95.00th=[ 281], 00:10:11.010 | 99.00th=[ 314], 99.50th=[ 429], 99.90th=[ 523], 99.95th=[ 734], 00:10:11.010 | 99.99th=[ 4113] 00:10:11.010 bw ( KiB/s): min=14984, max=18224, per=40.39%, avg=15933.00, stdev=1164.39, samples=6 00:10:11.010 iops : min= 3746, max= 4556, avg=3983.17, stdev=291.14, samples=6 00:10:11.010 lat (usec) : 250=58.74%, 500=41.05%, 750=0.17%, 1000=0.01% 00:10:11.010 lat (msec) : 2=0.01%, 4=0.01%, 10=0.02% 00:10:11.010 cpu : usr=0.39%, sys=2.40%, ctx=12844, majf=0, minf=2 00:10:11.010 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:11.010 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:11.010 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:11.010 issued rwts: total=12832,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:11.010 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:11.010 job2: (groupid=0, jobs=1): err=95 
(file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=189488: Fri Dec 13 05:26:10 2024 00:10:11.010 read: IOPS=25, BW=100KiB/s (103kB/s)(292KiB/2916msec) 00:10:11.010 slat (nsec): min=9878, max=37056, avg=23516.53, stdev=4049.50 00:10:11.010 clat (usec): min=350, max=45043, avg=39623.90, stdev=8198.43 00:10:11.010 lat (usec): min=387, max=45058, avg=39647.44, stdev=8195.75 00:10:11.010 clat percentiles (usec): 00:10:11.010 | 1.00th=[ 351], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:10:11.010 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:11.010 | 70.00th=[41157], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:10:11.010 | 99.00th=[44827], 99.50th=[44827], 99.90th=[44827], 99.95th=[44827], 00:10:11.010 | 99.99th=[44827] 00:10:11.010 bw ( KiB/s): min= 96, max= 104, per=0.25%, avg=99.20, stdev= 4.38, samples=5 00:10:11.010 iops : min= 24, max= 26, avg=24.80, stdev= 1.10, samples=5 00:10:11.010 lat (usec) : 500=4.05% 00:10:11.010 lat (msec) : 50=94.59% 00:10:11.010 cpu : usr=0.10%, sys=0.00%, ctx=75, majf=0, minf=2 00:10:11.010 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:11.010 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:11.010 complete : 0=1.3%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:11.010 issued rwts: total=74,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:11.010 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:11.010 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=189489: Fri Dec 13 05:26:10 2024 00:10:11.010 read: IOPS=3134, BW=12.2MiB/s (12.8MB/s)(32.7MiB/2670msec) 00:10:11.010 slat (nsec): min=6334, max=33060, avg=7389.01, stdev=1174.38 00:10:11.010 clat (usec): min=182, max=41958, avg=309.39, stdev=1728.80 00:10:11.010 lat (usec): min=189, max=41980, avg=316.78, stdev=1729.11 00:10:11.010 clat percentiles (usec): 00:10:11.010 | 1.00th=[ 196], 5.00th=[ 204], 10.00th=[ 208], 20.00th=[ 217], 00:10:11.010 | 30.00th=[ 225], 40.00th=[ 231], 50.00th=[ 239], 60.00th=[ 243], 00:10:11.010 | 70.00th=[ 249], 80.00th=[ 253], 90.00th=[ 260], 95.00th=[ 265], 00:10:11.011 | 99.00th=[ 277], 99.50th=[ 285], 99.90th=[41157], 99.95th=[41157], 00:10:11.011 | 99.99th=[42206] 00:10:11.011 bw ( KiB/s): min=10192, max=14184, per=31.49%, avg=12422.40, stdev=1708.83, samples=5 00:10:11.011 iops : min= 2548, max= 3546, avg=3105.60, stdev=427.21, samples=5 00:10:11.011 lat (usec) : 250=73.57%, 500=26.20%, 750=0.01% 00:10:11.011 lat (msec) : 2=0.01%, 4=0.01%, 50=0.18% 00:10:11.011 cpu : usr=0.90%, sys=2.85%, ctx=8369, majf=0, minf=2 00:10:11.011 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:11.011 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:11.011 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:11.011 issued rwts: total=8369,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:11.011 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:11.011 00:10:11.011 Run status group 0 (all jobs): 00:10:11.011 READ: bw=38.5MiB/s (40.4MB/s), 100KiB/s-15.2MiB/s (103kB/s-15.9MB/s), io=127MiB (133MB), run=2670-3297msec 00:10:11.011 00:10:11.011 Disk stats (read/write): 00:10:11.011 nvme0n1: ios=11218/0, merge=0/0, ticks=2898/0, in_queue=2898, util=93.59% 00:10:11.011 nvme0n2: ios=12183/0, merge=0/0, ticks=3431/0, in_queue=3431, util=99.19% 00:10:11.011 nvme0n3: ios=105/0, merge=0/0, ticks=3324/0, in_queue=3324, 
util=99.73% 00:10:11.011 nvme0n4: ios=7970/0, merge=0/0, ticks=2456/0, in_queue=2456, util=96.38% 00:10:11.011 05:26:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:11.011 05:26:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:11.270 05:26:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:11.270 05:26:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:10:11.529 05:26:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:11.529 05:26:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:10:11.789 05:26:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:11.789 05:26:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:12.048 05:26:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:10:12.048 05:26:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 189169 00:10:12.048 05:26:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:10:12.048 05:26:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:12.048 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:12.048 05:26:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:12.048 05:26:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:10:12.048 05:26:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:12.048 05:26:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:12.048 05:26:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:12.048 05:26:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:12.048 05:26:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:10:12.048 05:26:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:10:12.048 05:26:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:10:12.048 nvmf hotplug test: fio failed as expected 00:10:12.048 05:26:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:12.308 05:26:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:10:12.308 05:26:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
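Initiator-side teardown: fio exits with status 4 because of the injected errors (the '[' 4 -eq 0 ']' check above is what turns that into the expected-failure message), the controller is disconnected by NQN, and waitforserial_disconnect polls lsblk until the SPDK serial number disappears. Condensed below, with the retry cap and sleep as assumptions since the trace only shows the first, already-successful pass of the loop:

  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  # Wait until no block device reports the test serial number any more.
  for i in $(seq 1 15); do
      lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME || break
      sleep 1
  done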
target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:10:12.308 05:26:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:10:12.308 05:26:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:10:12.308 05:26:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:10:12.308 05:26:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:12.308 05:26:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:10:12.308 05:26:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:12.308 05:26:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:10:12.308 05:26:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:12.308 05:26:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:12.308 rmmod nvme_tcp 00:10:12.308 rmmod nvme_fabrics 00:10:12.308 rmmod nvme_keyring 00:10:12.308 05:26:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:12.308 05:26:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:10:12.308 05:26:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:10:12.308 05:26:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 186472 ']' 00:10:12.308 05:26:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 186472 00:10:12.308 05:26:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 186472 ']' 00:10:12.308 05:26:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 186472 00:10:12.308 05:26:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:10:12.308 05:26:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:12.308 05:26:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 186472 00:10:12.308 05:26:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:12.308 05:26:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:12.308 05:26:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 186472' 00:10:12.308 killing process with pid 186472 00:10:12.308 05:26:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 186472 00:10:12.308 05:26:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 186472 00:10:12.568 05:26:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:12.568 05:26:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:12.568 05:26:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:12.568 05:26:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:10:12.568 05:26:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:10:12.568 05:26:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v 
SPDK_NVMF 00:10:12.569 05:26:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:10:12.569 05:26:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:12.569 05:26:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:12.569 05:26:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:12.569 05:26:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:12.569 05:26:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:15.109 05:26:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:15.109 00:10:15.109 real 0m26.826s 00:10:15.109 user 1m46.724s 00:10:15.109 sys 0m8.651s 00:10:15.109 05:26:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:15.109 05:26:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:15.109 ************************************ 00:10:15.109 END TEST nvmf_fio_target 00:10:15.109 ************************************ 00:10:15.109 05:26:14 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:15.109 05:26:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:15.109 05:26:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:15.109 05:26:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:15.109 ************************************ 00:10:15.109 START TEST nvmf_bdevio 00:10:15.109 ************************************ 00:10:15.109 05:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:15.109 * Looking for test storage... 
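nvmf_fio_target then closes through nvmftestfini: the kernel initiator modules are unloaded (the rmmod nvme_tcp, nvme_fabrics and nvme_keyring lines), target process 186472 is killed, the SPDK firewall rules are filtered out of the saved ruleset, and the namespace plumbing is flushed before nvmf_bdevio starts below. The teardown condensed, with the ip netns delete line an assumption about what _remove_spdk_ns does (only the address flush of cvl_0_1 is visible in the trace):

  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid" && wait "$nvmfpid"              # nvmfpid was 186472 here
  iptables-save | grep -v SPDK_NVMF | iptables-restore
  ip netns delete cvl_0_0_ns_spdk                 # assumed, not in the trace
  ip -4 addr flush cvl_0_1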
00:10:15.109 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:15.109 05:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:15.109 05:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:10:15.109 05:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:15.109 05:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:15.109 05:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:15.109 05:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:15.109 05:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:15.109 05:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:10:15.109 05:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:10:15.109 05:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:10:15.109 05:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:10:15.109 05:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:10:15.109 05:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:10:15.109 05:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:10:15.109 05:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:15.109 05:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:10:15.109 05:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:10:15.109 05:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:15.109 05:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:15.109 05:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:10:15.109 05:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:10:15.109 05:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:15.109 05:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:10:15.109 05:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:10:15.109 05:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:10:15.109 05:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:10:15.109 05:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:15.109 05:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:10:15.109 05:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:10:15.109 05:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:15.109 05:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:15.109 05:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:10:15.109 05:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:15.109 05:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:15.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:15.109 --rc genhtml_branch_coverage=1 00:10:15.109 --rc genhtml_function_coverage=1 00:10:15.109 --rc genhtml_legend=1 00:10:15.109 --rc geninfo_all_blocks=1 00:10:15.109 --rc geninfo_unexecuted_blocks=1 00:10:15.109 00:10:15.109 ' 00:10:15.109 05:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:15.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:15.109 --rc genhtml_branch_coverage=1 00:10:15.109 --rc genhtml_function_coverage=1 00:10:15.109 --rc genhtml_legend=1 00:10:15.109 --rc geninfo_all_blocks=1 00:10:15.109 --rc geninfo_unexecuted_blocks=1 00:10:15.109 00:10:15.109 ' 00:10:15.109 05:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:15.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:15.109 --rc genhtml_branch_coverage=1 00:10:15.109 --rc genhtml_function_coverage=1 00:10:15.109 --rc genhtml_legend=1 00:10:15.109 --rc geninfo_all_blocks=1 00:10:15.109 --rc geninfo_unexecuted_blocks=1 00:10:15.109 00:10:15.109 ' 00:10:15.109 05:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:15.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:15.109 --rc genhtml_branch_coverage=1 00:10:15.109 --rc genhtml_function_coverage=1 00:10:15.109 --rc genhtml_legend=1 00:10:15.109 --rc geninfo_all_blocks=1 00:10:15.109 --rc geninfo_unexecuted_blocks=1 00:10:15.109 00:10:15.109 ' 00:10:15.109 05:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:15.109 05:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:10:15.109 05:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
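The preamble above is scripts/common.sh deciding which coverage flags to use: lt 1.15 2 asks whether the installed lcov (1.15) predates 2.x by splitting both version strings and comparing them field by field. A simplified, numeric-only sketch of the same idea (the traced helper also splits on '-' and ':' and walks the longer of the two arrays):

  # Return success when dotted version $1 is strictly older than $2.
  lt() {
      local IFS=.
      local -a a=($1) b=($2)
      local i
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1    # equal versions are not less-than
  }
  lt 1.15 2 && echo "lcov predates 2.x"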
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:15.109 05:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:15.109 05:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:15.109 05:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:15.109 05:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:15.109 05:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:15.109 05:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:15.109 05:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:15.109 05:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:15.109 05:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:15.109 05:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:10:15.109 05:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:10:15.109 05:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:15.109 05:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:15.109 05:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:15.109 05:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:15.109 05:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:15.109 05:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:10:15.109 05:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:15.109 05:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:15.109 05:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:15.110 05:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:15.110 05:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:15.110 05:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:15.110 05:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:10:15.110 05:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:15.110 05:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:10:15.110 05:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:15.110 05:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:15.110 05:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:15.110 05:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:15.110 05:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:15.110 05:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:15.110 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:15.110 05:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:15.110 05:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:15.110 05:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:15.110 05:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:15.110 05:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:15.110 05:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 
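One genuine shell wart is captured here: nvmf/common.sh line 33 runs '[' '' -eq 1 ']' against a variable that is empty in this environment, and test's -eq wants integers on both sides, hence the "[: : integer expression expected" message. The script tolerates the failed test and carries on, but the noise is avoidable with a numeric default. A suggested hardening, with VAR standing in for whichever variable was unset (the trace does not name it):

  # Instead of: [ "$VAR" -eq 1 ]   (errors out when VAR is empty)
  [ "${VAR:-0}" -eq 1 ] && echo "flag set"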
-- # nvmftestinit 00:10:15.110 05:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:15.110 05:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:15.110 05:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:15.110 05:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:15.110 05:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:15.110 05:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:15.110 05:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:15.110 05:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:15.110 05:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:15.110 05:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:15.110 05:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:10:15.110 05:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:21.685 05:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:21.685 05:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:10:21.685 05:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:21.685 05:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:21.685 05:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:21.685 05:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:21.685 05:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:21.685 05:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:10:21.685 05:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:21.685 05:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:10:21.685 05:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:10:21.685 05:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:10:21.685 05:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:10:21.685 05:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:10:21.685 05:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:10:21.685 05:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:21.685 05:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:21.685 05:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:21.685 05:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:21.685 05:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:21.685 05:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:21.685 05:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:21.685 05:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:21.685 05:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:21.685 05:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:21.685 05:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:21.685 05:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:21.685 05:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:21.685 05:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:21.685 05:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:21.685 05:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:21.685 05:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:21.685 05:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:21.685 05:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:21.685 05:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:21.685 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:21.685 05:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:21.685 05:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:21.685 05:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:21.685 05:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:21.685 05:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:21.685 05:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:21.685 05:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:21.685 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:21.685 05:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:21.685 05:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:21.685 05:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:21.685 05:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:21.685 05:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:21.685 05:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:21.685 05:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:21.685 05:26:20 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:21.685 05:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:21.685 05:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:21.685 05:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:21.685 05:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:21.685 05:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:21.685 05:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:21.685 05:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:21.685 05:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:21.685 Found net devices under 0000:af:00.0: cvl_0_0 00:10:21.685 05:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:21.685 05:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:21.685 05:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:21.685 05:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:21.685 05:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:21.685 05:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:21.685 05:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:21.685 05:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:21.685 05:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:21.685 Found net devices under 0000:af:00.1: cvl_0_1 00:10:21.685 05:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:21.685 05:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:21.685 05:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:10:21.685 05:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:21.685 05:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:21.685 05:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:21.685 05:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:21.685 05:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:21.685 05:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:21.685 05:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:21.685 05:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:21.685 05:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:21.685 
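NIC discovery above is driven entirely from sysfs: the script keeps whitelists of Intel e810/x722 and Mellanox PCI device IDs (0x159b matched both ports here), then looks for kernel net devices beneath each matching PCI function, which is where the "Found net devices under 0000:af:00.0: cvl_0_0" lines come from. The core walk reduced to a few lines:

  for pci in 0000:af:00.0 0000:af:00.1; do
      for dev in /sys/bus/pci/devices/$pci/net/*; do
          [ -e "$dev" ] && echo "Found net devices under $pci: ${dev##*/}"
      done
  done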
05:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:21.685 05:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:21.685 05:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:21.685 05:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:21.685 05:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:21.685 05:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:21.685 05:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:21.685 05:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:21.685 05:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:21.685 05:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:21.685 05:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:21.686 05:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:21.686 05:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:21.686 05:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:21.686 05:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:21.686 05:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:21.686 05:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:21.686 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:21.686 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.342 ms 00:10:21.686 00:10:21.686 --- 10.0.0.2 ping statistics --- 00:10:21.686 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:21.686 rtt min/avg/max/mdev = 0.342/0.342/0.342/0.000 ms 00:10:21.686 05:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:21.686 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:21.686 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms 00:10:21.686 00:10:21.686 --- 10.0.0.1 ping statistics --- 00:10:21.686 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:21.686 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:10:21.686 05:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:21.686 05:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:10:21.686 05:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:21.686 05:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:21.686 05:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:21.686 05:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:21.686 05:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:21.686 05:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:21.686 05:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:21.686 05:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:21.686 05:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:21.686 05:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:21.686 05:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:21.686 05:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=193659 00:10:21.686 05:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 193659 00:10:21.686 05:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:10:21.686 05:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 193659 ']' 00:10:21.686 05:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:21.686 05:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:21.686 05:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:21.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:21.686 05:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:21.686 05:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:21.686 [2024-12-13 05:26:20.772014] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
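nvmf_tcp_init builds the whole test topology on one physical NIC: port cvl_0_0 moves into a private network namespace and becomes the target side at 10.0.0.2, port cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, and the two pings above prove the path in both directions. The traced commands in order (the iptables -m comment tag is dropped here for brevity):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # root ns -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> initiator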
00:10:21.686 [2024-12-13 05:26:20.772064] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:21.686 [2024-12-13 05:26:20.850129] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:21.686 [2024-12-13 05:26:20.873769] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:21.686 [2024-12-13 05:26:20.873804] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:21.686 [2024-12-13 05:26:20.873812] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:21.686 [2024-12-13 05:26:20.873818] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:21.686 [2024-12-13 05:26:20.873823] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:21.686 [2024-12-13 05:26:20.875296] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:10:21.686 [2024-12-13 05:26:20.875402] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:10:21.686 [2024-12-13 05:26:20.875510] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:10:21.686 [2024-12-13 05:26:20.875510] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:10:21.686 05:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:21.686 05:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:10:21.686 05:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:21.686 05:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:21.686 05:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:21.686 05:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:21.686 05:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:21.686 05:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.686 05:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:21.686 [2024-12-13 05:26:21.019064] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:21.686 05:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.686 05:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:21.686 05:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.686 05:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:21.686 Malloc0 00:10:21.686 05:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.686 05:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:21.686 05:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.686 05:26:21 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:21.686 05:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.686 05:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:21.686 05:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.686 05:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:21.686 05:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.686 05:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:21.686 05:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.686 05:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:21.686 [2024-12-13 05:26:21.081278] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:21.686 05:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.686 05:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:10:21.686 05:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:21.686 05:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:10:21.686 05:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:10:21.686 05:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:21.686 05:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:21.686 { 00:10:21.686 "params": { 00:10:21.686 "name": "Nvme$subsystem", 00:10:21.686 "trtype": "$TEST_TRANSPORT", 00:10:21.686 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:21.686 "adrfam": "ipv4", 00:10:21.686 "trsvcid": "$NVMF_PORT", 00:10:21.686 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:21.686 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:21.686 "hdgst": ${hdgst:-false}, 00:10:21.686 "ddgst": ${ddgst:-false} 00:10:21.686 }, 00:10:21.686 "method": "bdev_nvme_attach_controller" 00:10:21.686 } 00:10:21.686 EOF 00:10:21.686 )") 00:10:21.686 05:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:10:21.686 05:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:10:21.686 05:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:10:21.686 05:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:21.686 "params": { 00:10:21.686 "name": "Nvme1", 00:10:21.686 "trtype": "tcp", 00:10:21.686 "traddr": "10.0.0.2", 00:10:21.686 "adrfam": "ipv4", 00:10:21.686 "trsvcid": "4420", 00:10:21.686 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:21.686 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:21.686 "hdgst": false, 00:10:21.686 "ddgst": false 00:10:21.686 }, 00:10:21.686 "method": "bdev_nvme_attach_controller" 00:10:21.686 }' 00:10:21.686 [2024-12-13 05:26:21.131113] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
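With the target app listening inside the namespace, provisioning takes a handful of RPCs, after which bdevio is pointed at the subsystem through the generated JSON shown above (a bdev_nvme_attach_controller stanza for TCP at 10.0.0.2:4420 with header and data digests disabled). The bring-up as a standalone sequence, names, sizes and flags exactly as traced (Malloc0 is 64 MiB of 512-byte blocks; -o and -u 8192 are the transport options this suite passes for TCP):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC bdev_malloc_create 64 512 -b Malloc0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420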
00:10:21.686 [2024-12-13 05:26:21.131157] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid193820 ] 00:10:21.686 [2024-12-13 05:26:21.205673] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:21.686 [2024-12-13 05:26:21.230782] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:10:21.686 [2024-12-13 05:26:21.230887] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:21.686 [2024-12-13 05:26:21.230888] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:10:21.686 I/O targets: 00:10:21.686 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:21.686 00:10:21.686 00:10:21.686 CUnit - A unit testing framework for C - Version 2.1-3 00:10:21.687 http://cunit.sourceforge.net/ 00:10:21.687 00:10:21.687 00:10:21.687 Suite: bdevio tests on: Nvme1n1 00:10:21.687 Test: blockdev write read block ...passed 00:10:21.687 Test: blockdev write zeroes read block ...passed 00:10:21.687 Test: blockdev write zeroes read no split ...passed 00:10:21.687 Test: blockdev write zeroes read split ...passed 00:10:21.687 Test: blockdev write zeroes read split partial ...passed 00:10:21.687 Test: blockdev reset ...[2024-12-13 05:26:21.663791] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:10:21.687 [2024-12-13 05:26:21.663853] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2205630 (9): Bad file descriptor 00:10:21.945 [2024-12-13 05:26:21.799548] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
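
[annotation] A note on the "blockdev reset" sequence above: nvme_ctrlr_disconnect tears down the qpairs, so the flush on tqpair 0x2205630 failing with (9) Bad file descriptor is expected noise during teardown, not a test failure; bdev_nvme then reconnects and logs "Resetting controller successful" about 136 ms later. The initiator side of this suite is launched as sketched below, assuming the workspace paths from this run; process substitution is what surfaces gen_nvmf_target_json's output as the /dev/fd/62 seen in the command line.

```bash
# bdevio run against the TCP target; <(...) appears as /dev/fd/62 in the log.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio \
    --json <(gen_nvmf_target_json)
```
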
00:10:21.945 passed 00:10:21.945 Test: blockdev write read 8 blocks ...passed 00:10:21.945 Test: blockdev write read size > 128k ...passed 00:10:21.945 Test: blockdev write read invalid size ...passed 00:10:21.945 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:21.945 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:21.945 Test: blockdev write read max offset ...passed 00:10:22.204 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:22.204 Test: blockdev writev readv 8 blocks ...passed 00:10:22.204 Test: blockdev writev readv 30 x 1block ...passed 00:10:22.204 Test: blockdev writev readv block ...passed 00:10:22.204 Test: blockdev writev readv size > 128k ...passed 00:10:22.204 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:22.204 Test: blockdev comparev and writev ...[2024-12-13 05:26:22.052159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:22.204 [2024-12-13 05:26:22.052191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:22.204 [2024-12-13 05:26:22.052205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:22.204 [2024-12-13 05:26:22.052213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:22.204 [2024-12-13 05:26:22.052461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:22.204 [2024-12-13 05:26:22.052471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:22.204 [2024-12-13 05:26:22.052483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:22.204 [2024-12-13 05:26:22.052494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:22.204 [2024-12-13 05:26:22.052727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:22.204 [2024-12-13 05:26:22.052736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:22.204 [2024-12-13 05:26:22.052747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:22.204 [2024-12-13 05:26:22.052754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:22.204 [2024-12-13 05:26:22.052989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:22.204 [2024-12-13 05:26:22.052998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:22.204 [2024-12-13 05:26:22.053009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:22.204 [2024-12-13 05:26:22.053016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:10:22.204 passed 00:10:22.204 Test: blockdev nvme passthru rw ...passed 00:10:22.204 Test: blockdev nvme passthru vendor specific ...[2024-12-13 05:26:22.136792] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:22.204 [2024-12-13 05:26:22.136809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:22.205 [2024-12-13 05:26:22.136908] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:22.205 [2024-12-13 05:26:22.136918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:22.205 [2024-12-13 05:26:22.137009] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:22.205 [2024-12-13 05:26:22.137018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:22.205 [2024-12-13 05:26:22.137119] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:22.205 [2024-12-13 05:26:22.137129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:22.205 passed 00:10:22.205 Test: blockdev nvme admin passthru ...passed 00:10:22.205 Test: blockdev copy ...passed 00:10:22.205 00:10:22.205 Run Summary: Type Total Ran Passed Failed Inactive 00:10:22.205 suites 1 1 n/a 0 0 00:10:22.205 tests 23 23 23 0 0 00:10:22.205 asserts 152 152 152 0 n/a 00:10:22.205 00:10:22.205 Elapsed time = 1.377 seconds 00:10:22.464 05:26:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:22.464 05:26:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.464 05:26:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:22.464 05:26:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.464 05:26:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:22.464 05:26:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:10:22.464 05:26:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:22.464 05:26:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:10:22.464 05:26:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:22.464 05:26:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:10:22.464 05:26:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:22.464 05:26:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:22.464 rmmod nvme_tcp 00:10:22.464 rmmod nvme_fabrics 00:10:22.464 rmmod nvme_keyring 00:10:22.464 05:26:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:22.464 05:26:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:10:22.464 05:26:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
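
[annotation] Hedged sketch of the nvmftestfini tail traced above: remove the test subsystem, sync, then unload the initiator-side kernel modules. The rmmod lines in the log are modprobe's verbose output, with nvme_fabrics and nvme_keyring coming out as dependencies. The retry loop shape and the sleep backoff below are assumptions about the helper, which the trace only shows as "for i in {1..20}" under set +e.

```bash
rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
sync
set +e                                   # tolerate transient "module in use"
for i in {1..20}; do
    modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
    sleep 1                              # assumption: brief backoff between retries
done
set -e
```
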
00:10:22.464 05:26:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 193659 ']' 00:10:22.464 05:26:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 193659 00:10:22.464 05:26:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 193659 ']' 00:10:22.464 05:26:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 193659 00:10:22.464 05:26:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:10:22.464 05:26:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:22.464 05:26:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 193659 00:10:22.464 05:26:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:10:22.464 05:26:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:10:22.464 05:26:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 193659' 00:10:22.464 killing process with pid 193659 00:10:22.464 05:26:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 193659 00:10:22.464 05:26:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 193659 00:10:22.724 05:26:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:22.724 05:26:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:22.724 05:26:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:22.724 05:26:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:10:22.724 05:26:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:10:22.724 05:26:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:22.724 05:26:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:10:22.724 05:26:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:22.724 05:26:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:22.724 05:26:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:22.724 05:26:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:22.724 05:26:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:25.265 05:26:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:25.265 00:10:25.265 real 0m10.126s 00:10:25.265 user 0m11.397s 00:10:25.265 sys 0m4.875s 00:10:25.265 05:26:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:25.265 05:26:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:25.265 ************************************ 00:10:25.265 END TEST nvmf_bdevio 00:10:25.265 ************************************ 00:10:25.265 05:26:24 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:25.265 00:10:25.265 real 4m33.859s 00:10:25.265 user 10m28.095s 00:10:25.265 sys 1m37.018s 00:10:25.265 
05:26:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:25.265 05:26:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:25.265 ************************************ 00:10:25.265 END TEST nvmf_target_core 00:10:25.265 ************************************ 00:10:25.265 05:26:24 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:25.265 05:26:24 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:25.266 05:26:24 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:25.266 05:26:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:25.266 ************************************ 00:10:25.266 START TEST nvmf_target_extra 00:10:25.266 ************************************ 00:10:25.266 05:26:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:25.266 * Looking for test storage... 00:10:25.266 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:10:25.266 05:26:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:25.266 05:26:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lcov --version 00:10:25.266 05:26:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:25.266 05:26:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:25.266 05:26:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:25.266 05:26:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:25.266 05:26:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:25.266 05:26:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:10:25.266 05:26:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:10:25.266 05:26:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:10:25.266 05:26:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:10:25.266 05:26:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:10:25.266 05:26:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:10:25.266 05:26:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:10:25.266 05:26:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:25.266 05:26:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:10:25.266 05:26:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:10:25.266 05:26:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:25.266 05:26:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:25.266 05:26:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:10:25.266 05:26:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:10:25.266 05:26:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:25.266 05:26:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:10:25.266 05:26:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:10:25.266 05:26:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:10:25.266 05:26:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:10:25.266 05:26:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:25.266 05:26:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:10:25.266 05:26:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:10:25.266 05:26:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:25.266 05:26:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:25.266 05:26:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:10:25.266 05:26:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:25.266 05:26:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:25.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:25.266 --rc genhtml_branch_coverage=1 00:10:25.266 --rc genhtml_function_coverage=1 00:10:25.266 --rc genhtml_legend=1 00:10:25.266 --rc geninfo_all_blocks=1 00:10:25.266 --rc geninfo_unexecuted_blocks=1 00:10:25.266 00:10:25.266 ' 00:10:25.266 05:26:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:25.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:25.266 --rc genhtml_branch_coverage=1 00:10:25.266 --rc genhtml_function_coverage=1 00:10:25.266 --rc genhtml_legend=1 00:10:25.266 --rc geninfo_all_blocks=1 00:10:25.266 --rc geninfo_unexecuted_blocks=1 00:10:25.266 00:10:25.266 ' 00:10:25.266 05:26:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:25.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:25.266 --rc genhtml_branch_coverage=1 00:10:25.266 --rc genhtml_function_coverage=1 00:10:25.266 --rc genhtml_legend=1 00:10:25.266 --rc geninfo_all_blocks=1 00:10:25.266 --rc geninfo_unexecuted_blocks=1 00:10:25.266 00:10:25.266 ' 00:10:25.266 05:26:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:25.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:25.266 --rc genhtml_branch_coverage=1 00:10:25.266 --rc genhtml_function_coverage=1 00:10:25.266 --rc genhtml_legend=1 00:10:25.266 --rc geninfo_all_blocks=1 00:10:25.266 --rc geninfo_unexecuted_blocks=1 00:10:25.266 00:10:25.266 ' 00:10:25.266 05:26:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:25.266 05:26:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:10:25.266 05:26:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:25.266 05:26:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:25.266 05:26:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
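
[annotation] The cmp_versions walk just above ("lt 1.15 2") is how the suite decides whether the installed lcov predates 2.x and which LCOV_OPTS block to export. A minimal standalone sketch of that comparator follows; it is a hypothetical reimplementation for illustration, the real one lives in scripts/common.sh.

```bash
# Field-by-field numeric version compare; missing fields count as 0.
cmp_versions() {
    local -a ver1 ver2
    local v
    IFS=.- read -ra ver1 <<< "$1"
    IFS=.- read -ra ver2 <<< "$3"
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        ((${ver1[v]:-0} > ${ver2[v]:-0})) && [[ $2 == '>' ]] && return 0
        ((${ver1[v]:-0} > ${ver2[v]:-0})) && return 1
        ((${ver1[v]:-0} < ${ver2[v]:-0})) && [[ $2 == '<' ]] && return 0
        ((${ver1[v]:-0} < ${ver2[v]:-0})) && return 1
    done
    [[ $2 == *'='* ]]        # equal versions satisfy only <=, >=, ==
}
lt() { cmp_versions "$1" '<' "$2"; }
lt 1.15 2 && echo "lcov 1.x: use the --rc lcov_*_coverage=1 flag style"
```
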
00:10:25.266 05:26:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:25.266 05:26:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:25.266 05:26:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:25.266 05:26:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:25.266 05:26:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:25.266 05:26:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:25.266 05:26:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:25.266 05:26:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:10:25.266 05:26:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:10:25.266 05:26:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:25.266 05:26:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:25.266 05:26:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:25.266 05:26:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:25.266 05:26:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:25.266 05:26:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:10:25.266 05:26:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:25.266 05:26:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:25.267 05:26:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:25.267 05:26:25 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:25.267 05:26:25 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:25.267 05:26:25 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:25.267 05:26:25 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:10:25.267 05:26:25 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:25.267 05:26:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:10:25.267 05:26:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:25.267 05:26:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:25.267 05:26:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:25.267 05:26:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:25.267 05:26:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:25.267 05:26:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:25.267 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:25.267 05:26:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:25.267 05:26:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:25.267 05:26:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:25.267 05:26:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:25.267 05:26:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:10:25.267 05:26:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:10:25.267 05:26:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:25.267 05:26:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:25.267 05:26:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:25.267 05:26:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:25.267 ************************************ 00:10:25.267 START TEST nvmf_example 00:10:25.267 ************************************ 00:10:25.267 05:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:25.267 * Looking for test storage... 
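
[annotation] The "[: : integer expression expected" message above (nvmf/common.sh line 33) is bash complaining about a numeric test on an empty string, visible in the xtrace as '[' '' -eq 1 ']'. It is harmless here because the test simply evaluates false and the script falls through, but it reproduces as below; the ${VAR:-0} default is a hypothetical hardening for illustration, not what common.sh does, and SPDK_TEST_SOMETHING is a made-up variable name.

```bash
SPDK_TEST_SOMETHING=""
[ "$SPDK_TEST_SOMETHING" -eq 1 ]            # -> [: : integer expression expected
[ "${SPDK_TEST_SOMETHING:-0}" -eq 1 ] \
    || echo "flag not enabled"              # defaulting to 0 silences the warning
```
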
00:10:25.267 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:25.267 05:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:25.267 05:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lcov --version 00:10:25.267 05:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:25.267 05:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:25.267 05:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:25.267 05:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:25.267 05:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:25.267 05:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:10:25.267 05:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:10:25.267 05:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:10:25.267 05:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:10:25.267 05:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:10:25.267 05:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:10:25.267 05:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:10:25.267 05:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:25.267 05:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:10:25.267 05:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:10:25.267 05:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:25.267 05:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:25.267 05:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:10:25.267 05:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:10:25.267 05:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:25.267 05:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:10:25.267 05:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:10:25.267 05:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:10:25.267 05:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:10:25.267 05:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:25.267 05:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:10:25.267 05:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:10:25.267 05:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:25.267 05:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:25.267 05:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:10:25.267 05:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:25.267 05:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:25.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:25.267 --rc genhtml_branch_coverage=1 00:10:25.267 --rc genhtml_function_coverage=1 00:10:25.267 --rc genhtml_legend=1 00:10:25.267 --rc geninfo_all_blocks=1 00:10:25.267 --rc geninfo_unexecuted_blocks=1 00:10:25.267 00:10:25.267 ' 00:10:25.267 05:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:25.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:25.267 --rc genhtml_branch_coverage=1 00:10:25.267 --rc genhtml_function_coverage=1 00:10:25.267 --rc genhtml_legend=1 00:10:25.268 --rc geninfo_all_blocks=1 00:10:25.268 --rc geninfo_unexecuted_blocks=1 00:10:25.268 00:10:25.268 ' 00:10:25.268 05:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:25.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:25.268 --rc genhtml_branch_coverage=1 00:10:25.268 --rc genhtml_function_coverage=1 00:10:25.268 --rc genhtml_legend=1 00:10:25.268 --rc geninfo_all_blocks=1 00:10:25.268 --rc geninfo_unexecuted_blocks=1 00:10:25.268 00:10:25.268 ' 00:10:25.268 05:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:25.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:25.268 --rc genhtml_branch_coverage=1 00:10:25.268 --rc genhtml_function_coverage=1 00:10:25.268 --rc genhtml_legend=1 00:10:25.268 --rc geninfo_all_blocks=1 00:10:25.268 --rc geninfo_unexecuted_blocks=1 00:10:25.268 00:10:25.268 ' 00:10:25.268 05:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:25.268 05:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:10:25.268 05:26:25 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:25.268 05:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:25.268 05:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:25.268 05:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:25.268 05:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:25.268 05:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:25.268 05:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:25.268 05:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:25.268 05:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:25.268 05:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:25.268 05:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:10:25.528 05:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:10:25.528 05:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:25.528 05:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:25.528 05:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:25.528 05:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:25.528 05:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:25.528 05:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:10:25.528 05:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:25.528 05:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:25.528 05:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:25.528 05:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:25.528 05:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:25.528 05:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:25.528 05:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:10:25.528 05:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:25.528 05:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:10:25.529 05:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:25.529 05:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:25.529 05:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:25.529 05:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:25.529 05:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:25.529 05:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:25.529 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:25.529 05:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:25.529 05:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:25.529 05:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:25.529 05:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:10:25.529 05:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:10:25.529 05:26:25 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:10:25.529 05:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:10:25.529 05:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:10:25.529 05:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:10:25.529 05:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:10:25.529 05:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:10:25.529 05:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:25.529 05:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:25.529 05:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:10:25.529 05:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:25.529 05:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:25.529 05:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:25.529 05:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:25.529 05:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:25.529 05:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:25.529 05:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:25.529 05:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:25.529 05:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:25.529 05:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:25.529 05:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:10:25.529 05:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:32.104 05:26:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:32.104 05:26:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:10:32.104 05:26:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:32.104 05:26:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:32.104 05:26:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:32.104 05:26:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:32.104 05:26:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:32.104 05:26:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:10:32.104 05:26:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:32.104 05:26:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:10:32.104 05:26:30 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:10:32.104 05:26:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:10:32.104 05:26:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:10:32.104 05:26:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:10:32.104 05:26:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:10:32.104 05:26:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:32.104 05:26:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:32.104 05:26:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:32.104 05:26:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:32.104 05:26:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:32.104 05:26:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:32.104 05:26:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:32.104 05:26:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:32.104 05:26:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:32.104 05:26:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:32.104 05:26:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:32.104 05:26:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:32.104 05:26:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:32.104 05:26:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:32.104 05:26:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:32.104 05:26:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:32.104 05:26:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:32.104 05:26:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:32.105 05:26:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:32.105 05:26:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:32.105 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:32.105 05:26:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:32.105 05:26:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:32.105 05:26:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:32.105 05:26:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:32.105 05:26:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:32.105 05:26:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:32.105 05:26:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:32.105 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:32.105 05:26:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:32.105 05:26:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:32.105 05:26:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:32.105 05:26:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:32.105 05:26:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:32.105 05:26:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:32.105 05:26:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:32.105 05:26:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:32.105 05:26:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:32.105 05:26:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:32.105 05:26:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:32.105 05:26:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:32.105 05:26:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:32.105 05:26:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:32.105 05:26:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:32.105 05:26:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:32.105 Found net devices under 0000:af:00.0: cvl_0_0 00:10:32.105 05:26:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:32.105 05:26:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:32.105 05:26:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:32.105 05:26:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:32.105 05:26:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:32.105 05:26:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:32.105 05:26:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:32.105 05:26:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:32.105 05:26:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:32.105 Found net devices under 0000:af:00.1: cvl_0_1 00:10:32.105 05:26:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:32.105 05:26:30 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:32.105 05:26:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:10:32.105 05:26:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:32.105 05:26:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:32.105 05:26:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:32.105 05:26:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:32.105 05:26:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:32.105 05:26:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:32.105 05:26:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:32.105 05:26:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:32.105 05:26:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:32.105 05:26:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:32.105 05:26:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:32.105 05:26:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:32.105 05:26:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:32.105 05:26:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:32.105 05:26:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:32.105 05:26:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:32.105 05:26:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:32.105 05:26:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:32.105 05:26:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:32.105 05:26:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:32.105 05:26:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:32.105 05:26:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:32.105 05:26:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:32.105 05:26:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:32.105 05:26:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:32.105 05:26:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:32.105 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:32.105 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.314 ms 00:10:32.105 00:10:32.105 --- 10.0.0.2 ping statistics --- 00:10:32.105 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:32.105 rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms 00:10:32.105 05:26:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:32.105 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:32.105 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.196 ms 00:10:32.105 00:10:32.105 --- 10.0.0.1 ping statistics --- 00:10:32.105 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:32.105 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:10:32.105 05:26:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:32.105 05:26:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:10:32.105 05:26:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:32.105 05:26:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:32.105 05:26:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:32.105 05:26:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:32.105 05:26:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:32.105 05:26:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:32.105 05:26:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:32.105 05:26:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:10:32.105 05:26:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:10:32.105 05:26:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:32.105 05:26:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:32.105 05:26:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:10:32.105 05:26:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:10:32.105 05:26:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=197651 00:10:32.105 05:26:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:32.105 05:26:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:10:32.105 05:26:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 197651 00:10:32.105 05:26:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 197651 ']' 00:10:32.105 05:26:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:32.105 05:26:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:32.105 05:26:31 nvmf_tcp.nvmf_target_extra.nvmf_example 
-- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:32.105 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:32.105 05:26:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:32.105 05:26:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:32.365 05:26:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:32.365 05:26:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:10:32.365 05:26:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:10:32.365 05:26:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:32.365 05:26:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:32.365 05:26:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:32.365 05:26:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.365 05:26:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:32.365 05:26:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.365 05:26:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:10:32.365 05:26:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.365 05:26:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:32.365 05:26:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.365 05:26:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:10:32.365 05:26:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:32.365 05:26:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.365 05:26:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:32.365 05:26:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.365 05:26:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:10:32.365 05:26:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:32.365 05:26:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.365 05:26:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:32.365 05:26:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.365 05:26:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:32.365 05:26:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.365 05:26:32 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:32.365 05:26:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.365 05:26:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:10:32.365 05:26:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:10:44.576 Initializing NVMe Controllers 00:10:44.576 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:44.576 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:44.576 Initialization complete. Launching workers. 00:10:44.576 ======================================================== 00:10:44.576 Latency(us) 00:10:44.576 Device Information : IOPS MiB/s Average min max 00:10:44.576 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18208.44 71.13 3514.29 682.19 15939.20 00:10:44.576 ======================================================== 00:10:44.576 Total : 18208.44 71.13 3514.29 682.19 15939.20 00:10:44.576 00:10:44.576 05:26:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:10:44.576 05:26:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:10:44.576 05:26:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:44.576 05:26:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:10:44.576 05:26:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:44.576 05:26:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:10:44.576 05:26:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:44.576 05:26:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:44.576 rmmod nvme_tcp 00:10:44.576 rmmod nvme_fabrics 00:10:44.576 rmmod nvme_keyring 00:10:44.576 05:26:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:44.576 05:26:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:10:44.576 05:26:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:10:44.576 05:26:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 197651 ']' 00:10:44.576 05:26:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 197651 00:10:44.576 05:26:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 197651 ']' 00:10:44.576 05:26:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 197651 00:10:44.576 05:26:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:10:44.576 05:26:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:44.576 05:26:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 197651 00:10:44.576 05:26:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # 
process_name=nvmf 00:10:44.576 05:26:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:10:44.576 05:26:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 197651' 00:10:44.576 killing process with pid 197651 00:10:44.576 05:26:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 197651 00:10:44.576 05:26:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 197651 00:10:44.576 nvmf threads initialized successfully 00:10:44.576 bdev subsystem init successfully 00:10:44.576 created an nvmf target service 00:10:44.576 create target's poll groups done 00:10:44.576 all subsystems of target started 00:10:44.576 nvmf target is running 00:10:44.576 all subsystems of target stopped 00:10:44.576 destroy target's poll groups done 00:10:44.576 destroyed the nvmf target service 00:10:44.576 bdev subsystem finish successfully 00:10:44.576 nvmf threads destroyed successfully 00:10:44.576 05:26:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:44.576 05:26:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:44.576 05:26:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:44.576 05:26:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:10:44.576 05:26:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:10:44.576 05:26:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:10:44.576 05:26:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:44.577 05:26:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:44.577 05:26:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:44.577 05:26:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:44.577 05:26:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:44.577 05:26:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:45.145 05:26:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:45.145 05:26:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:10:45.145 05:26:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:45.145 05:26:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:45.145 00:10:45.145 real 0m19.915s 00:10:45.145 user 0m46.646s 00:10:45.145 sys 0m5.998s 00:10:45.145 05:26:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:45.145 05:26:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:45.145 ************************************ 00:10:45.145 END TEST nvmf_example 00:10:45.145 ************************************ 00:10:45.145 05:26:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:45.145 05:26:45
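The spdk_nvme_perf summary a few records up is internally consistent, which is a quick way to sanity-check a run: 18208.44 IOPS at 4096 bytes per I/O is 18208.44 * 4096 / 2^20 = 71.13 MiB/s, the bandwidth the table reports, and by Little's law a queue depth of 64 implies a mean latency of 64 / 18208.44 s = roughly 3515 us, matching the reported 3514.29 us average. The same checks as shell one-liners (plain arithmetic, no assumptions):

  echo '18208.44 * 4096 / 1048576' | bc -l   # -> 71.13 (MiB/s column)
  echo '64 / 18208.44 * 1000000'   | bc -l   # -> ~3515 (us, vs 3514.29 reported)
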
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:45.145 05:26:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:45.145 05:26:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:45.145 ************************************ 00:10:45.145 START TEST nvmf_filesystem 00:10:45.145 ************************************ 00:10:45.145 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:45.408 * Looking for test storage... 00:10:45.408 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:45.408 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:45.408 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:10:45.408 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:45.408 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:45.408 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:45.408 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:45.408 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:45.408 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:45.408 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:45.408 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:45.408 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:45.408 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:45.408 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:45.408 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:45.408 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:45.408 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:45.408 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:45.408 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:45.408 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:45.408 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:45.408 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:45.408 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:45.408 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:45.408 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:45.408 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:45.408 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:45.408 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:45.408 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:45.408 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:45.408 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:45.408 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:45.408 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:45.409 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:45.409 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:45.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:45.409 --rc genhtml_branch_coverage=1 00:10:45.409 --rc genhtml_function_coverage=1 00:10:45.409 --rc genhtml_legend=1 00:10:45.409 --rc geninfo_all_blocks=1 00:10:45.409 --rc geninfo_unexecuted_blocks=1 00:10:45.409 00:10:45.409 ' 00:10:45.409 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:45.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:45.409 --rc genhtml_branch_coverage=1 00:10:45.409 --rc genhtml_function_coverage=1 00:10:45.409 --rc genhtml_legend=1 00:10:45.409 --rc geninfo_all_blocks=1 00:10:45.409 --rc geninfo_unexecuted_blocks=1 00:10:45.409 00:10:45.409 ' 00:10:45.409 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:45.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:45.409 --rc genhtml_branch_coverage=1 00:10:45.409 --rc genhtml_function_coverage=1 00:10:45.409 --rc genhtml_legend=1 00:10:45.409 --rc geninfo_all_blocks=1 00:10:45.409 --rc geninfo_unexecuted_blocks=1 00:10:45.409 00:10:45.409 ' 00:10:45.409 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:45.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:45.409 --rc genhtml_branch_coverage=1 00:10:45.409 --rc genhtml_function_coverage=1 00:10:45.409 --rc genhtml_legend=1 00:10:45.409 --rc geninfo_all_blocks=1 00:10:45.409 --rc geninfo_unexecuted_blocks=1 00:10:45.409 00:10:45.409 ' 00:10:45.409 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:10:45.409 05:26:45 
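The cmp_versions trace above is how scripts/common.sh decides whether the installed lcov (1.15 here) predates version 2: split each version string on '.', '-' or ':' and compare the fields numerically, treating missing fields as 0. A condensed, self-contained sketch of that logic; the real helper also serves the other comparison operators:

  lt() {                                   # "is $1 older than $2?"
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
      (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
      (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    done
    return 1                               # equal -> not less-than
  }
  lt 1.15 2 && echo 'lcov predates 2'      # the comparison traced above
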
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:10:45.409 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:10:45.409 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:10:45.409 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:10:45.409 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:10:45.409 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:10:45.409 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:10:45.409 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:10:45.409 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:10:45.409 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:10:45.409 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:10:45.409 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:10:45.409 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:10:45.409 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:10:45.409 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:10:45.409 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:10:45.409 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:10:45.409 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:10:45.409 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:10:45.409 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:10:45.409 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:10:45.409 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:10:45.409 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:10:45.409 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:10:45.409 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:10:45.409 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:10:45.409 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:10:45.409 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:45.409 
05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:10:45.409 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:10:45.409 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:10:45.409 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:10:45.409 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:10:45.409 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:10:45.409 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:10:45.409 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:10:45.409 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:10:45.409 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:10:45.409 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:10:45.409 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:10:45.409 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:10:45.409 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:10:45.409 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:10:45.409 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:10:45.409 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:10:45.409 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:10:45.409 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:10:45.409 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:10:45.409 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:10:45.409 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:10:45.409 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:10:45.409 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:10:45.409 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:10:45.409 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:10:45.409 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:10:45.409 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:10:45.409 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:10:45.409 05:26:45 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:10:45.409 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:10:45.409 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:10:45.409 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:10:45.409 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:10:45.409 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:10:45.409 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:10:45.409 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:10:45.409 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:10:45.409 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:10:45.409 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:10:45.409 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:10:45.409 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:10:45.409 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:10:45.409 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:10:45.409 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:10:45.409 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:10:45.409 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:10:45.409 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:10:45.409 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:10:45.409 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:10:45.409 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:10:45.409 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:10:45.409 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:10:45.409 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:10:45.409 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:10:45.409 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:10:45.409 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:10:45.409 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:10:45.410 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:10:45.410 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:10:45.410 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:10:45.410 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:10:45.410 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:10:45.410 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:10:45.410 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:10:45.410 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:10:45.410 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:10:45.410 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:10:45.410 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:10:45.410 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:10:45.410 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:45.410 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:45.410 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:45.410 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:45.410 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:45.410 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:45.410 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:10:45.410 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:45.410 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:10:45.410 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:10:45.410 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:10:45.410 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:10:45.410 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:10:45.410 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 
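applications.sh, traced above, defines every SPDK binary as a bash array rather than a string; that is what made the earlier NVMF_APP prefixing at nvmf/common.sh@293 safe, since concatenating arrays preserves each element as one argument with no re-parsing. A sketch using this run's paths; the flags are copied from the example launch traced earlier, while pairing them with the stock nvmf_tgt binary here is an assumption:

  NVMF_APP=(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt)
  NVMF_TARGET_NS_CMD=(ip netns exec cvl_0_0_ns_spdk)
  NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")   # same idiom as nvmf/common.sh@293
  "${NVMF_APP[@]}" -i 0 -g 10000 -m 0xF &                  # expands to: ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt ...
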
00:10:45.410 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:10:45.410 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:10:45.410 #define SPDK_CONFIG_H 00:10:45.410 #define SPDK_CONFIG_AIO_FSDEV 1 00:10:45.410 #define SPDK_CONFIG_APPS 1 00:10:45.410 #define SPDK_CONFIG_ARCH native 00:10:45.410 #undef SPDK_CONFIG_ASAN 00:10:45.410 #undef SPDK_CONFIG_AVAHI 00:10:45.410 #undef SPDK_CONFIG_CET 00:10:45.410 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:10:45.410 #define SPDK_CONFIG_COVERAGE 1 00:10:45.410 #define SPDK_CONFIG_CROSS_PREFIX 00:10:45.410 #undef SPDK_CONFIG_CRYPTO 00:10:45.410 #undef SPDK_CONFIG_CRYPTO_MLX5 00:10:45.410 #undef SPDK_CONFIG_CUSTOMOCF 00:10:45.410 #undef SPDK_CONFIG_DAOS 00:10:45.410 #define SPDK_CONFIG_DAOS_DIR 00:10:45.410 #define SPDK_CONFIG_DEBUG 1 00:10:45.410 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:10:45.410 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:10:45.410 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:10:45.410 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:10:45.410 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:10:45.410 #undef SPDK_CONFIG_DPDK_UADK 00:10:45.410 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:45.410 #define SPDK_CONFIG_EXAMPLES 1 00:10:45.410 #undef SPDK_CONFIG_FC 00:10:45.410 #define SPDK_CONFIG_FC_PATH 00:10:45.410 #define SPDK_CONFIG_FIO_PLUGIN 1 00:10:45.410 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:10:45.410 #define SPDK_CONFIG_FSDEV 1 00:10:45.410 #undef SPDK_CONFIG_FUSE 00:10:45.410 #undef SPDK_CONFIG_FUZZER 00:10:45.410 #define SPDK_CONFIG_FUZZER_LIB 00:10:45.410 #undef SPDK_CONFIG_GOLANG 00:10:45.410 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:10:45.410 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:10:45.410 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:10:45.410 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:10:45.410 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:10:45.410 #undef SPDK_CONFIG_HAVE_LIBBSD 00:10:45.410 #undef SPDK_CONFIG_HAVE_LZ4 00:10:45.410 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:10:45.410 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:10:45.410 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:10:45.410 #define SPDK_CONFIG_IDXD 1 00:10:45.410 #define SPDK_CONFIG_IDXD_KERNEL 1 00:10:45.410 #undef SPDK_CONFIG_IPSEC_MB 00:10:45.410 #define SPDK_CONFIG_IPSEC_MB_DIR 00:10:45.410 #define SPDK_CONFIG_ISAL 1 00:10:45.410 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:10:45.410 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:10:45.410 #define SPDK_CONFIG_LIBDIR 00:10:45.410 #undef SPDK_CONFIG_LTO 00:10:45.410 #define SPDK_CONFIG_MAX_LCORES 128 00:10:45.410 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:10:45.410 #define SPDK_CONFIG_NVME_CUSE 1 00:10:45.410 #undef SPDK_CONFIG_OCF 00:10:45.410 #define SPDK_CONFIG_OCF_PATH 00:10:45.410 #define SPDK_CONFIG_OPENSSL_PATH 00:10:45.410 #undef SPDK_CONFIG_PGO_CAPTURE 00:10:45.410 #define SPDK_CONFIG_PGO_DIR 00:10:45.410 #undef SPDK_CONFIG_PGO_USE 00:10:45.410 #define SPDK_CONFIG_PREFIX /usr/local 00:10:45.410 #undef SPDK_CONFIG_RAID5F 00:10:45.410 #undef SPDK_CONFIG_RBD 00:10:45.410 #define SPDK_CONFIG_RDMA 1 00:10:45.410 #define SPDK_CONFIG_RDMA_PROV verbs 00:10:45.410 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:10:45.410 #define 
SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:10:45.410 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:10:45.410 #define SPDK_CONFIG_SHARED 1 00:10:45.410 #undef SPDK_CONFIG_SMA 00:10:45.410 #define SPDK_CONFIG_TESTS 1 00:10:45.410 #undef SPDK_CONFIG_TSAN 00:10:45.410 #define SPDK_CONFIG_UBLK 1 00:10:45.410 #define SPDK_CONFIG_UBSAN 1 00:10:45.410 #undef SPDK_CONFIG_UNIT_TESTS 00:10:45.410 #undef SPDK_CONFIG_URING 00:10:45.410 #define SPDK_CONFIG_URING_PATH 00:10:45.410 #undef SPDK_CONFIG_URING_ZNS 00:10:45.410 #undef SPDK_CONFIG_USDT 00:10:45.410 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:10:45.410 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:10:45.410 #define SPDK_CONFIG_VFIO_USER 1 00:10:45.410 #define SPDK_CONFIG_VFIO_USER_DIR 00:10:45.410 #define SPDK_CONFIG_VHOST 1 00:10:45.410 #define SPDK_CONFIG_VIRTIO 1 00:10:45.410 #undef SPDK_CONFIG_VTUNE 00:10:45.410 #define SPDK_CONFIG_VTUNE_DIR 00:10:45.410 #define SPDK_CONFIG_WERROR 1 00:10:45.410 #define SPDK_CONFIG_WPDK_DIR 00:10:45.410 #undef SPDK_CONFIG_XNVME 00:10:45.410 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:10:45.410 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:10:45.410 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:45.410 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:45.410 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:45.410 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:45.410 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:45.410 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:45.410 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:45.410 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:45.410 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:45.410 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:45.410 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:45.410 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:45.410 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:45.410 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:45.410 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:10:45.411 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:45.411 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:10:45.411 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:10:45.411 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:10:45.411 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:10:45.411 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:10:45.411 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:10:45.411 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:10:45.411 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:10:45.411 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:10:45.411 05:26:45 
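The debug-build probe traced at applications.sh@22-23 above never spawns grep: $(<file) slurps the file into the expansion and [[ ... == *pattern* ]] performs an in-shell substring match (the backslash-heavy pattern in the trace is just xtrace escaping each literal character). The same check in isolation, using this workspace's config.h path:

  config=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h
  if [[ -e $config && $(<"$config") == *'#define SPDK_CONFIG_DEBUG'* ]]; then
    echo 'debug build'
  fi
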
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:10:45.411 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:10:45.411 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:10:45.411 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:10:45.411 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:10:45.411 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:10:45.411 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:10:45.411 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:10:45.411 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:10:45.411 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:10:45.411 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:10:45.411 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:10:45.411 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:10:45.411 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:10:45.411 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:10:45.411 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:10:45.411 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:10:45.411 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:10:45.411 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:10:45.411 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:10:45.411 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:10:45.411 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:10:45.411 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:10:45.411 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:10:45.411 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:10:45.411 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:10:45.411 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:10:45.411 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:10:45.411 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:10:45.411 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 
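The pm/common records just above pick which power/resource monitors this node runs: two are unconditional, and the temperature and BMC collectors are appended only after confirming the host is Linux, is not a QEMU guest (the dotted string in the trace is a hardware identity string being tested), and is not a container. A sketch of that selection; the uname and /.dockerenv checks are the ones traced, while the DMI product-name path is an assumption:

  declare -A MONITOR_RESOURCES_SUDO=(
    [collect-bmc-pm]=1 [collect-cpu-load]=0 [collect-cpu-temp]=0 [collect-vmstat]=0
  )
  MONITOR_RESOURCES=(collect-cpu-load collect-vmstat)
  if [[ $(uname -s) == Linux ]] &&
     [[ $(cat /sys/class/dmi/id/product_name 2>/dev/null) != QEMU ]] &&   # assumed DMI source
     [[ ! -e /.dockerenv ]]; then
    MONITOR_RESOURCES+=(collect-cpu-temp collect-bmc-pm)
  fi
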
00:10:45.411 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:10:45.411 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:10:45.411 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:10:45.411 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:10:45.411 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:10:45.411 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:10:45.411 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:10:45.411 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:10:45.411 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:10:45.411 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:10:45.411 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:10:45.411 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:10:45.411 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:10:45.411 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:10:45.411 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:10:45.411 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:10:45.411 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:10:45.411 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:10:45.411 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:10:45.411 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:10:45.411 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:10:45.411 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:10:45.411 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:10:45.411 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:10:45.411 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:10:45.411 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:10:45.411 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:10:45.411 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:10:45.411 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:10:45.411 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:10:45.411 05:26:45 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:10:45.411 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:10:45.411 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:10:45.411 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:10:45.411 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:10:45.411 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:10:45.411 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:10:45.411 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:10:45.411 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:10:45.411 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:10:45.411 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:10:45.411 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:10:45.411 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:10:45.411 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:10:45.411 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:10:45.411 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:10:45.411 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:10:45.411 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:10:45.411 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:10:45.411 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:10:45.411 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:10:45.411 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:10:45.411 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:10:45.411 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:10:45.411 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:10:45.411 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:10:45.411 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:10:45.411 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:10:45.411 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:10:45.411 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 
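The long run of ': 0' / 'export SPDK_TEST_*' records above and below is the xtrace signature of bash's set-a-default-then-export idiom: ':' is a no-op command, and ${VAR:=value} assigns only when VAR is unset or empty, so the CI job can pre-seed any flag and autotest_common.sh fills in the rest. Two flags from this run as an example of what produces those trace lines:

  : "${SPDK_TEST_NVMF:=1}"              # traces as '# : 1' when the job did not set it
  : "${SPDK_TEST_NVMF_TRANSPORT:=tcp}"  # traces as '# : tcp'
  export SPDK_TEST_NVMF SPDK_TEST_NVMF_TRANSPORT
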
00:10:45.411 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:10:45.411 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:10:45.411 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : v22.11.4 00:10:45.411 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:10:45.411 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:10:45.411 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:10:45.411 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:10:45.411 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:10:45.411 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:10:45.412 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:10:45.412 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:10:45.412 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:10:45.412 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:10:45.412 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:10:45.412 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:10:45.412 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:10:45.412 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:10:45.412 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:10:45.412 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:10:45.412 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:10:45.412 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:10:45.412 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:10:45.412 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:10:45.412 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:10:45.412 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:10:45.412 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:10:45.412 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:10:45.412 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:10:45.412 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:10:45.412 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:10:45.412 
05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:10:45.412 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:10:45.412 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:10:45.412 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:10:45.412 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:10:45.412 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:10:45.412 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:10:45.412 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:10:45.412 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:10:45.412 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:10:45.412 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:45.412 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:45.412 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:10:45.412 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:10:45.412 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:45.412 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:45.412 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:45.412 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:45.412 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:10:45.412 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:10:45.412 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:45.412 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:45.412 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:10:45.412 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:10:45.412 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:45.412 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:45.412 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:45.412 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:45.412 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:10:45.412 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:10:45.412 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:10:45.412 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:10:45.412 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:45.412 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:45.412 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:45.412 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:45.412 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # 
'[' -z /var/spdk/dependencies ']' 00:10:45.412 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:10:45.412 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:45.412 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:45.412 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:45.412 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:45.412 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:45.412 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:45.412 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:45.412 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:45.412 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:45.412 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:45.412 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:45.412 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:45.412 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:10:45.412 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:10:45.413 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:10:45.413 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:10:45.413 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:10:45.413 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:10:45.413 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:10:45.413 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:10:45.413 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:10:45.413 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:10:45.413 05:26:45 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:10:45.413 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:10:45.413 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:10:45.413 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:10:45.413 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:10:45.413 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:10:45.413 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:10:45.413 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j96 00:10:45.413 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:10:45.413 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:10:45.413 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:10:45.413 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:10:45.413 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:10:45.413 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:10:45.413 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:10:45.413 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 199991 ]] 00:10:45.413 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 199991 00:10:45.413 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 00:10:45.413 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:10:45.413 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:10:45.413 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:10:45.413 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:10:45.413 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:10:45.413 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:10:45.413 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:10:45.413 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.vXfnbZ 00:10:45.413 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:10:45.413 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:10:45.413 05:26:45 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:10:45.413 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.vXfnbZ/tests/target /tmp/spdk.vXfnbZ 00:10:45.413 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:10:45.413 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:45.413 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:10:45.413 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:10:45.413 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:10:45.413 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:10:45.413 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:10:45.413 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=67108864 00:10:45.413 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:10:45.413 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:45.413 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:10:45.413 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:10:45.413 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=722997248 00:10:45.413 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:10:45.413 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=4561432576 00:10:45.413 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:45.413 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:10:45.413 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:10:45.413 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=88905306112 00:10:45.413 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=95552405504 00:10:45.413 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=6647099392 00:10:45.413 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:45.413 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:45.413 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:45.413 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # 
avails["$mount"]=47766171648 00:10:45.413 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=47776202752 00:10:45.413 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=10031104 00:10:45.413 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:45.413 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:45.413 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:45.413 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=19087470592 00:10:45.413 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=19110481920 00:10:45.413 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=23011328 00:10:45.413 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:45.413 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:45.413 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:45.413 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=47775887360 00:10:45.413 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=47776202752 00:10:45.413 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=315392 00:10:45.413 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:45.413 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:45.413 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:45.413 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=9555226624 00:10:45.413 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=9555238912 00:10:45.413 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:10:45.413 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:45.413 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:10:45.413 * Looking for test storage... 
00:10:45.413 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:10:45.413 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:10:45.413 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:45.414 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:10:45.414 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:10:45.414 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=88905306112 00:10:45.414 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:10:45.414 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:10:45.414 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:10:45.414 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:10:45.414 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:10:45.414 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=8861691904 00:10:45.414 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:10:45.414 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:45.414 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:45.414 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:45.414 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:45.414 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:10:45.414 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1698 -- # set -o errtrace 00:10:45.414 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:10:45.414 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:10:45.414 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:10:45.414 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1703 -- # true 00:10:45.414 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # xtrace_fd 00:10:45.414 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:10:45.414 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:10:45.414 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:10:45.414 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:10:45.414 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:10:45.414 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:10:45.414 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:10:45.414 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:10:45.414 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:45.414 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:10:45.414 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:45.674 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:45.674 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:45.674 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:45.674 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:45.674 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:45.674 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:45.674 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:45.674 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:45.674 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:45.674 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:45.674 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:45.674 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:45.674 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:45.674 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:45.674 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:45.674 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:45.674 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:45.674 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:45.674 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:45.674 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:45.674 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:45.674 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:45.674 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:45.674 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:45.674 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:45.674 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:45.674 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:45.674 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:45.674 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:45.674 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:45.674 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:45.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:45.674 --rc genhtml_branch_coverage=1 00:10:45.674 --rc genhtml_function_coverage=1 00:10:45.674 --rc genhtml_legend=1 00:10:45.674 --rc geninfo_all_blocks=1 00:10:45.674 --rc geninfo_unexecuted_blocks=1 00:10:45.674 00:10:45.674 ' 00:10:45.674 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:45.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:45.674 --rc genhtml_branch_coverage=1 00:10:45.674 --rc genhtml_function_coverage=1 00:10:45.674 --rc genhtml_legend=1 00:10:45.674 --rc geninfo_all_blocks=1 00:10:45.674 --rc geninfo_unexecuted_blocks=1 00:10:45.674 00:10:45.674 ' 00:10:45.674 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:45.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:45.674 --rc genhtml_branch_coverage=1 00:10:45.674 --rc genhtml_function_coverage=1 00:10:45.674 --rc genhtml_legend=1 00:10:45.674 --rc geninfo_all_blocks=1 00:10:45.674 --rc geninfo_unexecuted_blocks=1 00:10:45.674 00:10:45.674 ' 00:10:45.674 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:45.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:45.674 --rc genhtml_branch_coverage=1 00:10:45.674 --rc genhtml_function_coverage=1 00:10:45.674 --rc genhtml_legend=1 00:10:45.674 --rc geninfo_all_blocks=1 00:10:45.674 --rc geninfo_unexecuted_blocks=1 00:10:45.674 00:10:45.674 ' 00:10:45.674 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:45.674 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@7 -- # uname -s 00:10:45.674 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:45.674 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:45.674 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:45.674 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:45.674 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:45.674 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:45.675 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:45.675 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:45.675 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:45.675 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:45.675 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:10:45.675 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:10:45.675 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:45.675 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:45.675 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:45.675 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:45.675 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:45.675 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:45.675 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:45.675 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:45.675 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:45.675 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:45.675 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:45.675 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:45.675 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:45.675 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:45.675 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:10:45.675 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:45.675 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:45.675 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:45.675 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:45.675 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:45.675 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:45.675 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:45.675 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:45.675 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:45.675 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:45.675 05:26:45 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:10:45.675 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:10:45.675 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:10:45.675 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:45.675 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:45.675 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:45.675 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:45.675 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:45.675 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:45.675 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:45.675 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:45.675 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:45.675 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:45.675 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:10:45.675 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:52.250 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:52.251 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:10:52.251 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:52.251 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:52.251 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:52.251 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:52.251 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:52.251 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:10:52.251 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:52.251 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:10:52.251 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:10:52.251 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:10:52.251 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:10:52.251 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:10:52.251 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:10:52.251 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:52.251 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:52.251 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:52.251 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:52.251 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:52.251 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:52.251 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:52.251 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:52.251 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:52.251 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:52.251 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:52.251 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:52.251 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:52.251 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:52.251 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:52.251 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:52.251 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:52.251 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:52.251 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:52.251 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:52.251 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:52.251 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:52.251 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:52.251 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:52.251 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:52.251 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:52.251 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:52.251 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:52.251 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:52.251 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:52.251 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:52.251 05:26:51 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:52.251 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:52.251 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:52.251 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:52.251 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:52.251 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:52.251 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:52.251 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:52.251 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:52.251 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:52.251 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:52.251 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:52.251 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:52.251 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:52.251 Found net devices under 0000:af:00.0: cvl_0_0 00:10:52.251 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:52.251 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:52.251 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:52.251 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:52.251 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:52.251 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:52.251 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:52.251 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:52.251 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:52.251 Found net devices under 0000:af:00.1: cvl_0_1 00:10:52.251 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:52.251 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:52.251 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:10:52.251 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:52.251 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:52.251 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:52.251 05:26:51 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:52.251 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:52.251 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:52.251 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:52.251 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:52.251 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:52.251 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:52.251 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:52.251 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:52.251 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:52.251 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:52.251 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:52.251 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:52.251 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:52.251 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:52.251 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:52.251 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:52.251 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:52.251 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:52.251 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:52.251 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:52.251 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:52.251 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:52.251 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:52.251 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.265 ms 00:10:52.251 00:10:52.251 --- 10.0.0.2 ping statistics --- 00:10:52.251 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:52.251 rtt min/avg/max/mdev = 0.265/0.265/0.265/0.000 ms 00:10:52.251 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:52.251 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:52.251 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:10:52.251 00:10:52.251 --- 10.0.0.1 ping statistics --- 00:10:52.251 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:52.251 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:10:52.251 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:52.251 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:10:52.252 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:52.252 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:52.252 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:52.252 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:52.252 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:52.252 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:52.252 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:52.252 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:10:52.252 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:52.252 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:52.252 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:52.252 ************************************ 00:10:52.252 START TEST nvmf_filesystem_no_in_capsule 00:10:52.252 ************************************ 00:10:52.252 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:10:52.252 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:10:52.252 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:52.252 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:52.252 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:52.252 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:52.252 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=203200 00:10:52.252 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 203200 00:10:52.252 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:52.252 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 203200 ']' 00:10:52.252 05:26:51 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:52.252 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:52.252 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:52.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:52.252 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:52.252 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:52.252 [2024-12-13 05:26:51.697099] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:10:52.252 [2024-12-13 05:26:51.697146] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:52.252 [2024-12-13 05:26:51.777156] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:52.252 [2024-12-13 05:26:51.800532] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:52.252 [2024-12-13 05:26:51.800570] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:52.252 [2024-12-13 05:26:51.800577] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:52.252 [2024-12-13 05:26:51.800583] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:52.252 [2024-12-13 05:26:51.800587] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
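
At this point nvmf_tgt is up inside the cvl_0_0_ns_spdk namespace and listening on /var/tmp/spdk.sock. The rpc_cmd calls in the trace below then provision the target over that socket: create the TCP transport, a malloc bdev, a subsystem, a namespace, and a listener. Condensed into the equivalent standalone invocations of SPDK's stock scripts/rpc.py client (a sketch; the test issues the same RPCs through its rpc_cmd wrapper, and the transport flags are copied verbatim from the trace):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    $rpc nvmf_create_transport -t tcp -o -u 8192 -c 0   # -c 0: in-capsule data size 0, matching this no_in_capsule variant
    $rpc bdev_malloc_create 512 512 -b Malloc1          # 512 MiB RAM-backed bdev with 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
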
00:10:52.252 [2024-12-13 05:26:51.802102] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:10:52.252 [2024-12-13 05:26:51.802210] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:10:52.252 [2024-12-13 05:26:51.802302] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:52.252 [2024-12-13 05:26:51.802304] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:10:52.252 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:52.252 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:10:52.252 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:52.252 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:52.252 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:52.252 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:52.252 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:52.252 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:10:52.252 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.252 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:52.252 [2024-12-13 05:26:51.942523] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:52.252 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.252 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:52.252 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.252 05:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:52.252 Malloc1 00:10:52.252 05:26:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.252 05:26:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:52.252 05:26:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.252 05:26:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:52.252 05:26:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.252 05:26:52 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:52.252 05:26:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.252 05:26:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:52.252 05:26:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.252 05:26:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:52.252 05:26:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.252 05:26:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:52.252 [2024-12-13 05:26:52.113589] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:52.252 05:26:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.252 05:26:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:52.252 05:26:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:10:52.252 05:26:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:10:52.252 05:26:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:10:52.252 05:26:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:10:52.252 05:26:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:52.252 05:26:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.252 05:26:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:52.252 05:26:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.252 05:26:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:10:52.252 { 00:10:52.252 "name": "Malloc1", 00:10:52.252 "aliases": [ 00:10:52.252 "dd0bfa79-254f-4696-aa27-feebbe78034f" 00:10:52.252 ], 00:10:52.252 "product_name": "Malloc disk", 00:10:52.252 "block_size": 512, 00:10:52.252 "num_blocks": 1048576, 00:10:52.252 "uuid": "dd0bfa79-254f-4696-aa27-feebbe78034f", 00:10:52.252 "assigned_rate_limits": { 00:10:52.252 "rw_ios_per_sec": 0, 00:10:52.252 "rw_mbytes_per_sec": 0, 00:10:52.252 "r_mbytes_per_sec": 0, 00:10:52.252 "w_mbytes_per_sec": 0 00:10:52.252 }, 00:10:52.252 "claimed": true, 00:10:52.252 "claim_type": "exclusive_write", 00:10:52.252 "zoned": false, 00:10:52.252 "supported_io_types": { 00:10:52.252 "read": 
true, 00:10:52.252 "write": true, 00:10:52.252 "unmap": true, 00:10:52.252 "flush": true, 00:10:52.252 "reset": true, 00:10:52.252 "nvme_admin": false, 00:10:52.252 "nvme_io": false, 00:10:52.252 "nvme_io_md": false, 00:10:52.252 "write_zeroes": true, 00:10:52.252 "zcopy": true, 00:10:52.252 "get_zone_info": false, 00:10:52.252 "zone_management": false, 00:10:52.253 "zone_append": false, 00:10:52.253 "compare": false, 00:10:52.253 "compare_and_write": false, 00:10:52.253 "abort": true, 00:10:52.253 "seek_hole": false, 00:10:52.253 "seek_data": false, 00:10:52.253 "copy": true, 00:10:52.253 "nvme_iov_md": false 00:10:52.253 }, 00:10:52.253 "memory_domains": [ 00:10:52.253 { 00:10:52.253 "dma_device_id": "system", 00:10:52.253 "dma_device_type": 1 00:10:52.253 }, 00:10:52.253 { 00:10:52.253 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:52.253 "dma_device_type": 2 00:10:52.253 } 00:10:52.253 ], 00:10:52.253 "driver_specific": {} 00:10:52.253 } 00:10:52.253 ]' 00:10:52.253 05:26:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:10:52.253 05:26:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:10:52.253 05:26:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:10:52.253 05:26:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:10:52.253 05:26:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:10:52.253 05:26:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:10:52.253 05:26:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:52.253 05:26:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:53.632 05:26:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:53.632 05:26:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:10:53.632 05:26:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:53.632 05:26:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:53.632 05:26:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:10:55.538 05:26:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:55.538 05:26:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:55.538 05:26:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c 
SPDKISFASTANDAWESOME 00:10:55.538 05:26:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:55.538 05:26:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:55.538 05:26:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:10:55.538 05:26:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:55.538 05:26:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:55.538 05:26:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:55.538 05:26:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:55.538 05:26:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:55.538 05:26:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:55.538 05:26:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:55.538 05:26:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:55.538 05:26:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:55.538 05:26:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:55.538 05:26:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:55.797 05:26:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:56.734 05:26:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:57.672 05:26:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:10:57.672 05:26:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:57.672 05:26:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:57.672 05:26:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:57.672 05:26:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:57.672 ************************************ 00:10:57.672 START TEST filesystem_ext4 00:10:57.672 ************************************ 00:10:57.672 05:26:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 
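Everything up to this point exports a 512 MiB malloc RAM disk (1048576 blocks of 512 B, matching the bdev_get_bdevs dump above) over NVMe/TCP and prepares it on the host. A condensed sketch of the same sequence using the rpc.py equivalents of the rpc_cmd calls in this log; the scripts/rpc.py path is an assumption, while the names, addresses, and flags are the ones this run actually used:
  # target side: TCP transport with in-capsule data disabled (-c 0), then the export chain
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
  ./scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # host side: connect, then carve one GPT partition spanning the whole namespace
  # (the actual run also passes --hostnqn and --hostid for the host identity)
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
  partprobe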
00:10:57.672 05:26:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:57.672 05:26:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:57.672 05:26:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:57.672 05:26:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:10:57.673 05:26:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:57.673 05:26:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:10:57.673 05:26:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:10:57.673 05:26:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:10:57.673 05:26:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:10:57.673 05:26:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:57.673 mke2fs 1.47.0 (5-Feb-2023) 00:10:57.673 Discarding device blocks: 0/522240 done 00:10:57.673 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:57.673 Filesystem UUID: 56a15460-bb97-425f-915a-a5f170666a37 00:10:57.673 Superblock backups stored on blocks: 00:10:57.673 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:57.673 00:10:57.673 Allocating group tables: 0/64 done 00:10:57.673 Writing inode tables: 0/64 done 00:10:57.673 Creating journal (8192 blocks): done 00:10:57.673 Writing superblocks and filesystem accounting information: 0/64 done 00:10:57.673 00:10:57.673 05:26:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:10:57.673 05:26:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:02.946 05:27:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:02.946 05:27:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:11:02.946 05:27:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:02.946 05:27:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:11:02.946 05:27:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:02.946 05:27:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:03.205 
05:27:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 203200 00:11:03.205 05:27:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:03.205 05:27:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:03.205 05:27:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:03.205 05:27:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:03.205 00:11:03.205 real 0m5.550s 00:11:03.205 user 0m0.029s 00:11:03.205 sys 0m0.106s 00:11:03.205 05:27:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:03.205 05:27:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:03.205 ************************************ 00:11:03.205 END TEST filesystem_ext4 00:11:03.205 ************************************ 00:11:03.205 05:27:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:03.205 05:27:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:03.206 05:27:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:03.206 05:27:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:03.206 ************************************ 00:11:03.206 START TEST filesystem_btrfs 00:11:03.206 ************************************ 00:11:03.206 05:27:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:03.206 05:27:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:03.206 05:27:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:03.206 05:27:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:03.206 05:27:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:11:03.206 05:27:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:03.206 05:27:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:11:03.206 05:27:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:11:03.206 05:27:03 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:11:03.206 05:27:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:11:03.206 05:27:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:03.465 btrfs-progs v6.8.1 00:11:03.465 See https://btrfs.readthedocs.io for more information. 00:11:03.465 00:11:03.465 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:11:03.465 NOTE: several default settings have changed in version 5.15, please make sure 00:11:03.465 this does not affect your deployments: 00:11:03.465 - DUP for metadata (-m dup) 00:11:03.465 - enabled no-holes (-O no-holes) 00:11:03.465 - enabled free-space-tree (-R free-space-tree) 00:11:03.465 00:11:03.465 Label: (null) 00:11:03.465 UUID: e19751ef-6e63-4226-b7a7-fabf29d56dd8 00:11:03.465 Node size: 16384 00:11:03.465 Sector size: 4096 (CPU page size: 4096) 00:11:03.465 Filesystem size: 510.00MiB 00:11:03.465 Block group profiles: 00:11:03.465 Data: single 8.00MiB 00:11:03.465 Metadata: DUP 32.00MiB 00:11:03.465 System: DUP 8.00MiB 00:11:03.465 SSD detected: yes 00:11:03.465 Zoned device: no 00:11:03.465 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:03.465 Checksum: crc32c 00:11:03.465 Number of devices: 1 00:11:03.465 Devices: 00:11:03.465 ID SIZE PATH 00:11:03.465 1 510.00MiB /dev/nvme0n1p1 00:11:03.465 00:11:03.465 05:27:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:11:03.465 05:27:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:03.465 05:27:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:03.465 05:27:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:11:03.465 05:27:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:03.724 05:27:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:11:03.724 05:27:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:03.724 05:27:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:03.724 05:27:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 203200 00:11:03.724 05:27:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:03.724 05:27:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:03.724 05:27:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:03.724 
05:27:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:03.724 00:11:03.724 real 0m0.465s 00:11:03.724 user 0m0.026s 00:11:03.724 sys 0m0.157s 00:11:03.724 05:27:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:03.724 05:27:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:03.724 ************************************ 00:11:03.724 END TEST filesystem_btrfs 00:11:03.724 ************************************ 00:11:03.724 05:27:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:11:03.724 05:27:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:03.724 05:27:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:03.724 05:27:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:03.724 ************************************ 00:11:03.724 START TEST filesystem_xfs 00:11:03.724 ************************************ 00:11:03.724 05:27:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:11:03.724 05:27:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:03.724 05:27:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:03.724 05:27:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:03.724 05:27:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:11:03.724 05:27:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:03.724 05:27:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:11:03.724 05:27:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:11:03.724 05:27:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:11:03.724 05:27:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:11:03.724 05:27:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:03.724 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:03.724 = sectsz=512 attr=2, projid32bit=1 00:11:03.724 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:03.724 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:03.724 data 
= bsize=4096 blocks=130560, imaxpct=25 00:11:03.724 = sunit=0 swidth=0 blks 00:11:03.724 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:03.724 log =internal log bsize=4096 blocks=16384, version=2 00:11:03.724 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:03.724 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:04.662 Discarding blocks...Done. 00:11:04.662 05:27:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:11:04.662 05:27:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:07.948 05:27:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:07.948 05:27:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:11:07.948 05:27:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:07.948 05:27:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:11:07.948 05:27:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:11:07.948 05:27:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:07.948 05:27:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 203200 00:11:07.948 05:27:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:07.948 05:27:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:07.948 05:27:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:07.948 05:27:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:07.948 00:11:07.948 real 0m3.820s 00:11:07.948 user 0m0.032s 00:11:07.948 sys 0m0.114s 00:11:07.948 05:27:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:07.949 05:27:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:07.949 ************************************ 00:11:07.949 END TEST filesystem_xfs 00:11:07.949 ************************************ 00:11:07.949 05:27:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:07.949 05:27:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:07.949 05:27:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:07.949 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:07.949 05:27:07 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:07.949 05:27:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:11:07.949 05:27:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:07.949 05:27:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:08.208 05:27:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:08.208 05:27:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:08.208 05:27:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:11:08.208 05:27:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:08.208 05:27:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.208 05:27:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:08.208 05:27:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.208 05:27:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:08.208 05:27:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 203200 00:11:08.208 05:27:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 203200 ']' 00:11:08.208 05:27:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 203200 00:11:08.208 05:27:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:11:08.208 05:27:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:08.208 05:27:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 203200 00:11:08.208 05:27:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:08.208 05:27:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:08.208 05:27:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 203200' 00:11:08.208 killing process with pid 203200 00:11:08.208 05:27:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 203200 00:11:08.208 05:27:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@978 -- # wait 203200 00:11:08.467 05:27:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:08.467 00:11:08.467 real 0m16.721s 00:11:08.467 user 1m5.818s 00:11:08.467 sys 0m1.556s 00:11:08.468 05:27:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:08.468 05:27:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:08.468 ************************************ 00:11:08.468 END TEST nvmf_filesystem_no_in_capsule 00:11:08.468 ************************************ 00:11:08.468 05:27:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:11:08.468 05:27:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:08.468 05:27:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:08.468 05:27:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:08.468 ************************************ 00:11:08.468 START TEST nvmf_filesystem_in_capsule 00:11:08.468 ************************************ 00:11:08.468 05:27:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:11:08.468 05:27:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:11:08.468 05:27:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:08.468 05:27:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:08.468 05:27:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:08.468 05:27:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:08.468 05:27:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=206375 00:11:08.468 05:27:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 206375 00:11:08.468 05:27:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:08.468 05:27:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 206375 ']' 00:11:08.468 05:27:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:08.468 05:27:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:08.468 05:27:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:08.468 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
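The second pass (nvmf_filesystem_in_capsule) repeats the same ext4/btrfs/xfs matrix; the only functional difference is the in-capsule data size handed to run_test, which ends up in the transport creation. Sketched side by side with the flag spelling from the rpc_cmd lines of this log; reading -c as the NVMe/TCP in-capsule data size is an inference from the test names, not something the log states:
  # no_in_capsule run: write data always moves in separate data transfers
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
  # in_capsule run: writes up to 4096 B can ride inside the command capsule itself
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096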
00:11:08.468 05:27:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:08.468 05:27:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:08.727 [2024-12-13 05:27:08.489121] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:11:08.727 [2024-12-13 05:27:08.489161] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:08.727 [2024-12-13 05:27:08.565084] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:08.727 [2024-12-13 05:27:08.587979] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:08.727 [2024-12-13 05:27:08.588017] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:08.727 [2024-12-13 05:27:08.588024] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:08.727 [2024-12-13 05:27:08.588029] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:08.727 [2024-12-13 05:27:08.588034] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:08.727 [2024-12-13 05:27:08.589491] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:11:08.727 [2024-12-13 05:27:08.589582] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:11:08.727 [2024-12-13 05:27:08.589686] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:11:08.727 [2024-12-13 05:27:08.589688] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:11:08.727 05:27:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:08.727 05:27:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:11:08.727 05:27:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:08.727 05:27:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:08.727 05:27:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:08.727 05:27:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:08.727 05:27:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:08.727 05:27:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:11:08.727 05:27:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.727 05:27:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:08.727 [2024-12-13 05:27:08.725422] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:08.727 05:27:08 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.727 05:27:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:08.727 05:27:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.727 05:27:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:08.987 Malloc1 00:11:08.987 05:27:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.987 05:27:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:08.987 05:27:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.987 05:27:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:08.987 05:27:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.987 05:27:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:08.987 05:27:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.987 05:27:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:08.987 05:27:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.987 05:27:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:08.987 05:27:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.987 05:27:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:08.987 [2024-12-13 05:27:08.878603] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:08.987 05:27:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.987 05:27:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:08.987 05:27:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:11:08.987 05:27:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:11:08.987 05:27:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:11:08.987 05:27:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:11:08.987 05:27:08 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:08.987 05:27:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.987 05:27:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:08.987 05:27:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.987 05:27:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:11:08.987 { 00:11:08.987 "name": "Malloc1", 00:11:08.987 "aliases": [ 00:11:08.987 "4a0f1648-975f-4610-8d9e-966a2b11014c" 00:11:08.987 ], 00:11:08.987 "product_name": "Malloc disk", 00:11:08.987 "block_size": 512, 00:11:08.987 "num_blocks": 1048576, 00:11:08.987 "uuid": "4a0f1648-975f-4610-8d9e-966a2b11014c", 00:11:08.987 "assigned_rate_limits": { 00:11:08.987 "rw_ios_per_sec": 0, 00:11:08.987 "rw_mbytes_per_sec": 0, 00:11:08.987 "r_mbytes_per_sec": 0, 00:11:08.987 "w_mbytes_per_sec": 0 00:11:08.987 }, 00:11:08.987 "claimed": true, 00:11:08.987 "claim_type": "exclusive_write", 00:11:08.987 "zoned": false, 00:11:08.987 "supported_io_types": { 00:11:08.987 "read": true, 00:11:08.987 "write": true, 00:11:08.987 "unmap": true, 00:11:08.987 "flush": true, 00:11:08.987 "reset": true, 00:11:08.987 "nvme_admin": false, 00:11:08.987 "nvme_io": false, 00:11:08.987 "nvme_io_md": false, 00:11:08.987 "write_zeroes": true, 00:11:08.987 "zcopy": true, 00:11:08.987 "get_zone_info": false, 00:11:08.987 "zone_management": false, 00:11:08.987 "zone_append": false, 00:11:08.987 "compare": false, 00:11:08.987 "compare_and_write": false, 00:11:08.987 "abort": true, 00:11:08.987 "seek_hole": false, 00:11:08.987 "seek_data": false, 00:11:08.987 "copy": true, 00:11:08.987 "nvme_iov_md": false 00:11:08.987 }, 00:11:08.987 "memory_domains": [ 00:11:08.987 { 00:11:08.987 "dma_device_id": "system", 00:11:08.987 "dma_device_type": 1 00:11:08.987 }, 00:11:08.987 { 00:11:08.987 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:08.987 "dma_device_type": 2 00:11:08.987 } 00:11:08.987 ], 00:11:08.987 "driver_specific": {} 00:11:08.987 } 00:11:08.987 ]' 00:11:08.987 05:27:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:11:08.987 05:27:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:11:08.987 05:27:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:11:08.987 05:27:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:11:08.987 05:27:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:11:08.987 05:27:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:11:08.987 05:27:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:08.987 05:27:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:10.364 05:27:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:10.364 05:27:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:11:10.364 05:27:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:10.364 05:27:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:10.364 05:27:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:11:12.269 05:27:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:12.269 05:27:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:12.269 05:27:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:12.269 05:27:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:12.269 05:27:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:12.269 05:27:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:11:12.269 05:27:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:12.269 05:27:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:12.269 05:27:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:12.269 05:27:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:12.269 05:27:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:12.269 05:27:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:12.269 05:27:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:12.269 05:27:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:12.269 05:27:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:12.269 05:27:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:12.269 05:27:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:12.527 05:27:12 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:13.095 05:27:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:14.032 05:27:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:11:14.032 05:27:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:14.032 05:27:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:14.032 05:27:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:14.032 05:27:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:14.032 ************************************ 00:11:14.032 START TEST filesystem_in_capsule_ext4 00:11:14.032 ************************************ 00:11:14.032 05:27:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:14.032 05:27:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:14.032 05:27:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:14.032 05:27:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:14.032 05:27:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:11:14.032 05:27:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:14.032 05:27:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:11:14.032 05:27:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:11:14.032 05:27:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:11:14.032 05:27:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:11:14.032 05:27:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:14.032 mke2fs 1.47.0 (5-Feb-2023) 00:11:14.291 Discarding device blocks: 0/522240 done 00:11:14.291 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:14.291 Filesystem UUID: 96d4b491-a8c9-4335-bd82-b14d962c5aae 00:11:14.291 Superblock backups stored on blocks: 00:11:14.291 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:14.291 00:11:14.291 Allocating group tables: 0/64 done 00:11:14.291 Writing inode tables: 
0/64 done 00:11:14.291 Creating journal (8192 blocks): done 00:11:14.291 Writing superblocks and filesystem accounting information: 0/64 done 00:11:14.291 00:11:14.291 05:27:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:11:14.291 05:27:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:20.856 05:27:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:20.856 05:27:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:11:20.856 05:27:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:20.856 05:27:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:11:20.856 05:27:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:20.856 05:27:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:20.856 05:27:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 206375 00:11:20.856 05:27:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:20.856 05:27:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:20.856 05:27:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:20.856 05:27:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:20.856 00:11:20.856 real 0m5.721s 00:11:20.856 user 0m0.033s 00:11:20.856 sys 0m0.058s 00:11:20.856 05:27:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:20.856 05:27:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:20.856 ************************************ 00:11:20.856 END TEST filesystem_in_capsule_ext4 00:11:20.856 ************************************ 00:11:20.856 05:27:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:20.856 05:27:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:20.856 05:27:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:20.856 05:27:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:20.856 
************************************ 00:11:20.856 START TEST filesystem_in_capsule_btrfs 00:11:20.856 ************************************ 00:11:20.856 05:27:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:20.856 05:27:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:20.856 05:27:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:20.856 05:27:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:20.856 05:27:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:11:20.857 05:27:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:20.857 05:27:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:11:20.857 05:27:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:11:20.857 05:27:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:11:20.857 05:27:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:11:20.857 05:27:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:20.857 btrfs-progs v6.8.1 00:11:20.857 See https://btrfs.readthedocs.io for more information. 00:11:20.857 00:11:20.857 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:11:20.857 NOTE: several default settings have changed in version 5.15, please make sure 00:11:20.857 this does not affect your deployments: 00:11:20.857 - DUP for metadata (-m dup) 00:11:20.857 - enabled no-holes (-O no-holes) 00:11:20.857 - enabled free-space-tree (-R free-space-tree) 00:11:20.857 00:11:20.857 Label: (null) 00:11:20.857 UUID: e1e91a0e-6f16-4ce7-a958-cc00d809a17e 00:11:20.857 Node size: 16384 00:11:20.857 Sector size: 4096 (CPU page size: 4096) 00:11:20.857 Filesystem size: 510.00MiB 00:11:20.857 Block group profiles: 00:11:20.857 Data: single 8.00MiB 00:11:20.857 Metadata: DUP 32.00MiB 00:11:20.857 System: DUP 8.00MiB 00:11:20.857 SSD detected: yes 00:11:20.857 Zoned device: no 00:11:20.857 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:20.857 Checksum: crc32c 00:11:20.857 Number of devices: 1 00:11:20.857 Devices: 00:11:20.857 ID SIZE PATH 00:11:20.857 1 510.00MiB /dev/nvme0n1p1 00:11:20.857 00:11:20.857 05:27:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:11:20.857 05:27:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:20.857 05:27:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:20.857 05:27:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:11:20.857 05:27:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:20.857 05:27:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:11:20.857 05:27:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:20.857 05:27:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:20.857 05:27:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 206375 00:11:20.857 05:27:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:20.857 05:27:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:20.857 05:27:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:20.857 05:27:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:20.857 00:11:20.857 real 0m1.034s 00:11:20.857 user 0m0.023s 00:11:20.857 sys 0m0.113s 00:11:20.857 05:27:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:20.857 05:27:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 
-- # set +x 00:11:20.857 ************************************ 00:11:20.857 END TEST filesystem_in_capsule_btrfs 00:11:20.857 ************************************ 00:11:21.116 05:27:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:11:21.116 05:27:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:21.116 05:27:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:21.116 05:27:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:21.116 ************************************ 00:11:21.116 START TEST filesystem_in_capsule_xfs 00:11:21.116 ************************************ 00:11:21.116 05:27:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:11:21.116 05:27:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:21.116 05:27:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:21.116 05:27:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:21.116 05:27:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:11:21.116 05:27:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:21.116 05:27:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:11:21.116 05:27:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:11:21.116 05:27:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:11:21.116 05:27:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:11:21.116 05:27:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:21.116 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:21.116 = sectsz=512 attr=2, projid32bit=1 00:11:21.116 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:21.116 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:21.116 data = bsize=4096 blocks=130560, imaxpct=25 00:11:21.116 = sunit=0 swidth=0 blks 00:11:21.116 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:21.116 log =internal log bsize=4096 blocks=16384, version=2 00:11:21.116 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:21.116 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:22.052 Discarding blocks...Done. 
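All three filesystem passes reach the mkfs output above through the same make_filesystem helper (autotest_common.sh@930-@949 in the trace frames). A hedged reconstruction: the variable names and the ext4 -F versus -f force-flag branch come straight from the trace, while the retry loop bounds are assumptions, since the trace only shows the success path:

# Reconstruction of make_filesystem as stepped through in the trace.
make_filesystem() {
    local fstype=$1
    local dev_name=$2
    local i=0
    local force
    if [ "$fstype" = ext4 ]; then
        force=-F                 # ext4 spells "force" as -F
    else
        force=-f                 # btrfs and xfs use -f
    fi
    until mkfs."$fstype" $force "$dev_name"; do
        [ $((++i)) -gt 5 ] && return 1   # assumed retry cap
        sleep 1                           # assumed settle delay
    done
    return 0
}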
00:11:22.052 05:27:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:11:22.052 05:27:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:24.586 05:27:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:24.586 05:27:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:11:24.586 05:27:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:24.586 05:27:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:11:24.586 05:27:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:11:24.586 05:27:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:24.586 05:27:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 206375 00:11:24.586 05:27:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:24.586 05:27:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:24.586 05:27:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:24.586 05:27:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:24.586 00:11:24.586 real 0m3.385s 00:11:24.586 user 0m0.025s 00:11:24.586 sys 0m0.073s 00:11:24.586 05:27:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:24.586 05:27:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:24.586 ************************************ 00:11:24.586 END TEST filesystem_in_capsule_xfs 00:11:24.586 ************************************ 00:11:24.586 05:27:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:24.586 05:27:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:24.586 05:27:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:24.586 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:24.586 05:27:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:24.586 05:27:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1223 -- # local i=0 00:11:24.587 05:27:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:24.587 05:27:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:24.587 05:27:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:24.587 05:27:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:24.587 05:27:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:11:24.587 05:27:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:24.587 05:27:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.587 05:27:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:24.587 05:27:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.587 05:27:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:24.587 05:27:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 206375 00:11:24.587 05:27:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 206375 ']' 00:11:24.587 05:27:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 206375 00:11:24.587 05:27:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:11:24.587 05:27:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:24.587 05:27:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 206375 00:11:24.587 05:27:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:24.587 05:27:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:24.587 05:27:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 206375' 00:11:24.587 killing process with pid 206375 00:11:24.587 05:27:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 206375 00:11:24.587 05:27:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 206375 00:11:25.156 05:27:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:25.156 00:11:25.156 real 0m16.432s 00:11:25.156 user 1m4.654s 00:11:25.156 sys 0m1.400s 00:11:25.156 05:27:24 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:25.156 05:27:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:25.156 ************************************ 00:11:25.156 END TEST nvmf_filesystem_in_capsule 00:11:25.156 ************************************ 00:11:25.156 05:27:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:11:25.156 05:27:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:25.156 05:27:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:11:25.156 05:27:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:25.156 05:27:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:11:25.156 05:27:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:25.156 05:27:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:25.156 rmmod nvme_tcp 00:11:25.156 rmmod nvme_fabrics 00:11:25.156 rmmod nvme_keyring 00:11:25.156 05:27:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:25.156 05:27:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:11:25.156 05:27:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:11:25.156 05:27:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:11:25.156 05:27:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:25.156 05:27:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:25.156 05:27:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:25.156 05:27:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:11:25.156 05:27:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:11:25.156 05:27:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:25.156 05:27:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:11:25.156 05:27:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:25.156 05:27:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:25.156 05:27:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:25.156 05:27:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:25.156 05:27:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:27.063 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:27.063 00:11:27.063 real 0m41.966s 00:11:27.063 user 2m12.559s 00:11:27.063 sys 0m7.610s 00:11:27.063 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:27.063 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:27.063 
************************************ 00:11:27.063 END TEST nvmf_filesystem 00:11:27.063 ************************************ 00:11:27.063 05:27:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:27.063 05:27:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:27.063 05:27:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:27.063 05:27:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:27.323 ************************************ 00:11:27.323 START TEST nvmf_target_discovery 00:11:27.323 ************************************ 00:11:27.323 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:27.323 * Looking for test storage... 00:11:27.323 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:27.323 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:27.323 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:11:27.323 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:27.323 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:27.323 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:27.323 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:27.323 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:27.323 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:11:27.323 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:11:27.323 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:11:27.323 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:11:27.323 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:11:27.323 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:11:27.323 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:11:27.323 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:27.323 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:11:27.323 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:11:27.323 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:27.323 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:27.323 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:11:27.323 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:11:27.323 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:27.323 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:11:27.323 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:11:27.323 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:11:27.323 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:11:27.323 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:27.323 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:11:27.323 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:11:27.323 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:27.323 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:27.323 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:11:27.323 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:27.323 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:27.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:27.323 --rc genhtml_branch_coverage=1 00:11:27.323 --rc genhtml_function_coverage=1 00:11:27.323 --rc genhtml_legend=1 00:11:27.323 --rc geninfo_all_blocks=1 00:11:27.323 --rc geninfo_unexecuted_blocks=1 00:11:27.323 00:11:27.323 ' 00:11:27.323 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:27.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:27.323 --rc genhtml_branch_coverage=1 00:11:27.323 --rc genhtml_function_coverage=1 00:11:27.323 --rc genhtml_legend=1 00:11:27.323 --rc geninfo_all_blocks=1 00:11:27.323 --rc geninfo_unexecuted_blocks=1 00:11:27.323 00:11:27.323 ' 00:11:27.323 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:27.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:27.323 --rc genhtml_branch_coverage=1 00:11:27.323 --rc genhtml_function_coverage=1 00:11:27.323 --rc genhtml_legend=1 00:11:27.323 --rc geninfo_all_blocks=1 00:11:27.323 --rc geninfo_unexecuted_blocks=1 00:11:27.323 00:11:27.323 ' 00:11:27.323 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:27.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:27.323 --rc genhtml_branch_coverage=1 00:11:27.323 --rc genhtml_function_coverage=1 00:11:27.323 --rc genhtml_legend=1 00:11:27.323 --rc geninfo_all_blocks=1 00:11:27.323 --rc geninfo_unexecuted_blocks=1 00:11:27.323 00:11:27.323 ' 00:11:27.323 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:27.323 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:11:27.323 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:27.323 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:27.323 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:27.323 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:27.323 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:27.323 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:27.323 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:27.323 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:27.323 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:27.323 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:27.323 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:11:27.323 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:11:27.323 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:27.323 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:27.323 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:27.323 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:27.323 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:27.323 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:11:27.323 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:27.323 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:27.323 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:27.324 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.324 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.324 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.324 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:11:27.324 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.324 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:11:27.324 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:27.324 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:27.324 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:27.324 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:27.324 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:27.324 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:27.324 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:27.324 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:27.324 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:27.324 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:27.324 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:11:27.324 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:11:27.324 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:11:27.324 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:11:27.324 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:11:27.324 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:27.324 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:27.324 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:27.324 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:27.324 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:27.324 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:27.324 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:27.324 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:27.324 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:27.324 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:27.324 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:11:27.324 05:27:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:33.900 05:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:33.900 05:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:11:33.900 05:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:33.900 05:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:33.900 05:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:33.900 05:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:33.900 05:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:33.900 05:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:11:33.900 05:27:32 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:33.900 05:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:11:33.900 05:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:11:33.900 05:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:11:33.900 05:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:11:33.900 05:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:11:33.900 05:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:11:33.900 05:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:33.900 05:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:33.900 05:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:33.900 05:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:33.900 05:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:33.900 05:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:33.900 05:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:33.900 05:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:33.900 05:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:33.900 05:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:33.900 05:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:33.900 05:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:33.900 05:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:33.900 05:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:33.900 05:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:33.900 05:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:33.900 05:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:33.900 05:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:33.900 05:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:33.900 05:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:33.900 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:33.900 05:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:33.900 05:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:33.900 05:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:33.900 05:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:33.900 05:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:33.900 05:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:33.900 05:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:33.900 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:33.900 05:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:33.900 05:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:33.900 05:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:33.900 05:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:33.900 05:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:33.900 05:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:33.900 05:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:33.900 05:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:33.900 05:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:33.900 05:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:33.900 05:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:33.900 05:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:33.900 05:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:33.900 05:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:33.900 05:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:33.900 05:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:33.900 Found net devices under 0000:af:00.0: cvl_0_0 00:11:33.900 05:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:33.900 05:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:33.900 05:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:33.900 05:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:33.900 05:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
00:11:33.900 05:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:33.900 05:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:33.900 05:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:33.900 05:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:33.900 Found net devices under 0000:af:00.1: cvl_0_1 00:11:33.900 05:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:33.900 05:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:33.900 05:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:11:33.900 05:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:33.900 05:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:33.900 05:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:33.900 05:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:33.900 05:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:33.900 05:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:33.900 05:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:33.900 05:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:33.900 05:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:33.900 05:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:33.900 05:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:33.900 05:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:33.900 05:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:33.900 05:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:33.900 05:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:33.900 05:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:33.900 05:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:33.900 05:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:33.900 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:33.900 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:33.900 05:27:33 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:33.900 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:33.900 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:33.900 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:33.900 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:33.900 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:33.900 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:33.900 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.324 ms 00:11:33.900 00:11:33.900 --- 10.0.0.2 ping statistics --- 00:11:33.900 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:33.900 rtt min/avg/max/mdev = 0.324/0.324/0.324/0.000 ms 00:11:33.900 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:33.901 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:33.901 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.164 ms 00:11:33.901 00:11:33.901 --- 10.0.0.1 ping statistics --- 00:11:33.901 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:33.901 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:11:33.901 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:33.901 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:11:33.901 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:33.901 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:33.901 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:33.901 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:33.901 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:33.901 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:33.901 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:33.901 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:11:33.901 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:33.901 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:33.901 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:33.901 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=213016 00:11:33.901 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:33.901 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 213016 00:11:33.901 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 213016 ']' 00:11:33.901 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:33.901 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:33.901 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:33.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:33.901 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:33.901 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:33.901 [2024-12-13 05:27:33.290672] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:11:33.901 [2024-12-13 05:27:33.290713] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:33.901 [2024-12-13 05:27:33.367555] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:33.901 [2024-12-13 05:27:33.390920] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:33.901 [2024-12-13 05:27:33.390953] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:33.901 [2024-12-13 05:27:33.390960] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:33.901 [2024-12-13 05:27:33.390966] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:33.901 [2024-12-13 05:27:33.390971] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
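The startup notices above come from nvmf_tgt as launched by nvmfappstart; waitforlisten then blocks until the target answers on its RPC socket. A sketch of that launch-and-wait pattern; the binary path, flags (-i 0 -e 0xFFFF -m 0xF), and namespace name are from the trace, while the rpc.py polling loop is an assumption about how the wait is implemented:

# Launch the SPDK target inside the test namespace, then poll its RPC
# socket until it responds before issuing any configuration RPCs.
ip netns exec cvl_0_0_ns_spdk \
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$nvmfpid" || exit 1    # bail out if the target died early
    sleep 0.5
done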
00:11:33.901 [2024-12-13 05:27:33.392397] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:11:33.901 [2024-12-13 05:27:33.392510] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:11:33.901 [2024-12-13 05:27:33.392547] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:11:33.901 [2024-12-13 05:27:33.392548] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:11:33.901 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:33.901 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:11:33.901 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:33.901 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:33.901 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:33.901 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:33.901 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:33.901 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.901 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:33.901 [2024-12-13 05:27:33.525531] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:33.901 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.901 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:11:33.901 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:33.901 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:11:33.901 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.901 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:33.901 Null1 00:11:33.901 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.901 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:33.901 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.901 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:33.901 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.901 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:11:33.901 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.901 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:33.901 05:27:33 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.901 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:33.901 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.901 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:33.901 [2024-12-13 05:27:33.594587] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:33.901 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.901 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:33.901 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:11:33.901 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.901 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:33.901 Null2 00:11:33.901 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.901 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:11:33.901 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.901 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:33.901 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.901 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:11:33.901 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.901 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:33.901 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.901 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:11:33.901 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.901 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:33.901 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.901 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:33.901 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:11:33.901 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.901 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:11:33.901 Null3 00:11:33.901 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.901 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:11:33.901 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.901 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:33.901 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.901 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:11:33.901 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.901 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:33.901 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.901 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:11:33.901 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.901 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:33.901 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.901 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:33.902 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:11:33.902 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.902 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:33.902 Null4 00:11:33.902 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.902 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:11:33.902 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.902 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:33.902 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.902 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:11:33.902 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.902 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:33.902 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.902 05:27:33 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420
00:11:33.902 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:33.902 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:11:33.902 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:33.902 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:11:33.902 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:33.902 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:11:33.902 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:33.902 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430
00:11:33.902 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:33.902 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:11:33.902 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:33.902 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420
00:11:33.902
00:11:33.902 Discovery Log Number of Records 6, Generation counter 6
00:11:33.902 =====Discovery Log Entry 0======
00:11:33.902 trtype: tcp
00:11:33.902 adrfam: ipv4
00:11:33.902 subtype: current discovery subsystem
00:11:33.902 treq: not required
00:11:33.902 portid: 0
00:11:33.902 trsvcid: 4420
00:11:33.902 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:11:33.902 traddr: 10.0.0.2
00:11:33.902 eflags: explicit discovery connections, duplicate discovery information
00:11:33.902 sectype: none
00:11:33.902 =====Discovery Log Entry 1======
00:11:33.902 trtype: tcp
00:11:33.902 adrfam: ipv4
00:11:33.902 subtype: nvme subsystem
00:11:33.902 treq: not required
00:11:33.902 portid: 0
00:11:33.902 trsvcid: 4420
00:11:33.902 subnqn: nqn.2016-06.io.spdk:cnode1
00:11:33.902 traddr: 10.0.0.2
00:11:33.902 eflags: none
00:11:33.902 sectype: none
00:11:33.902 =====Discovery Log Entry 2======
00:11:33.902 trtype: tcp
00:11:33.902 adrfam: ipv4
00:11:33.902 subtype: nvme subsystem
00:11:33.902 treq: not required
00:11:33.902 portid: 0
00:11:33.902 trsvcid: 4420
00:11:33.902 subnqn: nqn.2016-06.io.spdk:cnode2
00:11:33.902 traddr: 10.0.0.2
00:11:33.902 eflags: none
00:11:33.902 sectype: none
00:11:33.902 =====Discovery Log Entry 3======
00:11:33.902 trtype: tcp
00:11:33.902 adrfam: ipv4
00:11:33.902 subtype: nvme subsystem
00:11:33.902 treq: not required
00:11:33.902 portid: 0
00:11:33.902 trsvcid: 4420
00:11:33.902 subnqn: nqn.2016-06.io.spdk:cnode3
00:11:33.902 traddr: 10.0.0.2
00:11:33.902 eflags: none
00:11:33.902 sectype: none
00:11:33.902 =====Discovery Log Entry 4======
00:11:33.902 trtype: tcp
00:11:33.902 adrfam: ipv4
00:11:33.902 subtype: nvme subsystem
00:11:33.902 treq: not required
00:11:33.902 portid: 0
00:11:33.902 trsvcid: 4420
00:11:33.902 subnqn: nqn.2016-06.io.spdk:cnode4
00:11:33.902 traddr: 10.0.0.2
00:11:33.902 eflags: none
00:11:33.902 sectype: none
00:11:33.902 =====Discovery Log Entry 5======
00:11:33.902 trtype: tcp
00:11:33.902 adrfam: ipv4
00:11:33.902 subtype: discovery subsystem referral
00:11:33.902 treq: not required
00:11:33.902 portid: 0
00:11:33.902 trsvcid: 4430
00:11:33.902 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:11:33.902 traddr: 10.0.0.2
00:11:33.902 eflags: none
00:11:33.902 sectype: none
00:11:33.902 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC'
00:11:33.902 Perform nvmf subsystem discovery via RPC
00:11:33.902 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems
00:11:33.902 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:33.902 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:11:33.902 [
00:11:33.902 {
00:11:33.902 "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:11:33.902 "subtype": "Discovery",
00:11:33.902 "listen_addresses": [
00:11:33.902 {
00:11:33.902 "trtype": "TCP",
00:11:33.902 "adrfam": "IPv4",
00:11:33.902 "traddr": "10.0.0.2",
00:11:33.902 "trsvcid": "4420"
00:11:33.902 }
00:11:33.902 ],
00:11:33.902 "allow_any_host": true,
00:11:33.902 "hosts": []
00:11:33.902 },
00:11:33.902 {
00:11:33.902 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:11:33.902 "subtype": "NVMe",
00:11:33.902 "listen_addresses": [
00:11:33.902 {
00:11:33.902 "trtype": "TCP",
00:11:33.902 "adrfam": "IPv4",
00:11:33.902 "traddr": "10.0.0.2",
00:11:33.902 "trsvcid": "4420"
00:11:33.902 }
00:11:33.902 ],
00:11:33.902 "allow_any_host": true,
00:11:33.902 "hosts": [],
00:11:33.902 "serial_number": "SPDK00000000000001",
00:11:33.902 "model_number": "SPDK bdev Controller",
00:11:33.902 "max_namespaces": 32,
00:11:33.902 "min_cntlid": 1,
00:11:33.902 "max_cntlid": 65519,
00:11:33.902 "namespaces": [
00:11:33.902 {
00:11:33.902 "nsid": 1,
00:11:33.902 "bdev_name": "Null1",
00:11:33.902 "name": "Null1",
00:11:33.902 "nguid": "8457D39CC10841E99A5D56AEFAE4F222",
00:11:33.902 "uuid": "8457d39c-c108-41e9-9a5d-56aefae4f222"
00:11:33.902 }
00:11:33.902 ]
00:11:33.902 },
00:11:33.902 {
00:11:33.902 "nqn": "nqn.2016-06.io.spdk:cnode2",
00:11:33.902 "subtype": "NVMe",
00:11:33.902 "listen_addresses": [
00:11:33.902 {
00:11:33.902 "trtype": "TCP",
00:11:33.902 "adrfam": "IPv4",
00:11:33.902 "traddr": "10.0.0.2",
00:11:33.902 "trsvcid": "4420"
00:11:33.902 }
00:11:33.902 ],
00:11:33.902 "allow_any_host": true,
00:11:33.902 "hosts": [],
00:11:33.902 "serial_number": "SPDK00000000000002",
00:11:33.902 "model_number": "SPDK bdev Controller",
00:11:33.902 "max_namespaces": 32,
00:11:33.902 "min_cntlid": 1,
00:11:33.902 "max_cntlid": 65519,
00:11:33.902 "namespaces": [
00:11:33.902 {
00:11:33.902 "nsid": 1,
00:11:33.902 "bdev_name": "Null2",
00:11:33.902 "name": "Null2",
00:11:33.902 "nguid": "8FA9C5F506014A688388F5500AD311B3",
00:11:33.902 "uuid": "8fa9c5f5-0601-4a68-8388-f5500ad311b3"
00:11:33.902 }
00:11:33.902 ]
00:11:33.902 },
00:11:33.902 {
00:11:33.902 "nqn": "nqn.2016-06.io.spdk:cnode3",
00:11:33.902 "subtype": "NVMe",
00:11:33.902 "listen_addresses": [
00:11:33.902 {
00:11:33.902 "trtype": "TCP",
00:11:33.902 "adrfam": "IPv4",
00:11:33.902 "traddr": "10.0.0.2",
00:11:33.902 "trsvcid": "4420"
00:11:33.902 }
00:11:33.902 ],
00:11:33.902 "allow_any_host": true,
00:11:33.902 "hosts": [],
00:11:33.902 "serial_number": "SPDK00000000000003",
00:11:33.902 "model_number": "SPDK bdev Controller",
00:11:33.902 "max_namespaces": 32,
00:11:33.902 "min_cntlid": 1,
00:11:33.902 "max_cntlid": 65519,
00:11:33.902 "namespaces": [
00:11:33.902 {
00:11:33.902 "nsid": 1,
00:11:33.902 "bdev_name": "Null3",
00:11:33.902 "name": "Null3",
00:11:33.902 "nguid": "A145511C8F8A47A9B99DA3B8A0365A35",
00:11:33.902 "uuid": "a145511c-8f8a-47a9-b99d-a3b8a0365a35"
00:11:33.902 }
00:11:33.902 ]
00:11:33.902 },
00:11:33.902 {
00:11:33.902 "nqn": "nqn.2016-06.io.spdk:cnode4",
00:11:33.902 "subtype": "NVMe",
00:11:33.902 "listen_addresses": [
00:11:33.902 {
00:11:33.902 "trtype": "TCP",
00:11:33.902 "adrfam": "IPv4",
00:11:33.902 "traddr": "10.0.0.2",
00:11:33.902 "trsvcid": "4420"
00:11:33.902 }
00:11:33.902 ],
00:11:33.902 "allow_any_host": true,
00:11:33.902 "hosts": [],
00:11:33.902 "serial_number": "SPDK00000000000004",
00:11:33.902 "model_number": "SPDK bdev Controller",
00:11:33.902 "max_namespaces": 32,
00:11:33.902 "min_cntlid": 1,
00:11:33.902 "max_cntlid": 65519,
00:11:33.902 "namespaces": [
00:11:33.902 {
00:11:33.902 "nsid": 1,
00:11:33.902 "bdev_name": "Null4",
00:11:33.902 "name": "Null4",
00:11:33.902 "nguid": "6135D3582A9B41848DF868F8DBE636CF",
00:11:33.902 "uuid": "6135d358-2a9b-4184-8df8-68f8dbe636cf"
00:11:33.902 }
00:11:33.902 ]
00:11:33.902 }
00:11:33.902 ]
00:11:33.902 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:33.902 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4
00:11:33.902 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4)
00:11:33.903 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:11:33.903 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:33.903 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:11:33.903 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:33.903 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1
00:11:33.903 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:33.903 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:11:33.903 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:33.903 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4)
00:11:33.903 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2
00:11:33.903 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:33.903 05:27:33
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:11:33.903 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.903 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:34.162 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.162 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:34.162 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:11:34.162 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.162 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:34.162 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.162 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:11:34.162 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.162 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:34.162 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.162 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:34.163 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:11:34.163 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.163 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:34.163 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.163 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:11:34.163 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.163 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:34.163 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.163 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:11:34.163 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.163 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:34.163 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.163 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:11:34.163 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:11:34.163 05:27:33 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.163 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:34.163 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.163 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:11:34.163 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:11:34.163 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:11:34.163 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:11:34.163 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:34.163 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:11:34.163 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:34.163 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:11:34.163 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:34.163 05:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:34.163 rmmod nvme_tcp 00:11:34.163 rmmod nvme_fabrics 00:11:34.163 rmmod nvme_keyring 00:11:34.163 05:27:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:34.163 05:27:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:11:34.163 05:27:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:11:34.163 05:27:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 213016 ']' 00:11:34.163 05:27:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 213016 00:11:34.163 05:27:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 213016 ']' 00:11:34.163 05:27:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 213016 00:11:34.163 05:27:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 00:11:34.163 05:27:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:34.163 05:27:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 213016 00:11:34.163 05:27:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:34.163 05:27:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:34.163 05:27:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 213016' 00:11:34.163 killing process with pid 213016 00:11:34.163 05:27:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 213016 00:11:34.163 05:27:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 213016 00:11:34.423 05:27:34 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:11:34.423 05:27:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:11:34.423 05:27:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:11:34.423 05:27:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr
00:11:34.423 05:27:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save
00:11:34.423 05:27:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:11:34.423 05:27:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore
00:11:34.423 05:27:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:11:34.423 05:27:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns
00:11:34.423 05:27:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:11:34.423 05:27:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:11:34.423 05:27:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:11:36.331 05:27:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:11:36.331
00:11:36.331 real 0m9.222s
00:11:36.331 user 0m5.365s
00:11:36.331 sys 0m4.754s
00:11:36.331 05:27:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:36.331 05:27:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:11:36.331 ************************************
00:11:36.331 END TEST nvmf_target_discovery
00:11:36.331 ************************************
00:11:36.591 05:27:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp
00:11:36.591 05:27:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:11:36.591 05:27:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:36.591 05:27:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:11:36.591 ************************************
00:11:36.591 START TEST nvmf_referrals
00:11:36.591 ************************************
00:11:36.591 05:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp
00:11:36.591 * Looking for test storage...
00:11:36.591 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:36.591 05:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:36.591 05:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lcov --version 00:11:36.591 05:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:36.591 05:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:36.591 05:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:36.591 05:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:36.591 05:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:36.591 05:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:11:36.591 05:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:11:36.591 05:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:11:36.591 05:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:11:36.591 05:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:11:36.591 05:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:11:36.591 05:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:11:36.591 05:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:36.591 05:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:11:36.591 05:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:11:36.591 05:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:36.591 05:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:36.591 05:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:11:36.591 05:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:11:36.591 05:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:36.591 05:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:11:36.591 05:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:11:36.591 05:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:11:36.591 05:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:11:36.591 05:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:36.591 05:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:11:36.591 05:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:11:36.591 05:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:36.591 05:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:36.591 05:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:11:36.591 05:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:36.591 05:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:36.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:36.591 --rc genhtml_branch_coverage=1 00:11:36.591 --rc genhtml_function_coverage=1 00:11:36.591 --rc genhtml_legend=1 00:11:36.591 --rc geninfo_all_blocks=1 00:11:36.591 --rc geninfo_unexecuted_blocks=1 00:11:36.591 00:11:36.591 ' 00:11:36.592 05:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:36.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:36.592 --rc genhtml_branch_coverage=1 00:11:36.592 --rc genhtml_function_coverage=1 00:11:36.592 --rc genhtml_legend=1 00:11:36.592 --rc geninfo_all_blocks=1 00:11:36.592 --rc geninfo_unexecuted_blocks=1 00:11:36.592 00:11:36.592 ' 00:11:36.592 05:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:36.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:36.592 --rc genhtml_branch_coverage=1 00:11:36.592 --rc genhtml_function_coverage=1 00:11:36.592 --rc genhtml_legend=1 00:11:36.592 --rc geninfo_all_blocks=1 00:11:36.592 --rc geninfo_unexecuted_blocks=1 00:11:36.592 00:11:36.592 ' 00:11:36.592 05:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:36.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:36.592 --rc genhtml_branch_coverage=1 00:11:36.592 --rc genhtml_function_coverage=1 00:11:36.592 --rc genhtml_legend=1 00:11:36.592 --rc geninfo_all_blocks=1 00:11:36.592 --rc geninfo_unexecuted_blocks=1 00:11:36.592 00:11:36.592 ' 00:11:36.592 05:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:36.592 05:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- 
# uname -s 00:11:36.592 05:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:36.592 05:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:36.592 05:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:36.592 05:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:36.592 05:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:36.592 05:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:36.592 05:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:36.592 05:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:36.592 05:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:36.592 05:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:36.592 05:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:11:36.592 05:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:11:36.592 05:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:36.592 05:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:36.592 05:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:36.592 05:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:36.592 05:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:36.592 05:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:11:36.852 05:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:36.852 05:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:36.852 05:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:36.852 05:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:36.852 05:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:36.852 05:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:36.852 05:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:11:36.852 05:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:36.852 05:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:11:36.852 05:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:36.852 05:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:36.852 05:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:36.852 05:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:36.852 05:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:36.852 05:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:36.852 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:36.852 05:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:36.852 05:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:36.852 05:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:36.852 05:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:11:36.852 05:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 
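The NVMF_REFERRAL_IP_* constants being defined here, together with NVMF_PORT_REFERRAL=4430 just below, drive everything referrals.sh exercises: a discovery referral is an entry in the discovery log that points an initiator at a further discovery service on another address. For orientation, a minimal hand-driven sketch of the same flow, assuming an already-running nvmf_tgt and SPDK's stock scripts/rpc.py helper (the relative path is illustrative, not taken from this run; the RPC names match the ones this log invokes later):

    # Expose a discovery service, then register three referral endpoints with it.
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 8009
    scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430
    scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430
    scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430
    # The test later asserts that exactly three referrals come back.
    scripts/rpc.py nvmf_discovery_get_referrals | jq length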
00:11:36.852 05:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:11:36.852 05:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:11:36.852 05:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:11:36.852 05:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:11:36.852 05:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:11:36.852 05:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:36.852 05:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:36.852 05:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:36.852 05:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:36.852 05:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:36.852 05:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:36.852 05:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:36.852 05:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:36.852 05:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:36.852 05:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:36.852 05:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:11:36.852 05:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:43.426 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:43.426 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:11:43.426 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:43.426 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:43.426 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:43.426 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:43.426 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:43.426 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:11:43.426 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:43.426 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:11:43.426 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:11:43.426 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:11:43.426 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:11:43.426 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:11:43.426 05:27:42 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:11:43.426 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:43.426 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:43.426 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:43.426 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:43.426 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:43.426 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:43.426 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:43.426 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:43.426 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:43.426 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:43.426 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:43.426 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:43.426 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:43.426 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:43.426 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:43.426 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:43.426 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:43.426 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:43.426 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:43.426 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:43.426 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:43.426 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:43.426 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:43.426 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:43.426 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:43.426 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:43.426 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:43.426 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:43.426 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:43.426 
05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:43.426 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:43.426 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:43.426 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:43.426 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:43.426 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:43.426 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:43.426 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:43.426 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:43.426 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:43.426 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:43.426 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:43.426 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:43.426 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:43.426 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:43.426 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:43.426 Found net devices under 0000:af:00.0: cvl_0_0 00:11:43.426 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:43.426 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:43.426 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:43.426 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:43.426 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:43.426 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:43.426 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:43.426 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:43.426 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:43.426 Found net devices under 0000:af:00.1: cvl_0_1 00:11:43.426 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:43.427 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:43.427 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:11:43.427 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:43.427 05:27:42 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:43.427 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:43.427 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:43.427 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:43.427 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:43.427 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:43.427 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:43.427 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:43.427 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:43.427 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:43.427 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:43.427 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:43.427 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:43.427 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:43.427 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:43.427 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:43.427 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:43.427 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:43.427 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:43.427 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:43.427 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:43.427 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:43.427 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:43.427 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:43.427 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:43.427 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:43.427 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.260 ms
00:11:43.427
00:11:43.427 --- 10.0.0.2 ping statistics ---
00:11:43.427 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:43.427 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms
00:11:43.427 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:11:43.427 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:11:43.427 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.184 ms
00:11:43.427
00:11:43.427 --- 10.0.0.1 ping statistics ---
00:11:43.427 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:43.427 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms
00:11:43.427 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:11:43.427 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0
00:11:43.427 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:11:43.427 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:11:43.427 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:11:43.427 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:11:43.427 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:11:43.427 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:11:43.427 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:11:43.427 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF
00:11:43.427 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:11:43.427 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable
00:11:43.427 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:11:43.427 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=216641
00:11:43.427 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 216641
00:11:43.427 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:11:43.427 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 216641 ']'
00:11:43.427 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:43.427 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100
00:11:43.427 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:11:43.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
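The nvmfappstart/waitforlisten pair above amounts to launching nvmf_tgt inside the cvl_0_0_ns_spdk namespace and then polling its RPC socket until it answers, so no configuration RPC is issued before the app is up. A rough standalone sketch of that pattern, assuming the SPDK build-tree layout shown in this log; the real helpers live in the test harness and do more bookkeeping:

    # Start the target in the test namespace and remember its pid.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # rpc_get_methods only succeeds once the app is listening on /var/tmp/spdk.sock.
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done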
00:11:43.427 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable
00:11:43.427 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:11:43.427 [2024-12-13 05:27:42.648835] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization...
00:11:43.427 [2024-12-13 05:27:42.648884] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:11:43.427 [2024-12-13 05:27:42.729013] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:11:43.427 [2024-12-13 05:27:42.752107] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:11:43.427 [2024-12-13 05:27:42.752140] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:11:43.427 [2024-12-13 05:27:42.752147] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:11:43.427 [2024-12-13 05:27:42.752153] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:11:43.427 [2024-12-13 05:27:42.752159] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:11:43.427 [2024-12-13 05:27:42.753549] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:11:43.427 [2024-12-13 05:27:42.753576] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:11:43.427 [2024-12-13 05:27:42.753607] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:11:43.427 [2024-12-13 05:27:42.753608] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3
00:11:43.427 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:11:43.427 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0
00:11:43.427 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:11:43.427 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable
00:11:43.427 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:11:43.427 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:11:43.427 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:11:43.427 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:43.427 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:11:43.427 [2024-12-13 05:27:42.893943] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:11:43.427 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:43.427 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
00:11:43.427 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:43.427 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:11:43.427 [2024-12-13 05:27:42.921621] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 ***
00:11:43.427 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:43.427 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430
00:11:43.427 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:43.427 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:11:43.427 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:43.427 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430
00:11:43.427 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:43.428 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:11:43.428 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:43.428 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430
00:11:43.428 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:43.428 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:11:43.428 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:43.428 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals
00:11:43.428 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length
00:11:43.428 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:43.428 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:11:43.428 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:43.428 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 ))
00:11:43.428 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc
00:11:43.428 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]]
00:11:43.428 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals
00:11:43.428 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr'
00:11:43.428 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:43.428 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort
00:11:43.428 05:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:11:43.428 05:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:43.428 05:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4
00:11:43.428 05:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:43.428 05:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:11:43.428 05:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:43.428 05:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:43.428 05:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:43.428 05:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:43.428 05:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:43.428 05:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:43.428 05:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:43.428 05:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:11:43.428 05:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.428 05:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:43.428 05:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.428 05:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:11:43.428 05:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.428 05:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:43.428 05:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.428 05:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:11:43.428 05:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.428 05:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:43.428 05:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.428 05:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:43.428 05:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:11:43.428 05:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.428 05:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:43.428 05:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.428 05:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:11:43.428 05:27:43 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:11:43.428 05:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:43.428 05:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:43.428 05:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:43.428 05:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:43.428 05:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:43.687 05:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:43.687 05:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:11:43.687 05:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:11:43.687 05:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.687 05:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:43.687 05:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.687 05:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:43.687 05:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.687 05:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:43.687 05:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.687 05:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:11:43.687 05:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:43.687 05:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:43.687 05:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:43.687 05:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.687 05:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:43.687 05:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:43.687 05:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.687 05:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:11:43.687 05:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:43.687 05:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:11:43.687 05:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == 
\r\p\c ]] 00:11:43.687 05:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:43.687 05:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:43.687 05:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:43.687 05:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:43.945 05:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:11:43.945 05:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:43.945 05:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:11:43.945 05:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:11:43.945 05:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:43.945 05:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:43.945 05:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:43.945 05:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:11:43.945 05:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:11:43.945 05:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:11:43.945 05:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:43.945 05:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:43.945 05:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:44.204 05:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:44.204 05:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:44.204 05:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.204 05:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:44.204 05:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.204 05:27:44 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:11:44.204 05:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:44.204 05:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:44.204 05:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:44.204 05:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.204 05:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:44.204 05:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:44.204 05:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.204 05:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:11:44.204 05:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:44.204 05:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:11:44.204 05:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:44.204 05:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:44.204 05:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:44.204 05:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:44.204 05:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:44.463 05:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:11:44.463 05:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:44.463 05:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:11:44.463 05:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:11:44.463 05:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:44.463 05:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:44.463 05:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:44.463 05:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:11:44.463 05:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:11:44.463 05:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:11:44.463 05:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 
'subtype=discovery subsystem referral' 00:11:44.463 05:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:44.463 05:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:44.721 05:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:44.721 05:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:11:44.721 05:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.721 05:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:44.721 05:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.721 05:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:44.721 05:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:11:44.721 05:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.721 05:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:44.721 05:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.721 05:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:11:44.721 05:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:11:44.721 05:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:44.721 05:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:44.721 05:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:44.721 05:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:44.721 05:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:44.980 05:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:44.980 05:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:11:44.980 05:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:11:44.980 05:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:11:44.980 05:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:44.980 05:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:11:44.980 05:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
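
The sequence above is the whole referral lifecycle this test exercises: add referrals over RPC, confirm they show up both in nvmf_discovery_get_referrals and in a host-side nvme discover, then remove them and check that both views drain to empty. Condensed to the underlying commands (a sketch; the hostnqn/hostid arguments from the trace are elided for brevity):

    # a plain referral and an NQN-qualified one, as in the trace
    scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430
    scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1
    scripts/rpc.py nvmf_discovery_get_referrals | jq length
    # host-side view: every discovery record except the local discovery subsystem
    nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
        | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'
    # tear down again, matching on transport, address, port and subsystem NQN
    scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1
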
00:11:44.980 05:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:11:44.980 05:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:44.980 05:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:44.980 rmmod nvme_tcp 00:11:44.980 rmmod nvme_fabrics 00:11:44.980 rmmod nvme_keyring 00:11:44.980 05:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:44.980 05:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:11:44.980 05:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:11:44.980 05:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 216641 ']' 00:11:44.980 05:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 216641 00:11:44.980 05:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 216641 ']' 00:11:44.980 05:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 216641 00:11:44.980 05:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:11:44.980 05:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:44.980 05:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 216641 00:11:44.980 05:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:44.980 05:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:44.980 05:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 216641' 00:11:44.980 killing process with pid 216641 00:11:44.980 05:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- # kill 216641 00:11:44.980 05:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 216641 00:11:45.239 05:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:45.239 05:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:45.239 05:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:45.239 05:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:11:45.239 05:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:11:45.240 05:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:45.240 05:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:11:45.240 05:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:45.240 05:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:45.240 05:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:45.240 05:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:45.240 05:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:47.780 05:27:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:47.780 00:11:47.780 real 0m10.792s 00:11:47.780 user 0m12.022s 00:11:47.780 sys 0m5.168s 00:11:47.780 05:27:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:47.780 05:27:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:47.780 ************************************ 00:11:47.780 END TEST nvmf_referrals 00:11:47.780 ************************************ 00:11:47.780 05:27:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:47.780 05:27:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:47.780 05:27:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:47.780 05:27:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:47.780 ************************************ 00:11:47.780 START TEST nvmf_connect_disconnect 00:11:47.780 ************************************ 00:11:47.780 05:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:47.780 * Looking for test storage... 00:11:47.780 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:47.780 05:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:47.780 05:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:11:47.780 05:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:47.780 05:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:47.780 05:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:47.780 05:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:47.780 05:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:47.780 05:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:11:47.780 05:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:11:47.780 05:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:11:47.780 05:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:11:47.780 05:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:11:47.780 05:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:11:47.780 05:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:11:47.780 05:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:47.780 05:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 
-- # case "$op" in 00:11:47.780 05:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:11:47.780 05:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:47.780 05:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:47.780 05:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:11:47.780 05:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:11:47.780 05:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:47.780 05:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:11:47.780 05:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:11:47.780 05:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:11:47.780 05:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:11:47.780 05:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:47.780 05:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:11:47.780 05:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:11:47.780 05:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:47.780 05:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:47.780 05:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:11:47.780 05:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:47.780 05:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:47.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:47.780 --rc genhtml_branch_coverage=1 00:11:47.780 --rc genhtml_function_coverage=1 00:11:47.780 --rc genhtml_legend=1 00:11:47.780 --rc geninfo_all_blocks=1 00:11:47.780 --rc geninfo_unexecuted_blocks=1 00:11:47.780 00:11:47.780 ' 00:11:47.780 05:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:47.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:47.781 --rc genhtml_branch_coverage=1 00:11:47.781 --rc genhtml_function_coverage=1 00:11:47.781 --rc genhtml_legend=1 00:11:47.781 --rc geninfo_all_blocks=1 00:11:47.781 --rc geninfo_unexecuted_blocks=1 00:11:47.781 00:11:47.781 ' 00:11:47.781 05:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:47.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:47.781 --rc genhtml_branch_coverage=1 00:11:47.781 --rc genhtml_function_coverage=1 00:11:47.781 --rc genhtml_legend=1 00:11:47.781 --rc geninfo_all_blocks=1 00:11:47.781 --rc geninfo_unexecuted_blocks=1 00:11:47.781 00:11:47.781 ' 00:11:47.781 05:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:47.781 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:47.781 --rc genhtml_branch_coverage=1 00:11:47.781 --rc genhtml_function_coverage=1 00:11:47.781 --rc genhtml_legend=1 00:11:47.781 --rc geninfo_all_blocks=1 00:11:47.781 --rc geninfo_unexecuted_blocks=1 00:11:47.781 00:11:47.781 ' 00:11:47.781 05:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:47.781 05:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:11:47.781 05:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:47.781 05:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:47.781 05:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:47.781 05:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:47.781 05:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:47.781 05:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:47.781 05:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:47.781 05:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:47.781 05:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:47.781 05:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:47.781 05:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:11:47.781 05:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:11:47.781 05:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:47.781 05:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:47.781 05:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:47.781 05:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:47.781 05:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:47.781 05:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:11:47.781 05:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:47.781 05:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:47.781 05:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:47.781 05:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:47.781 05:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:47.781 05:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:47.781 05:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:11:47.781 05:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:47.781 05:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:11:47.781 05:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:47.781 05:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:47.781 05:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:47.781 05:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:47.781 05:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:47.781 05:27:47 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:47.781 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:47.781 05:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:47.781 05:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:47.781 05:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:47.781 05:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:47.781 05:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:47.781 05:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:11:47.781 05:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:47.781 05:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:47.781 05:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:47.781 05:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:47.781 05:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:47.781 05:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:47.781 05:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:47.781 05:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:47.781 05:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:47.781 05:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:47.781 05:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:11:47.781 05:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:54.357 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:54.357 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:11:54.357 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:54.357 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:54.357 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:54.357 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:54.357 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:54.357 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:11:54.357 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:54.357 
05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:11:54.357 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:11:54.357 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:11:54.357 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:11:54.357 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:11:54.357 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:11:54.357 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:54.357 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:54.357 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:54.357 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:54.357 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:54.357 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:54.357 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:54.357 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:54.357 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:54.357 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:54.357 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:54.357 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:54.357 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:54.357 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:54.357 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:54.357 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:54.357 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:54.357 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:54.357 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:54.357 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:54.357 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:54.357 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:54.357 
05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:54.357 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:54.357 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:54.357 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:54.357 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:54.357 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:54.357 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:54.357 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:54.357 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:54.357 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:54.357 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:54.357 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:54.357 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:54.357 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:54.357 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:54.357 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:54.357 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:54.357 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:54.357 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:54.358 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:54.358 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:54.358 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:54.358 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:54.358 Found net devices under 0000:af:00.0: cvl_0_0 00:11:54.358 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:54.358 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:54.358 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:54.358 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:54.358 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
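
With the two e810 ports identified as cvl_0_0 and cvl_0_1, the trace that follows splits them across network namespaces so a single host can act as both target and initiator. The equivalent ip(8) sequence, as traced just below:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator port stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    ping -c 1 10.0.0.2                                     # sanity-check both directions
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
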
00:11:54.358 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:54.358 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:54.358 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:54.358 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:54.358 Found net devices under 0000:af:00.1: cvl_0_1 00:11:54.358 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:54.358 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:54.358 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:11:54.358 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:54.358 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:54.358 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:54.358 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:54.358 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:54.358 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:54.358 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:54.358 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:54.358 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:54.358 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:54.358 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:54.358 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:54.358 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:54.358 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:54.358 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:54.358 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:54.358 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:54.358 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:54.358 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:54.358 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:11:54.358 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:54.358 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:54.358 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:54.358 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:54.358 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:54.358 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:54.358 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:54.358 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.354 ms 00:11:54.358 00:11:54.358 --- 10.0.0.2 ping statistics --- 00:11:54.358 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:54.358 rtt min/avg/max/mdev = 0.354/0.354/0.354/0.000 ms 00:11:54.358 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:54.358 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:54.358 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.151 ms 00:11:54.358 00:11:54.358 --- 10.0.0.1 ping statistics --- 00:11:54.358 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:54.358 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:11:54.358 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:54.358 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:11:54.358 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:54.358 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:54.358 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:54.358 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:54.358 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:54.358 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:54.358 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:54.358 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:11:54.358 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:54.358 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:54.358 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:54.358 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # nvmfpid=220593 00:11:54.358 05:27:53 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 220593 00:11:54.358 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:54.358 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 220593 ']' 00:11:54.358 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:54.358 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:54.358 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:54.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:54.358 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:54.358 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:54.358 [2024-12-13 05:27:53.590025] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:11:54.358 [2024-12-13 05:27:53.590074] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:54.358 [2024-12-13 05:27:53.669453] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:54.358 [2024-12-13 05:27:53.692690] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:54.358 [2024-12-13 05:27:53.692727] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:54.358 [2024-12-13 05:27:53.692735] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:54.358 [2024-12-13 05:27:53.692741] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:54.358 [2024-12-13 05:27:53.692747] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
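
Once the target is up inside the namespace, the connect_disconnect test provisions a malloc-backed subsystem for the initiator to exercise. Condensed from the rpc_cmd calls traced below (paths shortened; flags as in the run):

    # start the target inside the namespace (full path abbreviated from the trace)
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
    scripts/rpc.py bdev_malloc_create 64 512                 # 64 MiB bdev, 512-byte blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
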
00:11:54.358 [2024-12-13 05:27:53.694108] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:11:54.358 [2024-12-13 05:27:53.694214] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:11:54.358 [2024-12-13 05:27:53.694297] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:11:54.358 [2024-12-13 05:27:53.694299] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:11:54.358 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:54.358 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:11:54.358 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:54.358 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:54.358 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:54.358 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:54.358 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:54.358 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.358 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:54.358 [2024-12-13 05:27:53.834320] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:54.358 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.358 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:11:54.358 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.358 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:54.358 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.358 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:11:54.359 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:54.359 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.359 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:54.359 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.359 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:54.359 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.359 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:54.359 05:27:53 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.359 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:54.359 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.359 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:54.359 [2024-12-13 05:27:53.900005] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:54.359 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.359 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:11:54.359 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:11:54.359 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:11:54.359 05:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:11:56.324 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:58.853 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:01.381 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:03.281 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:05.819 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:08.347 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:10.249 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:12.783 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:15.321 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:17.227 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:19.765 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:22.302 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:24.210 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:26.747 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:29.283 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:31.191 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:33.728 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:36.265 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:38.172 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:40.711 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:43.248 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:45.155 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:47.694 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:50.231 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:52.138 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:54.675 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:57.213 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:59.119 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:01.656 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:04.194 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:06.733 
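Every iteration in the run above and below produces exactly one of these disconnect records. A minimal sketch of one pass, using the settings fixed earlier in the trace (num_iterations=100, NVME_CONNECT='nvme connect -i 8', the listener on 10.0.0.2:4420); the readiness wait between connect and disconnect is an assumption, since the loop body runs under set +x here:

    nqn=nqn.2016-06.io.spdk:cnode1
    for i in $(seq 1 100); do
        # -i 8 requests 8 I/O queues, matching NVME_CONNECT above
        nvme connect -i 8 -t tcp -a 10.0.0.2 -s 4420 -n "$nqn"
        # (assumed: wait for the controller's /dev node before tearing down)
        nvme disconnect -n "$nqn"   # prints: NQN:<nqn> disconnected 1 controller(s)
    done

The interleaved tqpair *ERROR* records further down come from the target's TCP receive state machine while a qpair is being torn down; they do not fail the run, which still finishes in the END TEST banner below.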
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:08.640 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:11.176 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:13.082 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:15.619 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:18.156 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:20.061 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:22.595 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:25.129 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:27.664 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:29.570 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:32.105 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:34.012 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:36.548 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:39.085 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:40.990 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:43.526 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:46.066 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:47.973 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:50.511 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:52.417 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:54.953 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:57.488 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:00.025 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:01.930 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:04.466 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:06.999 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:08.909 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:11.445 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:13.351 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:15.887 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:18.424 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:20.330 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:22.866 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:25.402 [2024-12-13 05:30:24.835374] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb3500 is same with the state(6) to be set 00:14:25.402 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:27.307 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:29.843 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:31.750 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:34.285 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:36.822 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:39.359 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:41.264 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:43.800 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:46.338 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:48.244 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:50.780 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:52.686 NQN:nqn.2016-06.io.spdk:cnode1 
disconnected 1 controller(s) 00:14:55.221 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:57.756 [2024-12-13 05:30:57.303389] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb1560 is same with the state(6) to be set 00:14:57.756 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:59.662 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:02.198 [2024-12-13 05:31:01.856337] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb1560 is same with the state(6) to be set 00:15:02.198 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:04.734 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:06.639 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:09.173 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:11.708 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:13.614 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:16.150 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:18.056 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:20.591 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:23.142 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:25.049 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:27.584 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:30.122 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:32.030 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:34.569 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:36.475 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:39.014 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:40.922 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:43.460 [2024-12-13 05:31:43.165409] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb1560 is same with the state(6) to be set 00:15:43.460 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:45.998 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:45.998 05:31:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:15:45.998 05:31:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:15:45.998 05:31:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:45.998 05:31:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:15:45.998 05:31:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:45.998 05:31:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:15:45.998 05:31:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:45.998 05:31:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:45.998 rmmod nvme_tcp 00:15:45.998 rmmod nvme_fabrics 00:15:45.998 rmmod nvme_keyring 00:15:45.998 05:31:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:45.998 05:31:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:15:45.998 05:31:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
nvmf/common.sh@129 -- # return 0 00:15:45.998 05:31:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 220593 ']' 00:15:45.998 05:31:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 220593 00:15:45.998 05:31:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 220593 ']' 00:15:45.998 05:31:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 220593 00:15:45.998 05:31:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 00:15:45.998 05:31:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:45.999 05:31:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 220593 00:15:45.999 05:31:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:45.999 05:31:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:45.999 05:31:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 220593' 00:15:45.999 killing process with pid 220593 00:15:45.999 05:31:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 220593 00:15:45.999 05:31:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 220593 00:15:45.999 05:31:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:45.999 05:31:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:45.999 05:31:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:45.999 05:31:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:15:45.999 05:31:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:15:45.999 05:31:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:45.999 05:31:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:15:45.999 05:31:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:45.999 05:31:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:45.999 05:31:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:45.999 05:31:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:45.999 05:31:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:47.917 05:31:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:47.917 00:15:47.917 real 4m0.626s 00:15:47.917 user 15m19.144s 00:15:47.917 sys 0m24.655s 00:15:47.917 05:31:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:47.917 05:31:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@10 -- # set +x 00:15:47.917 ************************************ 00:15:47.917 END TEST nvmf_connect_disconnect 00:15:47.917 ************************************ 00:15:48.177 05:31:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:15:48.177 05:31:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:48.177 05:31:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:48.177 05:31:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:48.177 ************************************ 00:15:48.177 START TEST nvmf_multitarget 00:15:48.177 ************************************ 00:15:48.177 05:31:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:15:48.178 * Looking for test storage... 00:15:48.178 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:48.178 05:31:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:48.178 05:31:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lcov --version 00:15:48.178 05:31:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:48.178 05:31:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:48.178 05:31:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:48.178 05:31:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:48.178 05:31:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:48.178 05:31:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:15:48.178 05:31:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:15:48.178 05:31:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:15:48.178 05:31:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:15:48.178 05:31:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:15:48.178 05:31:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:15:48.178 05:31:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:15:48.178 05:31:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:48.178 05:31:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:15:48.178 05:31:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:15:48.178 05:31:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:48.178 05:31:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:48.178 05:31:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:15:48.178 05:31:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:15:48.178 05:31:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:48.178 05:31:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:15:48.178 05:31:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:15:48.178 05:31:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:15:48.178 05:31:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:15:48.178 05:31:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:48.178 05:31:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:15:48.178 05:31:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:15:48.178 05:31:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:48.178 05:31:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:48.178 05:31:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:15:48.178 05:31:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:48.178 05:31:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:48.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:48.178 --rc genhtml_branch_coverage=1 00:15:48.178 --rc genhtml_function_coverage=1 00:15:48.178 --rc genhtml_legend=1 00:15:48.178 --rc geninfo_all_blocks=1 00:15:48.178 --rc geninfo_unexecuted_blocks=1 00:15:48.178 00:15:48.178 ' 00:15:48.178 05:31:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:48.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:48.178 --rc genhtml_branch_coverage=1 00:15:48.178 --rc genhtml_function_coverage=1 00:15:48.178 --rc genhtml_legend=1 00:15:48.178 --rc geninfo_all_blocks=1 00:15:48.178 --rc geninfo_unexecuted_blocks=1 00:15:48.178 00:15:48.178 ' 00:15:48.178 05:31:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:48.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:48.178 --rc genhtml_branch_coverage=1 00:15:48.178 --rc genhtml_function_coverage=1 00:15:48.178 --rc genhtml_legend=1 00:15:48.178 --rc geninfo_all_blocks=1 00:15:48.178 --rc geninfo_unexecuted_blocks=1 00:15:48.178 00:15:48.178 ' 00:15:48.178 05:31:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:48.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:48.178 --rc genhtml_branch_coverage=1 00:15:48.178 --rc genhtml_function_coverage=1 00:15:48.178 --rc genhtml_legend=1 00:15:48.178 --rc geninfo_all_blocks=1 00:15:48.178 --rc geninfo_unexecuted_blocks=1 00:15:48.178 00:15:48.178 ' 00:15:48.178 05:31:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:48.178 05:31:48 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:15:48.178 05:31:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:48.178 05:31:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:48.178 05:31:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:48.178 05:31:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:48.178 05:31:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:48.178 05:31:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:48.178 05:31:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:48.178 05:31:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:48.178 05:31:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:48.178 05:31:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:48.178 05:31:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:48.178 05:31:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:15:48.178 05:31:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:48.178 05:31:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:48.178 05:31:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:48.178 05:31:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:48.178 05:31:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:48.178 05:31:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:15:48.178 05:31:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:48.178 05:31:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:48.178 05:31:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:48.178 05:31:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.178 05:31:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.178 05:31:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.178 05:31:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:15:48.178 05:31:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.178 05:31:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:15:48.178 05:31:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:48.178 05:31:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:48.178 05:31:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:48.178 05:31:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:48.178 05:31:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:48.178 05:31:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:48.178 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:48.178 05:31:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:48.178 05:31:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:48.178 05:31:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:48.439 05:31:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:15:48.439 05:31:48 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:15:48.439 05:31:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:48.439 05:31:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:48.439 05:31:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:48.439 05:31:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:48.439 05:31:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:48.439 05:31:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:48.439 05:31:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:48.439 05:31:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:48.439 05:31:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:48.439 05:31:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:48.439 05:31:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:15:48.439 05:31:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:15:55.014 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:55.014 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:15:55.014 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:55.014 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:55.014 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:55.014 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:55.014 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:55.014 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:15:55.014 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:55.014 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:15:55.014 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:15:55.014 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:15:55.014 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:15:55.014 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:15:55.014 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:15:55.014 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:55.014 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:55.014 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
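The pci_devs list assembled above is resolved to kernel interface names next, which is where the 'Found net devices under 0000:af:00.x' records come from. A sketch of that lookup, relying only on the standard sysfs layout (the PCI address is the one in the log):

    pci=0000:af:00.0
    for dev in /sys/bus/pci/devices/$pci/net/*; do
        [ -e "$dev" ] || continue   # no net children: not a usable NIC
        name=${dev##*/}             # e.g. cvl_0_0
        echo "Found net devices under $pci: $name"
    done

Only interfaces whose operstate is up are kept, which is the [[ up == up ]] check in the trace.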
00:15:55.014 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:55.014 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:55.014 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:55.014 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:55.014 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:55.014 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:55.014 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:55.014 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:55.014 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:55.014 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:55.014 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:55.014 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:55.014 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:55.014 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:55.014 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:55.014 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:55.014 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:15:55.014 Found 0000:af:00.0 (0x8086 - 0x159b) 00:15:55.014 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:55.014 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:55.014 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:55.014 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:55.014 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:55.014 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:55.014 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:15:55.014 Found 0000:af:00.1 (0x8086 - 0x159b) 00:15:55.014 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:55.014 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:55.014 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:55.014 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:15:55.015 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:55.015 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:55.015 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:55.015 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:55.015 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:55.015 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:55.015 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:55.015 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:55.015 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:55.015 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:55.015 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:55.015 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:15:55.015 Found net devices under 0000:af:00.0: cvl_0_0 00:15:55.015 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:55.015 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:55.015 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:55.015 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:55.015 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:55.015 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:55.015 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:55.015 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:55.015 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:15:55.015 Found net devices under 0000:af:00.1: cvl_0_1 00:15:55.015 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:55.015 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:55.015 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:15:55.015 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:55.015 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:55.015 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:55.015 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:55.015 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:55.015 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:55.015 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:55.015 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:55.015 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:55.015 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:55.015 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:55.015 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:55.015 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:55.015 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:55.015 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:55.015 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:55.015 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:55.015 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:55.015 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:55.015 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:55.015 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:55.015 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:55.015 05:31:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:55.015 05:31:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:55.015 05:31:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:55.015 05:31:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:55.015 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:55.015 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.292 ms 00:15:55.015 00:15:55.015 --- 10.0.0.2 ping statistics --- 00:15:55.015 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:55.015 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:15:55.015 05:31:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:55.015 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:55.015 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.139 ms 00:15:55.015 00:15:55.015 --- 10.0.0.1 ping statistics --- 00:15:55.015 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:55.015 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:15:55.015 05:31:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:55.015 05:31:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:15:55.015 05:31:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:55.015 05:31:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:55.015 05:31:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:55.015 05:31:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:55.015 05:31:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:55.015 05:31:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:55.015 05:31:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:55.015 05:31:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:15:55.015 05:31:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:55.015 05:31:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:55.015 05:31:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:15:55.015 05:31:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=263572 00:15:55.015 05:31:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:55.015 05:31:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 263572 00:15:55.015 05:31:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 263572 ']' 00:15:55.015 05:31:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:55.015 05:31:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:55.015 05:31:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:55.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:55.015 05:31:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:55.015 05:31:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:15:55.015 [2024-12-13 05:31:54.161620] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:15:55.015 [2024-12-13 05:31:54.161664] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:55.015 [2024-12-13 05:31:54.238217] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:55.015 [2024-12-13 05:31:54.260377] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:55.015 [2024-12-13 05:31:54.260415] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:55.015 [2024-12-13 05:31:54.260423] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:55.015 [2024-12-13 05:31:54.260428] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:55.015 [2024-12-13 05:31:54.260434] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:55.015 [2024-12-13 05:31:54.261721] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:15:55.015 [2024-12-13 05:31:54.261828] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:15:55.015 [2024-12-13 05:31:54.261937] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:55.015 [2024-12-13 05:31:54.261938] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:15:55.015 05:31:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:55.015 05:31:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:15:55.015 05:31:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:55.015 05:31:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:55.015 05:31:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:15:55.015 05:31:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:55.015 05:31:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:15:55.015 05:31:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:15:55.015 05:31:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:15:55.015 05:31:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:15:55.015 05:31:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:15:55.016 "nvmf_tgt_1" 00:15:55.016 05:31:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:15:55.016 "nvmf_tgt_2" 00:15:55.016 05:31:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 
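The multitarget test is a series of count assertions against nvmf_get_targets, with each count read via jq length. The whole sequence above and below, condensed (RPC names and arguments exactly as traced; $rpc is introduced here only for brevity):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
    [ "$($rpc nvmf_get_targets | jq length)" = 1 ]   # only the default target exists
    $rpc nvmf_create_target -n nvmf_tgt_1 -s 32
    $rpc nvmf_create_target -n nvmf_tgt_2 -s 32
    [ "$($rpc nvmf_get_targets | jq length)" = 3 ]   # default plus the two new targets
    $rpc nvmf_delete_target -n nvmf_tgt_1            # prints: true
    $rpc nvmf_delete_target -n nvmf_tgt_2            # prints: true
    [ "$($rpc nvmf_get_targets | jq length)" = 1 ]   # back to just the default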
00:15:55.016 05:31:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:15:55.016 05:31:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:15:55.016 05:31:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:15:55.016 true 00:15:55.016 05:31:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:15:55.016 true 00:15:55.276 05:31:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:15:55.276 05:31:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:15:55.276 05:31:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:15:55.276 05:31:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:15:55.276 05:31:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:15:55.276 05:31:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:55.276 05:31:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:15:55.276 05:31:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:55.276 05:31:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:15:55.276 05:31:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:55.276 05:31:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:55.276 rmmod nvme_tcp 00:15:55.276 rmmod nvme_fabrics 00:15:55.276 rmmod nvme_keyring 00:15:55.276 05:31:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:55.276 05:31:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:15:55.276 05:31:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:15:55.276 05:31:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 263572 ']' 00:15:55.276 05:31:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 263572 00:15:55.276 05:31:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 263572 ']' 00:15:55.276 05:31:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 263572 00:15:55.276 05:31:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:15:55.276 05:31:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:55.276 05:31:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 263572 00:15:55.276 05:31:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:55.276 05:31:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:55.276 05:31:55 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 263572' 00:15:55.276 killing process with pid 263572 00:15:55.276 05:31:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 263572 00:15:55.276 05:31:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 263572 00:15:55.538 05:31:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:55.538 05:31:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:55.538 05:31:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:55.538 05:31:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:15:55.538 05:31:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:15:55.538 05:31:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:55.538 05:31:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:15:55.538 05:31:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:55.538 05:31:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:55.538 05:31:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:55.538 05:31:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:55.538 05:31:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:58.078 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:58.078 00:15:58.078 real 0m9.519s 00:15:58.078 user 0m7.124s 00:15:58.078 sys 0m4.838s 00:15:58.078 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:58.078 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:15:58.078 ************************************ 00:15:58.078 END TEST nvmf_multitarget 00:15:58.078 ************************************ 00:15:58.078 05:31:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:15:58.078 05:31:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:58.078 05:31:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:58.078 05:31:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:58.078 ************************************ 00:15:58.078 START TEST nvmf_rpc 00:15:58.078 ************************************ 00:15:58.078 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:15:58.078 * Looking for test storage... 
00:15:58.078 * Looking for test storage...
00:15:58.078 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:15:58.078 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:15:58.078 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lcov --version
00:15:58.078 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:15:58.078 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:15:58.078 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:15:58.078 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:15:58.078 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:15:58.078 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-:
00:15:58.078 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1
00:15:58.078 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-:
00:15:58.078 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2
00:15:58.078 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<'
00:15:58.078 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2
00:15:58.078 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1
00:15:58.078 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:15:58.078 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in
00:15:58.078 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1
00:15:58.078 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:15:58.078 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:15:58.078 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1
00:15:58.078 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1
00:15:58.078 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:15:58.078 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1
00:15:58.078 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:15:58.078 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2
00:15:58.078 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2
00:15:58.078 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:15:58.078 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2
00:15:58.078 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:15:58.078 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:15:58.078 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:15:58.078 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0
00:15:58.078 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:15:58.078 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:15:58.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:15:58.078 --rc genhtml_branch_coverage=1
00:15:58.078 --rc genhtml_function_coverage=1
00:15:58.078 --rc genhtml_legend=1
00:15:58.078 --rc geninfo_all_blocks=1
00:15:58.078 --rc geninfo_unexecuted_blocks=1
00:15:58.078
00:15:58.078 '
00:15:58.078 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:15:58.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:15:58.079 --rc genhtml_branch_coverage=1
00:15:58.079 --rc genhtml_function_coverage=1
00:15:58.079 --rc genhtml_legend=1
00:15:58.079 --rc geninfo_all_blocks=1
00:15:58.079 --rc geninfo_unexecuted_blocks=1
00:15:58.079
00:15:58.079 '
00:15:58.079 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:15:58.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:15:58.079 --rc genhtml_branch_coverage=1
00:15:58.079 --rc genhtml_function_coverage=1
00:15:58.079 --rc genhtml_legend=1
00:15:58.079 --rc geninfo_all_blocks=1
00:15:58.079 --rc geninfo_unexecuted_blocks=1
00:15:58.079
00:15:58.079 '
00:15:58.079 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:15:58.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:15:58.079 --rc genhtml_branch_coverage=1
00:15:58.079 --rc genhtml_function_coverage=1
00:15:58.079 --rc genhtml_legend=1
00:15:58.079 --rc geninfo_all_blocks=1
00:15:58.079 --rc geninfo_unexecuted_blocks=1
00:15:58.079
00:15:58.079 '
00:15:58.079 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:15:58.079 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s
00:15:58.079 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
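The lcov probe traced above goes through scripts/common.sh's cmp_versions, which splits both version strings on '.', '-' and ':' and compares them field by field. A simplified sketch of the same idea (the lt name mirrors the wrapper in the trace; purely numeric fields are assumed, unlike the real helper, which validates each field with decimal):

    # Return success when dotted version $1 is strictly older than $2.
    lt() {
        local -a a b
        IFS=.-: read -ra a <<< "$1"
        IFS=.-: read -ra b <<< "$2"
        local i max=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < max; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # earliest differing field decides
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1   # equal versions are not less-than
    }
    lt 1.15 2 && echo "lcov predates 2.x, keep the legacy --rc options"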
00:15:58.079 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:58.079 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:58.079 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:58.079 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:58.079 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:58.079 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:58.079 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:58.079 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:58.079 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:58.079 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:58.079 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:15:58.079 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:58.079 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:58.079 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:58.079 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:58.079 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:58.079 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:15:58.079 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:58.079 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:58.079 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:58.079 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:58.079 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:58.079 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:58.079 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:15:58.079 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:58.079 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:15:58.079 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:58.079 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:58.079 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:58.079 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:58.079 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:58.079 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:58.079 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:58.079 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:58.079 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:58.079 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:58.079 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:15:58.079 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:15:58.079 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:58.079 05:31:57 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:58.079 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:58.079 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:58.079 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:58.079 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:58.079 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:58.079 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:58.079 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:58.079 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:58.079 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:15:58.079 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:03.355 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:03.355 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:16:03.356 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:03.356 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:03.356 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:03.356 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:03.356 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:03.356 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:16:03.356 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:03.356 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:16:03.356 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:16:03.356 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:16:03.356 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:16:03.356 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:16:03.356 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:16:03.356 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:03.356 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:03.356 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:03.356 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:03.356 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:03.356 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:03.356 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:16:03.356 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:16:03.356 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:16:03.356 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:16:03.356 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:16:03.356 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:16:03.356 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:16:03.356 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:16:03.356 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:16:03.356 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:16:03.356 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:16:03.356 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:16:03.356 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:16:03.356 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)'
00:16:03.356 Found 0000:af:00.0 (0x8086 - 0x159b)
00:16:03.356 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:16:03.356 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:16:03.356 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:16:03.356 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:16:03.356 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:16:03.356 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:16:03.356 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)'
00:16:03.356 Found 0000:af:00.1 (0x8086 - 0x159b)
00:16:03.356 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:16:03.356 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:16:03.356 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:16:03.356 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:16:03.356 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:16:03.356 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:16:03.356 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:16:03.356 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
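What the classification above did: match each NIC's PCI vendor:device pair against a known-ID table (0x8086:0x159b is an Intel E810 port), keep the hits in the e810/x722/mlx arrays, and only then resolve the kernel net devices behind them. A rough standalone sketch of that sysfs walk (the paths are the standard sysfs layout; the output format is illustrative):

    # Find Intel E810 (8086:159b) functions and print their netdev names.
    for pci in /sys/bus/pci/devices/*; do
        ven=$(cat "$pci/vendor") dev=$(cat "$pci/device")
        if [[ $ven == 0x8086 && $dev == 0x159b ]]; then
            echo "$pci -> $(ls "$pci/net" 2>/dev/null)"   # e.g. ... -> cvl_0_0
        fi
    done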
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:03.356 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:03.356 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:03.356 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:03.356 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:03.356 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:03.356 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:16:03.356 Found net devices under 0000:af:00.0: cvl_0_0 00:16:03.356 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:03.356 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:03.356 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:03.356 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:03.356 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:03.356 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:03.356 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:03.356 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:03.356 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:16:03.356 Found net devices under 0000:af:00.1: cvl_0_1 00:16:03.356 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:03.615 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:03.615 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:16:03.615 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:03.615 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:03.615 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:03.615 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:03.615 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:03.615 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:03.615 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:03.615 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:03.615 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:03.615 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:03.615 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:03.615 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:03.615 05:32:03 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:03.615 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:03.615 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:03.615 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:03.615 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:03.615 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:03.615 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:03.615 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:03.615 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:03.615 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:03.615 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:03.615 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:03.615 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:03.615 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:03.615 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:03.615 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.288 ms 00:16:03.615 00:16:03.615 --- 10.0.0.2 ping statistics --- 00:16:03.615 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:03.615 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:16:03.615 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:03.615 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
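Both pings succeeding is the gate for everything that follows: the harness has split the two ports of one physical NIC across a network namespace so a single host can play both NVMe/TCP initiator and target. Condensed, the wiring the trace just verified (commands, interface names and addresses copied from the log; root privileges assumed):

    ip netns add cvl_0_0_ns_spdk                        # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side (host)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP port
    ping -c 1 10.0.0.2                                  # host -> namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # namespace -> host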
00:16:03.615 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:16:03.616 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0
00:16:03.616 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:16:03.616 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:16:03.616 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:16:03.616 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:16:03.616 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:16:03.616 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:16:03.616 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:16:03.874 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF
00:16:03.874 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:16:03.874 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable
00:16:03.874 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:03.874 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=267232
00:16:03.874 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:16:03.874 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 267232
00:16:03.874 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 267232 ']'
00:16:03.874 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:16:03.874 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:16:03.874 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:16:03.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:16:03.874 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:16:03.874 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:03.874 [2024-12-13 05:32:03.694584] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization...
00:16:03.874 [2024-12-13 05:32:03.694628] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:03.874 [2024-12-13 05:32:03.770034] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:03.874 [2024-12-13 05:32:03.793117] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:03.874 [2024-12-13 05:32:03.793153] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:03.874 [2024-12-13 05:32:03.793161] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:03.874 [2024-12-13 05:32:03.793168] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:03.874 [2024-12-13 05:32:03.793173] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:03.874 [2024-12-13 05:32:03.794497] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:16:03.874 [2024-12-13 05:32:03.794604] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:16:03.874 [2024-12-13 05:32:03.794634] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:03.875 [2024-12-13 05:32:03.794635] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:16:04.133 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:04.133 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:16:04.133 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:04.133 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:04.133 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:04.133 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:04.133 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:16:04.133 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.133 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:04.133 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.133 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:16:04.133 "tick_rate": 2100000000, 00:16:04.133 "poll_groups": [ 00:16:04.133 { 00:16:04.133 "name": "nvmf_tgt_poll_group_000", 00:16:04.133 "admin_qpairs": 0, 00:16:04.133 "io_qpairs": 0, 00:16:04.133 "current_admin_qpairs": 0, 00:16:04.133 "current_io_qpairs": 0, 00:16:04.133 "pending_bdev_io": 0, 00:16:04.133 "completed_nvme_io": 0, 00:16:04.133 "transports": [] 00:16:04.133 }, 00:16:04.133 { 00:16:04.133 "name": "nvmf_tgt_poll_group_001", 00:16:04.133 "admin_qpairs": 0, 00:16:04.133 "io_qpairs": 0, 00:16:04.133 "current_admin_qpairs": 0, 00:16:04.133 "current_io_qpairs": 0, 00:16:04.133 "pending_bdev_io": 0, 00:16:04.133 "completed_nvme_io": 0, 00:16:04.133 "transports": [] 00:16:04.133 }, 00:16:04.133 { 00:16:04.133 "name": "nvmf_tgt_poll_group_002", 00:16:04.133 "admin_qpairs": 0, 00:16:04.133 "io_qpairs": 0, 00:16:04.133 
"current_admin_qpairs": 0, 00:16:04.133 "current_io_qpairs": 0, 00:16:04.133 "pending_bdev_io": 0, 00:16:04.133 "completed_nvme_io": 0, 00:16:04.133 "transports": [] 00:16:04.133 }, 00:16:04.133 { 00:16:04.133 "name": "nvmf_tgt_poll_group_003", 00:16:04.133 "admin_qpairs": 0, 00:16:04.133 "io_qpairs": 0, 00:16:04.133 "current_admin_qpairs": 0, 00:16:04.133 "current_io_qpairs": 0, 00:16:04.133 "pending_bdev_io": 0, 00:16:04.133 "completed_nvme_io": 0, 00:16:04.133 "transports": [] 00:16:04.133 } 00:16:04.133 ] 00:16:04.133 }' 00:16:04.133 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:16:04.133 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:16:04.133 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:16:04.133 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:16:04.133 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:16:04.133 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:16:04.133 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:16:04.133 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:04.133 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.133 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:04.133 [2024-12-13 05:32:04.047350] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:04.133 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.133 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:16:04.133 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.133 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:04.134 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.134 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:16:04.134 "tick_rate": 2100000000, 00:16:04.134 "poll_groups": [ 00:16:04.134 { 00:16:04.134 "name": "nvmf_tgt_poll_group_000", 00:16:04.134 "admin_qpairs": 0, 00:16:04.134 "io_qpairs": 0, 00:16:04.134 "current_admin_qpairs": 0, 00:16:04.134 "current_io_qpairs": 0, 00:16:04.134 "pending_bdev_io": 0, 00:16:04.134 "completed_nvme_io": 0, 00:16:04.134 "transports": [ 00:16:04.134 { 00:16:04.134 "trtype": "TCP" 00:16:04.134 } 00:16:04.134 ] 00:16:04.134 }, 00:16:04.134 { 00:16:04.134 "name": "nvmf_tgt_poll_group_001", 00:16:04.134 "admin_qpairs": 0, 00:16:04.134 "io_qpairs": 0, 00:16:04.134 "current_admin_qpairs": 0, 00:16:04.134 "current_io_qpairs": 0, 00:16:04.134 "pending_bdev_io": 0, 00:16:04.134 "completed_nvme_io": 0, 00:16:04.134 "transports": [ 00:16:04.134 { 00:16:04.134 "trtype": "TCP" 00:16:04.134 } 00:16:04.134 ] 00:16:04.134 }, 00:16:04.134 { 00:16:04.134 "name": "nvmf_tgt_poll_group_002", 00:16:04.134 "admin_qpairs": 0, 00:16:04.134 "io_qpairs": 0, 00:16:04.134 "current_admin_qpairs": 0, 00:16:04.134 "current_io_qpairs": 0, 00:16:04.134 "pending_bdev_io": 0, 00:16:04.134 "completed_nvme_io": 0, 00:16:04.134 "transports": [ 00:16:04.134 { 00:16:04.134 "trtype": "TCP" 
00:16:04.134 } 00:16:04.134 ] 00:16:04.134 }, 00:16:04.134 { 00:16:04.134 "name": "nvmf_tgt_poll_group_003", 00:16:04.134 "admin_qpairs": 0, 00:16:04.134 "io_qpairs": 0, 00:16:04.134 "current_admin_qpairs": 0, 00:16:04.134 "current_io_qpairs": 0, 00:16:04.134 "pending_bdev_io": 0, 00:16:04.134 "completed_nvme_io": 0, 00:16:04.134 "transports": [ 00:16:04.134 { 00:16:04.134 "trtype": "TCP" 00:16:04.134 } 00:16:04.134 ] 00:16:04.134 } 00:16:04.134 ] 00:16:04.134 }' 00:16:04.134 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:16:04.134 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:16:04.134 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:16:04.134 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:04.134 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:16:04.134 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:16:04.134 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:16:04.134 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:16:04.134 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:04.393 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:16:04.393 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:16:04.393 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:16:04.393 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:16:04.393 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:16:04.393 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.393 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:04.393 Malloc1 00:16:04.393 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.393 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:04.393 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.393 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:04.393 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.393 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:04.393 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.393 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:04.393 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.393 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:16:04.393 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.393 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:04.393 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.394 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:04.394 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.394 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:04.394 [2024-12-13 05:32:04.216325] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:04.394 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.394 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:16:04.394 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:16:04.394 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:16:04.394 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:16:04.394 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:04.394 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:16:04.394 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:04.394 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:16:04.394 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:04.394 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:16:04.394 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:16:04.394 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:16:04.394 [2024-12-13 05:32:04.244935] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562' 00:16:04.394 Failed to write to /dev/nvme-fabrics: Input/output error 00:16:04.394 could not add new controller: failed to write to nvme-fabrics device 00:16:04.394 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:16:04.394 05:32:04 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:16:04.394 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:16:04.394 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:16:04.394 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:16:04.394 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:04.394 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:04.394 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:04.394 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:16:05.771 05:32:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME
00:16:05.771 05:32:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0
00:16:05.771 05:32:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:16:05.771 05:32:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:16:05.771 05:32:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2
00:16:07.678 05:32:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:16:07.678 05:32:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:16:07.678 05:32:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:16:07.678 05:32:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:16:07.678 05:32:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:16:07.678 05:32:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0
00:16:07.678 05:32:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:16:07.678 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:16:07.678 05:32:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:16:07.678 05:32:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0
00:16:07.678 05:32:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:16:07.678 05:32:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:16:07.678 05:32:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:16:07.678 05:32:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:16:07.678 05:32:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0
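That was one complete connect/verify/disconnect cycle: connect over TCP, poll until the namespace's serial number shows up in lsblk, then tear the session down. The same cycle condensed (arguments copied from the trace; the loop is a simplified stand-in for what waitforserial does in autotest_common.sh):

    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 \
        --hostid=80b56b8f-cbc7-e911-906e-0017a4403562
    for i in $(seq 1 15); do                            # wait for the block device
        lsblk -l -o NAME,SERIAL | grep -qw SPDKISFASTANDAWESOME && break
        sleep 2
    done
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1       # expect: disconnected 1 controller(s)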
00:16:07.678 05:32:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:16:07.678 05:32:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:07.678 05:32:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:07.678 05:32:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:07.678 05:32:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:16:07.678 05:32:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0
00:16:07.678 05:32:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:16:07.678 05:32:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme
00:16:07.678 05:32:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:07.678 05:32:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme
00:16:07.678 05:32:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:07.678 05:32:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme
00:16:07.678 05:32:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:07.678 05:32:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme
00:16:07.678 05:32:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]]
00:16:07.678 05:32:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:16:07.678 [2024-12-13 05:32:07.577693] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562'
00:16:07.678 Failed to write to /dev/nvme-fabrics: Input/output error
00:16:07.678 could not add new controller: failed to write to nvme-fabrics device
00:16:07.678 05:32:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1
00:16:07.678 05:32:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:16:07.678 05:32:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:16:07.678 05:32:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:16:07.678 05:32:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1
00:16:07.678 05:32:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:07.678 05:32:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:07.678
05:32:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.678 05:32:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:09.058 05:32:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:16:09.058 05:32:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:09.058 05:32:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:09.058 05:32:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:09.058 05:32:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:10.964 05:32:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:10.964 05:32:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:10.964 05:32:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:10.964 05:32:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:10.964 05:32:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:10.964 05:32:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:10.964 05:32:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:10.964 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:10.964 05:32:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:10.964 05:32:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:10.964 05:32:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:10.964 05:32:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:10.964 05:32:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:10.964 05:32:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:10.964 05:32:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:10.964 05:32:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:10.964 05:32:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.964 05:32:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:10.964 05:32:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.964 05:32:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:16:10.964 05:32:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:10.964 05:32:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:10.964 
05:32:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.964 05:32:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:10.964 05:32:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.964 05:32:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:10.964 05:32:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.964 05:32:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:10.964 [2024-12-13 05:32:10.888295] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:10.964 05:32:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.964 05:32:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:10.964 05:32:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.964 05:32:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:10.964 05:32:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.964 05:32:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:10.964 05:32:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.964 05:32:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:10.964 05:32:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.964 05:32:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:12.341 05:32:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:12.341 05:32:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:12.341 05:32:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:12.341 05:32:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:12.341 05:32:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:14.248 05:32:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:14.248 05:32:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:14.248 05:32:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:14.248 05:32:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:14.248 05:32:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:14.248 05:32:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:14.248 05:32:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:14.248 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:14.248 05:32:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:14.248 05:32:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:14.248 05:32:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:14.248 05:32:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:14.248 05:32:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:14.248 05:32:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:14.248 05:32:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:14.248 05:32:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:14.248 05:32:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.248 05:32:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:14.248 05:32:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.248 05:32:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:14.248 05:32:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.248 05:32:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:14.248 05:32:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.248 05:32:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:14.248 05:32:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:14.248 05:32:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.248 05:32:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:14.248 05:32:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.248 05:32:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:14.248 05:32:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.248 05:32:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:14.248 [2024-12-13 05:32:14.199058] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:14.248 05:32:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.248 05:32:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:14.248 05:32:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.248 05:32:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:14.248 05:32:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.249 05:32:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:14.249 05:32:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.249 05:32:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:14.249 05:32:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.249 05:32:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:15.630 05:32:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:15.630 05:32:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:15.630 05:32:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:15.630 05:32:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:15.630 05:32:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:17.537 05:32:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:17.537 05:32:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:17.537 05:32:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:17.537 05:32:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:17.537 05:32:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:17.537 05:32:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:17.537 05:32:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:17.537 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:17.537 05:32:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:17.537 05:32:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:17.537 05:32:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:17.537 05:32:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:17.537 05:32:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:17.537 05:32:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:17.537 05:32:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:17.537 05:32:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:17.537 05:32:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.537 05:32:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:17.537 05:32:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.537 05:32:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:17.537 05:32:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.537 05:32:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:17.537 05:32:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.537 05:32:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:17.537 05:32:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:17.537 05:32:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.537 05:32:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:17.537 05:32:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.537 05:32:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:17.537 05:32:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.537 05:32:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:17.537 [2024-12-13 05:32:17.463676] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:17.537 05:32:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.537 05:32:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:17.537 05:32:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.537 05:32:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:17.537 05:32:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.537 05:32:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:17.537 05:32:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.537 05:32:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:17.537 05:32:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.537 05:32:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:18.951 05:32:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:18.951 05:32:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:18.951 05:32:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:18.951 05:32:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:18.951 05:32:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:20.856 
05:32:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:20.856 05:32:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:20.856 05:32:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:20.856 05:32:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:20.856 05:32:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:20.856 05:32:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:20.856 05:32:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:20.856 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:20.856 05:32:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:20.856 05:32:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:20.856 05:32:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:20.856 05:32:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:20.856 05:32:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:20.856 05:32:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:20.856 05:32:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:20.856 05:32:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:20.856 05:32:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.856 05:32:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:20.856 05:32:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.856 05:32:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:20.856 05:32:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.856 05:32:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:20.856 05:32:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.856 05:32:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:20.856 05:32:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:20.856 05:32:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.856 05:32:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:20.856 05:32:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.856 05:32:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:20.856 05:32:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:20.856 05:32:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:20.856 [2024-12-13 05:32:20.808535] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:20.856 05:32:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.856 05:32:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:20.856 05:32:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.856 05:32:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:20.856 05:32:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.856 05:32:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:20.856 05:32:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.856 05:32:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:20.856 05:32:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.856 05:32:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:22.236 05:32:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:22.236 05:32:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:22.236 05:32:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:22.236 05:32:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:22.237 05:32:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:24.143 05:32:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:24.143 05:32:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:24.143 05:32:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:24.143 05:32:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:24.143 05:32:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:24.143 05:32:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:24.143 05:32:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:24.143 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:24.143 05:32:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:24.143 05:32:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:24.143 05:32:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:24.143 05:32:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 
00:16:24.144 05:32:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:24.144 05:32:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:24.144 05:32:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:24.144 05:32:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:24.144 05:32:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.144 05:32:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:24.144 05:32:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.144 05:32:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:24.144 05:32:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.144 05:32:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:24.144 05:32:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.144 05:32:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:24.144 05:32:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:24.144 05:32:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.144 05:32:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:24.144 05:32:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.144 05:32:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:24.144 05:32:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.144 05:32:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:24.144 [2024-12-13 05:32:24.124607] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:24.144 05:32:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.144 05:32:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:24.144 05:32:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.144 05:32:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:24.144 05:32:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.144 05:32:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:24.144 05:32:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.144 05:32:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:24.144 05:32:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.144 05:32:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:25.522 05:32:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:25.522 05:32:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:25.522 05:32:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:25.522 05:32:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:25.522 05:32:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:27.429 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:27.429 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:27.429 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:27.429 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:27.429 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:27.429 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:27.429 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:27.429 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:27.429 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:27.429 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:27.429 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:27.429 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:27.429 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:27.429 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:27.429 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:27.429 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:27.429 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.429 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:27.429 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.429 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:27.429 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.429 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:27.429 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.429 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:16:27.429 
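(All five passes of the loop above follow the same shape. Condensed into plain commands, one iteration of the attach/detach cycle looks like the sketch below: assembled from the trace, assuming SPDK's scripts/rpc.py and nvme-cli are on PATH, and noting that rpc_cmd in the log is the harness wrapper around rpc.py.)

    # target side: build subsystem, listener, namespace (target/rpc.sh 82-85)
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
    rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
    # host side: attach, poll lsblk for the serial, detach (rpc.sh 86-91)
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME   # waitforserial loops on this count
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    # target side: tear down namespace and subsystem (rpc.sh 93-94)
    rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
    rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1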
05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:27.429 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:27.429 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.429 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:27.429 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.429 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:27.429 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.429 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:27.429 [2024-12-13 05:32:27.440850] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:27.689 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.689 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:27.689 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.689 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:27.689 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.689 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:27.689 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.689 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:27.689 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.689 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:27.689 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.689 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:27.689 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.689 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:27.689 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.689 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:27.689 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.689 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:27.689 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:27.689 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.689 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:16:27.689 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.689 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:27.689 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.689 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:27.689 [2024-12-13 05:32:27.492982] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:27.689 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.689 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:27.689 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.689 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:27.689 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.689 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:27.689 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.689 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:27.689 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.689 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:27.689 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.689 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:27.689 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.689 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:27.689 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.689 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:27.689 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.689 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:27.689 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:27.689 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.689 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:27.689 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.689 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:27.689 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.689 
05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:27.689 [2024-12-13 05:32:27.541101] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:27.689 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.689 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:27.689 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.689 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:27.689 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.689 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:27.689 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.689 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:27.689 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.689 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:27.689 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.689 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:27.689 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.689 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:27.689 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.689 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:27.689 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.689 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:27.689 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:27.689 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.689 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:27.689 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.689 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:27.689 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.689 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:27.689 [2024-12-13 05:32:27.589283] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:27.689 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.689 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:27.689 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.689 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:27.689 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.689 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:27.689 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.689 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:27.689 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.689 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:27.689 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.689 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:27.689 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.689 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:27.689 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.689 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:27.689 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.689 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:27.689 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:27.689 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.689 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:27.689 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.689 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:27.689 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.689 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:27.689 [2024-12-13 05:32:27.641468] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:27.689 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.689 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:27.689 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.689 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:27.689 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.689 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:27.689 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.689 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:27.689 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.689 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:27.689 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.689 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:27.689 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.689 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:27.690 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.690 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:27.690 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.690 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:16:27.690 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.690 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:27.690 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.690 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:16:27.690 "tick_rate": 2100000000, 00:16:27.690 "poll_groups": [ 00:16:27.690 { 00:16:27.690 "name": "nvmf_tgt_poll_group_000", 00:16:27.690 "admin_qpairs": 2, 00:16:27.690 "io_qpairs": 168, 00:16:27.690 "current_admin_qpairs": 0, 00:16:27.690 "current_io_qpairs": 0, 00:16:27.690 "pending_bdev_io": 0, 00:16:27.690 "completed_nvme_io": 209, 00:16:27.690 "transports": [ 00:16:27.690 { 00:16:27.690 "trtype": "TCP" 00:16:27.690 } 00:16:27.690 ] 00:16:27.690 }, 00:16:27.690 { 00:16:27.690 "name": "nvmf_tgt_poll_group_001", 00:16:27.690 "admin_qpairs": 2, 00:16:27.690 "io_qpairs": 168, 00:16:27.690 "current_admin_qpairs": 0, 00:16:27.690 "current_io_qpairs": 0, 00:16:27.690 "pending_bdev_io": 0, 00:16:27.690 "completed_nvme_io": 304, 00:16:27.690 "transports": [ 00:16:27.690 { 00:16:27.690 "trtype": "TCP" 00:16:27.690 } 00:16:27.690 ] 00:16:27.690 }, 00:16:27.690 { 00:16:27.690 "name": "nvmf_tgt_poll_group_002", 00:16:27.690 "admin_qpairs": 1, 00:16:27.690 "io_qpairs": 168, 00:16:27.690 "current_admin_qpairs": 0, 00:16:27.690 "current_io_qpairs": 0, 00:16:27.690 "pending_bdev_io": 0, 00:16:27.690 "completed_nvme_io": 193, 00:16:27.690 "transports": [ 00:16:27.690 { 00:16:27.690 "trtype": "TCP" 00:16:27.690 } 00:16:27.690 ] 00:16:27.690 }, 00:16:27.690 { 00:16:27.690 "name": "nvmf_tgt_poll_group_003", 00:16:27.690 "admin_qpairs": 2, 00:16:27.690 "io_qpairs": 168, 00:16:27.690 "current_admin_qpairs": 0, 00:16:27.690 "current_io_qpairs": 0, 00:16:27.690 "pending_bdev_io": 0, 00:16:27.690 "completed_nvme_io": 316, 00:16:27.690 "transports": [ 00:16:27.690 { 00:16:27.690 "trtype": "TCP" 00:16:27.690 } 00:16:27.690 ] 00:16:27.690 } 00:16:27.690 ] 00:16:27.690 }' 00:16:27.690 05:32:27 
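(The nvmf_get_stats dump above is what the jsum checks that follow consume: jq extracts one counter per poll group and awk sums the column. A standalone equivalent of that pipeline, as a sketch assuming rpc.py is on PATH:

    rpc.py nvmf_get_stats | jq '.poll_groups[].admin_qpairs' | awk '{s+=$1} END {print s}'   # 2+2+1+2 = 7 in the dump above
    rpc.py nvmf_get_stats | jq '.poll_groups[].io_qpairs'    | awk '{s+=$1} END {print s}'   # 4 groups x 168 = 672

Both assertions that follow only require the sums to be greater than zero, so the exact counts are informational.)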
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:16:27.690 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:16:27.690 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:16:27.690 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:27.949 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:16:27.949 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:16:27.949 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:16:27.949 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:16:27.949 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:27.949 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 672 > 0 )) 00:16:27.949 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:16:27.949 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:16:27.949 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:16:27.949 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:27.949 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:16:27.949 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:27.949 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:16:27.949 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:27.949 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:27.949 rmmod nvme_tcp 00:16:27.949 rmmod nvme_fabrics 00:16:27.949 rmmod nvme_keyring 00:16:27.949 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:27.949 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:16:27.949 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:16:27.949 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 267232 ']' 00:16:27.949 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 267232 00:16:27.949 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 267232 ']' 00:16:27.949 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 267232 00:16:27.949 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:16:27.949 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:27.949 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 267232 00:16:27.949 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:27.949 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:27.949 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 267232' 
00:16:27.949 killing process with pid 267232 00:16:27.949 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 267232 00:16:27.949 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 267232 00:16:28.209 05:32:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:28.209 05:32:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:28.209 05:32:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:28.209 05:32:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:16:28.209 05:32:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:16:28.209 05:32:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:28.209 05:32:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:16:28.209 05:32:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:28.209 05:32:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:28.209 05:32:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:28.209 05:32:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:28.209 05:32:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:30.747 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:30.747 00:16:30.747 real 0m32.585s 00:16:30.747 user 1m38.490s 00:16:30.747 sys 0m6.425s 00:16:30.747 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:30.747 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:30.747 ************************************ 00:16:30.747 END TEST nvmf_rpc 00:16:30.747 ************************************ 00:16:30.747 05:32:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:16:30.747 05:32:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:30.747 05:32:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:30.747 05:32:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:30.747 ************************************ 00:16:30.747 START TEST nvmf_invalid 00:16:30.747 ************************************ 00:16:30.747 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:16:30.747 * Looking for test storage... 
00:16:30.747 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:30.747 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:30.747 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lcov --version 00:16:30.747 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:30.747 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:30.747 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:30.747 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:30.747 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:30.747 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:16:30.747 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:16:30.747 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:16:30.747 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:16:30.747 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:16:30.747 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:16:30.747 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:16:30.747 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:30.747 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:16:30.747 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:16:30.747 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:30.747 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:30.747 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:16:30.747 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:16:30.747 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:30.747 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:16:30.747 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:16:30.747 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:16:30.747 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:16:30.747 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:30.747 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:16:30.747 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:16:30.747 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:30.747 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:30.747 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:16:30.747 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:30.747 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:30.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:30.747 --rc genhtml_branch_coverage=1 00:16:30.747 --rc genhtml_function_coverage=1 00:16:30.747 --rc genhtml_legend=1 00:16:30.747 --rc geninfo_all_blocks=1 00:16:30.747 --rc geninfo_unexecuted_blocks=1 00:16:30.747 00:16:30.747 ' 00:16:30.747 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:30.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:30.747 --rc genhtml_branch_coverage=1 00:16:30.747 --rc genhtml_function_coverage=1 00:16:30.747 --rc genhtml_legend=1 00:16:30.747 --rc geninfo_all_blocks=1 00:16:30.747 --rc geninfo_unexecuted_blocks=1 00:16:30.747 00:16:30.747 ' 00:16:30.747 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:30.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:30.747 --rc genhtml_branch_coverage=1 00:16:30.747 --rc genhtml_function_coverage=1 00:16:30.747 --rc genhtml_legend=1 00:16:30.747 --rc geninfo_all_blocks=1 00:16:30.747 --rc geninfo_unexecuted_blocks=1 00:16:30.747 00:16:30.747 ' 00:16:30.747 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:30.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:30.747 --rc genhtml_branch_coverage=1 00:16:30.747 --rc genhtml_function_coverage=1 00:16:30.747 --rc genhtml_legend=1 00:16:30.747 --rc geninfo_all_blocks=1 00:16:30.747 --rc geninfo_unexecuted_blocks=1 00:16:30.747 00:16:30.747 ' 00:16:30.747 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:30.747 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:16:30.747 05:32:30 
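(The scripts/common.sh calls traced above are the coverage-tooling version gate: lt 1.15 2 hands off to cmp_versions, which splits each version string on '.', '-' and ':' and compares the fields numerically, and the outcome decides which LCOV_OPTS get exported. A minimal equivalent using sort -V, offered as a sketch rather than the harness's own code:

    ver_lt() {  # true when $1 sorts strictly before $2 as a version string
        [ "$1" != "$2" ] && [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
    }
    ver_lt 1.15 2 && echo 'lcov is older than 2'
)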
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:30.747 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:30.747 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:30.747 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:30.747 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:30.747 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:30.747 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:30.747 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:30.747 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:30.747 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:30.747 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:30.747 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:16:30.747 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:30.747 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:30.747 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:30.747 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:30.747 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:30.747 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:16:30.747 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:30.748 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:30.748 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:30.748 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:30.748 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:30.748 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:30.748 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:16:30.748 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:30.748 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:16:30.748 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:30.748 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:30.748 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:30.748 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:30.748 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:30.748 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:30.748 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:30.748 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:30.748 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:30.748 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:30.748 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:16:30.748 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:30.748 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:16:30.748 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:16:30.748 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:16:30.748 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:16:30.748 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:30.748 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:30.748 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:30.748 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:30.748 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:30.748 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:30.748 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:30.748 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:30.748 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:30.748 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:30.748 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:16:30.748 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:36.027 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:36.027 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:16:36.027 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:36.027 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:36.027 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:36.027 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:36.027 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:36.027 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:16:36.027 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:36.027 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:16:36.027 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:16:36.027 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:16:36.027 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:16:36.027 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:16:36.027 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:16:36.027 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:36.027 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:36.027 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:36.027 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:36.027 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:36.027 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:36.027 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:36.027 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:36.027 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:36.027 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:36.027 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:36.027 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:36.027 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:36.027 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:36.027 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:36.027 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:36.027 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:36.027 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:36.027 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:36.027 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:16:36.027 Found 0000:af:00.0 (0x8086 - 0x159b) 00:16:36.027 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:36.027 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:36.027 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:36.027 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:36.027 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:36.028 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:36.028 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:16:36.028 Found 0000:af:00.1 (0x8086 - 0x159b) 00:16:36.028 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:36.028 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:16:36.028 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:36.028 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:36.028 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:36.028 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:36.028 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:36.028 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:36.028 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:36.028 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:36.028 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:36.028 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:36.028 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:36.028 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:36.028 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:36.028 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:16:36.028 Found net devices under 0000:af:00.0: cvl_0_0 00:16:36.028 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:36.028 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:36.028 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:36.028 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:36.028 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:36.028 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:36.028 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:36.028 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:36.028 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:16:36.028 Found net devices under 0000:af:00.1: cvl_0_1 00:16:36.028 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:36.028 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:36.028 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:16:36.028 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:36.028 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:36.028 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:36.028 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
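NIC discovery in this harness is keyed on PCI IDs, not interface names: gather_supported_nvmf_pci_devs fills the e810/x722/mlx arrays from a pci_bus_cache map indexed by "vendor:device" (populated elsewhere in common.sh), keeps the e810 hits because both ports on this rig report 0x8086:0x159b, and then resolves each PCI address to its kernel netdev through sysfs, which is where the cvl_0_0/cvl_0_1 names above come from. A simplified sketch of that last resolution step, assuming the standard sysfs layout:

    # map each discovered PCI address to its network interface, as common.sh does
    for pci in 0000:af:00.0 0000:af:00.1; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
        pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the path, keep the ifname
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
    done
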
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:36.028 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:36.028 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:36.028 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:36.028 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:36.028 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:36.028 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:36.028 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:36.028 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:36.028 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:36.028 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:36.028 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:36.028 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:36.028 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:36.290 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:36.290 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:36.290 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:36.290 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:36.290 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:36.290 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:36.290 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:36.290 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:36.290 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:36.290 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:36.290 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.324 ms 00:16:36.290 00:16:36.290 --- 10.0.0.2 ping statistics --- 00:16:36.290 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:36.290 rtt min/avg/max/mdev = 0.324/0.324/0.324/0.000 ms 00:16:36.290 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:36.290 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:36.290 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.165 ms 00:16:36.290 00:16:36.290 --- 10.0.0.1 ping statistics --- 00:16:36.290 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:36.290 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:16:36.290 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:36.290 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:16:36.290 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:36.290 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:36.290 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:36.290 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:36.290 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:36.290 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:36.290 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:36.550 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:16:36.550 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:36.550 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:36.550 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:36.550 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=274736 00:16:36.550 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:36.550 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 274736 00:16:36.550 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 274736 ']' 00:16:36.550 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:36.550 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:36.550 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:36.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:36.550 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:36.550 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:36.550 [2024-12-13 05:32:36.367500] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
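Because both ports live on one host, nvmf_tcp_init pushes the target port into its own network namespace so that traffic between the two ports is routed over the link rather than short-circuited locally: cvl_0_0 becomes 10.0.0.2 inside cvl_0_0_ns_spdk, cvl_0_1 stays in the root namespace as 10.0.0.1, an iptables rule opens TCP port 4420 on the initiator-side interface (the `ipts` wrapper seen above is iptables plus an SPDK_NVMF comment tag), and one ping in each direction proves the path before nvmf_tgt is launched inside the namespace. The same recipe, condensed, using the interface names discovered earlier:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                  # target port leaves the root ns
    ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    # from here on, every target-side command is prefixed: ip netns exec cvl_0_0_ns_spdk ...
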
00:16:36.550 [2024-12-13 05:32:36.367547] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:36.550 [2024-12-13 05:32:36.444158] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:36.550 [2024-12-13 05:32:36.466221] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:36.550 [2024-12-13 05:32:36.466262] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:36.550 [2024-12-13 05:32:36.466272] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:36.550 [2024-12-13 05:32:36.466279] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:36.550 [2024-12-13 05:32:36.466284] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:36.550 [2024-12-13 05:32:36.467742] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:16:36.551 [2024-12-13 05:32:36.467853] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:16:36.551 [2024-12-13 05:32:36.467961] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:36.551 [2024-12-13 05:32:36.467963] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:16:36.810 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:36.810 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:16:36.810 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:36.810 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:36.810 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:36.810 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:36.810 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:16:36.810 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode6225 00:16:36.810 [2024-12-13 05:32:36.772235] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:16:36.810 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:16:36.810 { 00:16:36.810 "nqn": "nqn.2016-06.io.spdk:cnode6225", 00:16:36.810 "tgt_name": "foobar", 00:16:36.810 "method": "nvmf_create_subsystem", 00:16:36.810 "req_id": 1 00:16:36.810 } 00:16:36.810 Got JSON-RPC error response 00:16:36.810 response: 00:16:36.810 { 00:16:36.810 "code": -32603, 00:16:36.810 "message": "Unable to find target foobar" 00:16:36.810 }' 00:16:36.810 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:16:36.810 { 00:16:36.810 "nqn": "nqn.2016-06.io.spdk:cnode6225", 00:16:36.810 "tgt_name": "foobar", 00:16:36.810 "method": "nvmf_create_subsystem", 00:16:36.810 "req_id": 1 00:16:36.810 } 00:16:36.810 Got JSON-RPC error response 00:16:36.810 
response: 00:16:36.810 { 00:16:36.810 "code": -32603, 00:16:36.810 "message": "Unable to find target foobar" 00:16:36.810 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:16:36.810 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:16:36.810 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode13254 00:16:37.069 [2024-12-13 05:32:36.988967] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13254: invalid serial number 'SPDKISFASTANDAWESOME' 00:16:37.069 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:16:37.069 { 00:16:37.069 "nqn": "nqn.2016-06.io.spdk:cnode13254", 00:16:37.069 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:16:37.069 "method": "nvmf_create_subsystem", 00:16:37.069 "req_id": 1 00:16:37.069 } 00:16:37.069 Got JSON-RPC error response 00:16:37.069 response: 00:16:37.069 { 00:16:37.069 "code": -32602, 00:16:37.069 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:16:37.069 }' 00:16:37.069 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:16:37.069 { 00:16:37.069 "nqn": "nqn.2016-06.io.spdk:cnode13254", 00:16:37.069 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:16:37.069 "method": "nvmf_create_subsystem", 00:16:37.069 "req_id": 1 00:16:37.069 } 00:16:37.069 Got JSON-RPC error response 00:16:37.069 response: 00:16:37.069 { 00:16:37.069 "code": -32602, 00:16:37.069 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:16:37.069 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:16:37.069 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:16:37.069 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode25027 00:16:37.329 [2024-12-13 05:32:37.201660] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25027: invalid model number 'SPDK_Controller' 00:16:37.329 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:16:37.329 { 00:16:37.329 "nqn": "nqn.2016-06.io.spdk:cnode25027", 00:16:37.329 "model_number": "SPDK_Controller\u001f", 00:16:37.329 "method": "nvmf_create_subsystem", 00:16:37.329 "req_id": 1 00:16:37.329 } 00:16:37.329 Got JSON-RPC error response 00:16:37.329 response: 00:16:37.329 { 00:16:37.329 "code": -32602, 00:16:37.329 "message": "Invalid MN SPDK_Controller\u001f" 00:16:37.329 }' 00:16:37.329 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:16:37.329 { 00:16:37.329 "nqn": "nqn.2016-06.io.spdk:cnode25027", 00:16:37.329 "model_number": "SPDK_Controller\u001f", 00:16:37.329 "method": "nvmf_create_subsystem", 00:16:37.329 "req_id": 1 00:16:37.329 } 00:16:37.329 Got JSON-RPC error response 00:16:37.329 response: 00:16:37.329 { 00:16:37.329 "code": -32602, 00:16:37.329 "message": "Invalid MN SPDK_Controller\u001f" 00:16:37.329 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:16:37.329 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:16:37.329 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:16:37.329 05:32:37 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:16:37.329 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:16:37.329 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:16:37.329 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:16:37.329 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.329 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:16:37.329 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:16:37.329 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:16:37.329 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.329 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.329 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:16:37.329 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:16:37.329 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:16:37.329 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.329 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.329 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:16:37.330 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:16:37.330 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:16:37.330 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.330 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.330 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:16:37.330 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:16:37.330 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
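The three rejections above all follow invalid.sh's one pattern: call nvmf_create_subsystem through rpc.py with a single malformed argument, capture the JSON-RPC error, and glob-match the message. A nonexistent target name ("foobar") yields -32603 "Unable to find target"; a serial number or model number with a trailing 0x1f byte yields -32602 "Invalid SN"/"Invalid MN", since 0x1f is a non-printable control character. One probe spelled out, using the rpc.py path from this run:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # expect -32602 "Invalid SN ...": the serial carries a 0x1f control byte
    out=$($rpc nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\x1f' \
          nqn.2016-06.io.spdk:cnode13254 2>&1) || true
    [[ $out == *"Invalid SN"* ]] && echo 'rejected as expected'

The printf/echo run underway here is gen_random_s assembling a 21-character serial for the next probe, one byte over the 20-byte NVMe serial-number field, so it will be rejected for length rather than content.
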
00:16:37.330 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.330 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.330 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:16:37.330 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:16:37.330 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:16:37.330 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.330 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.330 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:16:37.330 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:16:37.330 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:16:37.330 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.330 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.330 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:16:37.330 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:16:37.330 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:16:37.330 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.330 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.330 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:16:37.330 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:16:37.330 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:16:37.330 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.330 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.330 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:16:37.330 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:16:37.330 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:16:37.330 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.330 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.330 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:16:37.330 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:16:37.330 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:16:37.330 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.330 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.330 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:16:37.330 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e 
'\x70' 00:16:37.330 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:16:37.330 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.330 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.330 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:16:37.330 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:16:37.330 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:16:37.330 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.330 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.330 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:16:37.330 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:16:37.330 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:16:37.330 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.330 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.330 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:16:37.330 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:16:37.330 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:16:37.330 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.330 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.330 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:16:37.330 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:16:37.330 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:16:37.330 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.330 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.330 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:16:37.330 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:16:37.330 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:16:37.330 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.330 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.330 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:16:37.330 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:16:37.590 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:16:37.590 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.590 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.590 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # 
printf %x 123 00:16:37.590 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:16:37.590 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:16:37.590 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.590 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.590 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:16:37.590 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:16:37.590 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:16:37.590 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.590 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.590 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:16:37.590 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:16:37.590 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:16:37.590 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.590 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.590 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:16:37.590 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:16:37.590 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:16:37.590 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.590 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.590 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ G == \- ]] 00:16:37.590 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'G:B?>mEL'\''Tp\y>H"'\''{/G$' 00:16:37.590 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'G:B?>mEL'\''Tp\y>H"'\''{/G$' nqn.2016-06.io.spdk:cnode12876 00:16:37.590 [2024-12-13 05:32:37.538802] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12876: invalid serial number 'G:B?>mEL'Tp\y>H"'{/G$' 00:16:37.590 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:16:37.590 { 00:16:37.590 "nqn": "nqn.2016-06.io.spdk:cnode12876", 00:16:37.590 "serial_number": "G:B?>mEL'\''Tp\\y>H\"'\''{/G$", 00:16:37.590 "method": "nvmf_create_subsystem", 00:16:37.590 "req_id": 1 00:16:37.590 } 00:16:37.590 Got JSON-RPC error response 00:16:37.590 response: 00:16:37.590 { 00:16:37.590 "code": -32602, 00:16:37.590 "message": "Invalid SN G:B?>mEL'\''Tp\\y>H\"'\''{/G$" 00:16:37.590 }' 00:16:37.590 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:16:37.590 { 00:16:37.590 "nqn": "nqn.2016-06.io.spdk:cnode12876", 00:16:37.590 "serial_number": "G:B?>mEL'Tp\\y>H\"'{/G$", 00:16:37.590 "method": "nvmf_create_subsystem", 00:16:37.590 "req_id": 1 00:16:37.590 } 00:16:37.590 Got JSON-RPC error response 00:16:37.590 response: 
00:16:37.590 { 00:16:37.590 "code": -32602, 00:16:37.590 "message": "Invalid SN G:B?>mEL'Tp\\y>H\"'{/G$" 00:16:37.590 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:16:37.590 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:16:37.590 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:16:37.590 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:16:37.590 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:16:37.590 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:16:37.590 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:16:37.590 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.590 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:16:37.590 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:16:37.590 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:16:37.590 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.590 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.590 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:16:37.590 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:16:37.590 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:16:37.590 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.590 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.590 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:16:37.590 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:16:37.590 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:16:37.590 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.590 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.590 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:16:37.590 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:16:37.590 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:16:37.590 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.590 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.590 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@25 -- # printf %x 43 00:16:37.590 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:16:37.590 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:16:37.590 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.590 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.851 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:16:37.851 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:16:37.851 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:16:37.851 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.851 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.851 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:16:37.851 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:16:37.851 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:16:37.851 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.851 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.851 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:16:37.851 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:16:37.851 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:16:37.851 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.851 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.851 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:16:37.851 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:16:37.851 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
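These per-character stanzas are gen_random_s at work: each iteration picks a code point from the chars array (32 through 127), prints it as hex with printf %x, and decodes it back to a character with echo -e before appending, which is why every byte of the string costs several traced commands. The 21-character serial built this way was rejected above as "Invalid SN"; the loop running here is building a 41-character model number, one over the 40-byte NVMe MN field. Since invalid.sh set RANDOM=0 at the top, the "random" strings are identical on every run. A condensed sketch of the helper, with the same seeded $RANDOM source assumed:

    gen_random_s() {
        local length=$1 ll hex string=
        for ((ll = 0; ll < length; ll++)); do
            hex=$(printf %x $((32 + RANDOM % 96)))   # a code point in [32,127]
            # hex back to a character, as traced; a 0x20 byte would be eaten
            # by the command substitution here -- close enough for a sketch
            string+=$(echo -e "\x$hex")
        done
        echo "$string"
    }
    RANDOM=0              # matches the seed set at the top of invalid.sh
    gen_random_s 41       # 41 bytes: one over the 40-byte model-number field
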
00:16:37.851 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.851 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.851 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:16:37.851 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:16:37.851 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:16:37.851 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.851 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.851 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:16:37.851 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:16:37.851 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:16:37.851 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.851 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.851 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:16:37.851 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:16:37.851 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:16:37.851 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.851 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.851 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:16:37.851 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:16:37.851 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:16:37.851 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.851 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.851 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:16:37.851 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:16:37.851 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:16:37.851 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.851 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.851 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:16:37.851 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:16:37.851 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:16:37.851 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.851 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.851 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:16:37.851 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e 
'\x26' 00:16:37.851 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:16:37.851 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.851 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.851 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:16:37.851 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:16:37.851 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:16:37.851 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.851 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.851 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:16:37.851 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:16:37.851 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:16:37.851 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.851 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.851 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:16:37.851 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:16:37.851 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:16:37.851 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.851 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.851 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:16:37.851 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:16:37.852 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:16:37.852 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.852 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.852 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:16:37.852 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:16:37.852 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:16:37.852 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.852 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.852 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:16:37.852 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:16:37.852 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:16:37.852 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.852 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.852 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # 
printf %x 81 00:16:37.852 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:16:37.852 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:16:37.852 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.852 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.852 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:16:37.852 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:16:37.852 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:16:37.852 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.852 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.852 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:16:37.852 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:16:37.852 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:16:37.852 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.852 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.852 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:16:37.852 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:16:37.852 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:16:37.852 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.852 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.852 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:16:37.852 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:16:37.852 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:16:37.852 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.852 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.852 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:16:37.852 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:16:37.852 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:16:37.852 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.852 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.852 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:16:37.852 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:16:37.852 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:16:37.852 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.852 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll 
< length )) 00:16:37.852 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:16:37.852 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:16:37.852 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:16:37.852 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.852 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.852 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:16:37.852 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:16:37.852 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:16:37.852 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.852 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.852 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:16:37.852 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:16:37.852 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:16:37.852 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.852 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.852 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:16:37.852 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:16:37.852 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:16:37.852 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.852 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.852 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:16:37.852 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:16:37.852 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:16:37.852 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.852 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.852 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:16:37.852 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:16:37.852 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:16:37.852 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.852 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.852 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:16:37.852 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:16:37.852 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:16:37.852 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( 
ll++ )) 00:16:37.852 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.852 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:16:37.852 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:16:37.852 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 00:16:37.852 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.852 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.852 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:16:37.852 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:16:37.852 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:16:37.852 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.852 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.852 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:16:37.852 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:16:37.852 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:16:37.852 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.852 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.852 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:16:37.852 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:16:37.852 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:16:37.852 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.852 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.852 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:16:37.852 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:16:37.852 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:16:37.852 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.852 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.852 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ U == \- ]] 00:16:37.852 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'UBx,+S"N."6mV4f&R|I7zQV,hs>2T60&rg<!m[pg' 00:16:40.189 05:32:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:40.189 05:32:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:42.097 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:42.097 00:16:42.097 real 0m11.863s 00:16:42.097 user 0m18.479s 00:16:42.097 sys 0m5.218s 00:16:42.097 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:42.097 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid --
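The @24/@25 trace above is invalid.sh assembling a random NQN candidate one character at a time: pick a code point, render it with printf %x plus echo -e, and append it. A minimal sketch of that pattern; the function name and the length argument are assumed here, since the trace shows only the loop body:

    gen_random_s() {   # name assumed; only the loop body is visible in the trace (invalid.sh @24/@25)
        local length=$1 ll code string=
        for ((ll = 0; ll < length; ll++)); do
            printf -v code '%x' $((RANDOM % 95 + 32))   # a printable ASCII code, 0x20..0x7e
            string+=$(echo -e "\x$code")                # the printf %x / echo -e pairing, as traced;
        done                                            # note $() would strip a generated space
        echo "$string"
    }
    gen_random_s 40   # e.g. UBx,+S"N."6mV4f&R|I7zQV,hs>2T60&rg<!m[pg

The [[ U == \- ]] check afterwards only guards against a leading dash, so the finished string can be handed to the test as a positional argument.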
common/autotest_common.sh@10 -- # set +x 00:16:42.097 ************************************ 00:16:42.097 END TEST nvmf_invalid 00:16:42.097 ************************************ 00:16:42.357 05:32:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:16:42.357 05:32:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:42.357 05:32:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:42.357 05:32:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:42.357 ************************************ 00:16:42.357 START TEST nvmf_connect_stress 00:16:42.357 ************************************ 00:16:42.357 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:16:42.357 * Looking for test storage... 00:16:42.357 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:42.357 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:42.357 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:16:42.357 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:42.357 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:42.357 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:42.357 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:42.357 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:42.357 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:16:42.357 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:16:42.357 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:16:42.357 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:16:42.357 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:16:42.357 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:16:42.357 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:16:42.357 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:42.357 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:16:42.357 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:16:42.357 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:42.357 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:42.357 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:16:42.357 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:16:42.357 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:42.357 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:16:42.357 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:16:42.357 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:16:42.357 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:16:42.357 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:42.357 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:16:42.357 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:16:42.357 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:42.357 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:42.357 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:16:42.357 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:42.357 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:42.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:42.357 --rc genhtml_branch_coverage=1 00:16:42.357 --rc genhtml_function_coverage=1 00:16:42.357 --rc genhtml_legend=1 00:16:42.357 --rc geninfo_all_blocks=1 00:16:42.357 --rc geninfo_unexecuted_blocks=1 00:16:42.357 00:16:42.357 ' 00:16:42.357 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:42.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:42.357 --rc genhtml_branch_coverage=1 00:16:42.357 --rc genhtml_function_coverage=1 00:16:42.357 --rc genhtml_legend=1 00:16:42.357 --rc geninfo_all_blocks=1 00:16:42.357 --rc geninfo_unexecuted_blocks=1 00:16:42.357 00:16:42.357 ' 00:16:42.357 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:42.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:42.357 --rc genhtml_branch_coverage=1 00:16:42.357 --rc genhtml_function_coverage=1 00:16:42.357 --rc genhtml_legend=1 00:16:42.358 --rc geninfo_all_blocks=1 00:16:42.358 --rc geninfo_unexecuted_blocks=1 00:16:42.358 00:16:42.358 ' 00:16:42.358 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:42.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:42.358 --rc genhtml_branch_coverage=1 00:16:42.358 --rc genhtml_function_coverage=1 00:16:42.358 --rc genhtml_legend=1 00:16:42.358 --rc geninfo_all_blocks=1 00:16:42.358 --rc geninfo_unexecuted_blocks=1 00:16:42.358 00:16:42.358 ' 00:16:42.358 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
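The scripts/common.sh trace above (cmp_versions, decimal, the ver1/ver2 arrays) is probing whether the installed lcov predates version 2 before choosing coverage options. A compact sketch of that comparison, simplified to purely numeric components; the real helper also validates each field through its decimal() check:

    lt() { cmp_versions "$1" "<" "$2"; }
    cmp_versions() {   # usage: cmp_versions 1.15 '<' 2
        local IFS=.-:   # split on the same separators the trace shows (IFS=.-:)
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for ((v = 0; v < max; v++)); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $2 == "<" ]]; return; }
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $2 == ">" ]]; return; }
        done
        return 1   # equal versions satisfy neither strict comparison
    }
    lt 1.15 2 && echo "lcov is older than 2"

In this run the probe lands on the lcov_branch_coverage/lcov_function_coverage spellings exported just above, which are the pre-2.0 option names.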
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:42.358 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:16:42.358 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:42.358 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:42.358 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:42.358 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:42.358 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:42.358 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:42.358 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:42.358 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:42.358 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:42.358 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:42.358 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:42.358 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:16:42.358 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:42.358 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:42.358 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:42.358 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:42.358 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:42.618 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:16:42.618 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:42.618 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:42.618 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:42.618 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:42.618 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:42.618 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:42.618 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:16:42.618 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:42.618 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:16:42.618 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:42.618 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:42.618 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:42.618 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:42.618 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:42.618 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:16:42.618 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:42.618 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:42.618 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:42.618 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:42.618 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:16:42.618 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:42.618 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:42.618 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:42.618 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:42.618 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:42.618 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:42.618 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:42.618 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:42.618 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:42.618 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:42.618 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:16:42.618 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:49.195 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:49.195 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:16:49.195 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:49.195 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:49.195 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:49.195 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:49.195 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:49.195 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:16:49.195 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:49.195 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:16:49.195 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:16:49.195 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:16:49.195 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:16:49.195 05:32:47 
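The "[: : integer expression expected" message logged above is bash's complaint about '[' '' -eq 1 ']': the -eq operator demands integers on both sides, and the test flag checked at nvmf/common.sh line 33 expands to an empty string in this run. The failed test simply falls through, so the script continues. A two-line illustration, with a defensive spelling as a suggestion rather than the upstream code:

    flag=
    [ "$flag" -eq 1 ]        # -> [: : integer expression expected (exit status 2)
    [ "${flag:-0}" -eq 1 ]   # defaulting empty/unset to 0 keeps the test well-formed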
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:16:49.195 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:16:49.195 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:49.195 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:49.195 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:49.195 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:49.195 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:49.195 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:49.195 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:49.195 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:49.195 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:49.195 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:49.195 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:49.195 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:49.195 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:49.195 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:49.195 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:49.195 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:49.195 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:49.195 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:49.195 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:49.195 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:16:49.195 Found 0000:af:00.0 (0x8086 - 0x159b) 00:16:49.195 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:49.195 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:49.195 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:49.195 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:49.195 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:49.195 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:49.195 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:16:49.195 Found 0000:af:00.1 (0x8086 - 0x159b) 00:16:49.195 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:49.195 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:49.195 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:49.195 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:49.195 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:49.195 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:49.195 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:49.195 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:49.195 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:49.195 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:49.195 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:49.195 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:49.195 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:49.195 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:49.195 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:49.195 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:16:49.195 Found net devices under 0000:af:00.0: cvl_0_0 00:16:49.195 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:49.195 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:49.195 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:49.196 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:49.196 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:49.196 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:49.196 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:49.196 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:49.196 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:16:49.196 Found net devices under 0000:af:00.1: cvl_0_1 00:16:49.196 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 
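The @410-@428 block running here resolves each whitelisted PCI function to its kernel network interface by globbing sysfs. The same walk in isolation, using the two addresses found in this log and the same ##*/ expansion the trace shows:

    for pci in 0000:af:00.0 0000:af:00.1; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # one entry per netdev of the function
        pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the sysfs path, keep the names
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
    done

On this machine both E810 ports resolve to the renamed interfaces cvl_0_0 and cvl_0_1, which is why the namespace plumbing further down can address them by name.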
-- # net_devs+=("${pci_net_devs[@]}") 00:16:49.196 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:49.196 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:16:49.196 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:49.196 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:49.196 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:49.196 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:49.196 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:49.196 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:49.196 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:49.196 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:49.196 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:49.196 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:49.196 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:49.196 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:49.196 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:49.196 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:49.196 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:49.196 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:49.196 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:49.196 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:49.196 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:49.196 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:49.196 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:49.196 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:49.196 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:49.196 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:49.196 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:49.196 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:49.196 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:49.196 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.401 ms 00:16:49.196 00:16:49.196 --- 10.0.0.2 ping statistics --- 00:16:49.196 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:49.196 rtt min/avg/max/mdev = 0.401/0.401/0.401/0.000 ms 00:16:49.196 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:49.196 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:49.196 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.140 ms 00:16:49.196 00:16:49.196 --- 10.0.0.1 ping statistics --- 00:16:49.196 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:49.196 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:16:49.196 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:49.196 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:16:49.196 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:49.196 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:49.196 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:49.196 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:49.196 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:49.196 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:49.196 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:49.196 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:16:49.196 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:49.196 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:49.196 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:49.196 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=279036 00:16:49.196 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 279036 00:16:49.196 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:16:49.196 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 279036 ']' 00:16:49.196 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:49.196 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:49.196 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
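nvmf_tcp_init above splits the two interfaces across a network namespace so one host can act as both initiator and target. The traced commands, collected in order; the device and namespace names are the ones in this log, and the real helper additionally tags the iptables rule with an -m comment so it can be removed at teardown:

    ip netns add cvl_0_0_ns_spdk                                  # target side gets its own namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator stays in the default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # let NVMe/TCP through
    ping -c 1 10.0.0.2                                            # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1              # target -> initiator

The two pings are the gate: both directions must answer before any NVMe traffic is attempted.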
/var/tmp/spdk.sock...' 00:16:49.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:49.196 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:49.196 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:49.196 [2024-12-13 05:32:48.345763] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:16:49.196 [2024-12-13 05:32:48.345810] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:49.196 [2024-12-13 05:32:48.427411] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:49.196 [2024-12-13 05:32:48.449413] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:49.196 [2024-12-13 05:32:48.449451] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:49.196 [2024-12-13 05:32:48.449458] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:49.196 [2024-12-13 05:32:48.449464] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:49.196 [2024-12-13 05:32:48.449469] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:49.196 [2024-12-13 05:32:48.450703] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:16:49.196 [2024-12-13 05:32:48.450810] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:16:49.196 [2024-12-13 05:32:48.450811] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:16:49.196 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:49.196 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:16:49.196 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:49.196 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:49.196 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:49.196 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:49.196 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:49.196 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.196 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:49.196 [2024-12-13 05:32:48.582302] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:49.196 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.196 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:49.196 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:49.196 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:49.196 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.196 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:49.196 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.196 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:49.196 [2024-12-13 05:32:48.606537] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:49.196 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.196 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:16:49.196 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.196 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:49.196 NULL1 00:16:49.196 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.196 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=279082 00:16:49.196 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:16:49.197 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:16:49.197 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:16:49.197 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:16:49.197 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:49.197 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:49.197 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:49.197 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:49.197 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:49.197 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:49.197 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:49.197 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:49.197 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:49.197 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:49.197 05:32:48 
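The rpc_cmd calls above provision the target end to end. Written out as direct rpc.py invocations against the app's default /var/tmp/spdk.sock (rpc_cmd is a thin wrapper around this script; the flag values are exactly the ones traced):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192                   # TCP transport, -u sets in-capsule data size
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001 -m 10                             # allow any host, serial, max 10 namespaces
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420                                 # the namespaced address pinged earlier
    $RPC bdev_null_create NULL1 1000 512                           # 1000 MiB null bdev, 512-byte blocks

The null bdev gives the subsystem something to serve without touching real media, which is all a connection-stress run needs.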
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:49.197 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:49.197 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:49.197 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:49.197 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:49.197 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:49.197 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:49.197 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:49.197 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:49.197 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:49.197 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:49.197 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:49.197 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:49.197 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:49.197 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:49.197 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:49.197 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:49.197 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:49.197 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:49.197 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:49.197 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:49.197 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:49.197 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:49.197 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:49.197 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:49.197 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:49.197 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:49.197 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:49.197 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:49.197 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:49.197 05:32:48 
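The @27/@28 loop above writes twenty entries into rpc.txt, the batch that the monitor loop below replays while connections churn. Shape of the loop; printf stands in for the traced cat-heredoc, and the payload RPC is a placeholder since the heredoc body never appears in the trace:

    rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt
    rm -f "$rpcs"   # connect_stress.sh@25 does the same before filling the file
    for i in $(seq 1 20); do
        printf '%s\n' "framework_get_reactors" >> "$rpcs"   # placeholder payload
    done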
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279082 00:16:49.197 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:49.197 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.197 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:49.197 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.197 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279082 00:16:49.197 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:49.197 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.197 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:49.456 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.456 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279082 00:16:49.456 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:49.456 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.456 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:49.716 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.716 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279082 00:16:49.716 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:49.716 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.716 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:50.284 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.284 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279082 00:16:50.284 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:50.284 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.284 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:50.543 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.543 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279082 00:16:50.543 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:50.543 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.543 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:50.802 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.802 05:32:50 
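From here to the end of the excerpt the log is a single loop: connect_stress.sh@34 probes the stress process and @35 replays the RPC batch, over and over, until the process exits. Reconstructed from those two trace lines, with the stdin redirection as an assumption:

    while kill -0 "$PERF_PID" 2>/dev/null; do   # kill -0 sends no signal, it only tests PID existence
        rpc_cmd < "$rpcs"                       # assumed: the batched RPCs are fed on stdin
    done

PERF_PID is 279082 here; once it exits, kill -0 fails with "No such process" (visible further down) and the script moves on to wait and cleanup.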
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279082 00:16:50.802 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:50.802 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.802 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:51.062 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.062 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279082 00:16:51.062 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:51.062 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.062 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:51.321 05:32:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.321 05:32:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279082 00:16:51.321 05:32:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:51.321 05:32:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.321 05:32:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:51.889 05:32:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.889 05:32:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279082 00:16:51.889 05:32:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:51.889 05:32:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.889 05:32:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:52.149 05:32:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.149 05:32:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279082 00:16:52.149 05:32:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:52.149 05:32:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.149 05:32:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:52.408 05:32:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.408 05:32:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279082 00:16:52.408 05:32:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:52.408 05:32:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.408 05:32:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:52.667 05:32:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.667 05:32:52 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279082 00:16:52.667 05:32:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:52.667 05:32:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.667 05:32:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:52.927 05:32:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.541 05:32:58
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279082 00:16:58.541 05:32:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:58.541 05:32:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.541 05:32:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:58.800 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:58.800 05:32:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.800 05:32:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279082 00:16:58.800 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (279082) - No such process 00:16:58.800 05:32:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 279082 00:16:58.800 05:32:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:16:58.800 05:32:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:16:58.800 05:32:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:16:58.800 05:32:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:58.800 05:32:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:16:59.060 05:32:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:59.060 05:32:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:16:59.060 05:32:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:59.060 05:32:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:59.060 rmmod nvme_tcp 00:16:59.060 rmmod nvme_fabrics 00:16:59.060 rmmod nvme_keyring 00:16:59.060 05:32:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:59.060 05:32:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:16:59.060 05:32:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:16:59.060 05:32:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 279036 ']' 00:16:59.060 05:32:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 279036 00:16:59.060 05:32:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 279036 ']' 00:16:59.060 05:32:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 279036 00:16:59.060 05:32:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 00:16:59.060 05:32:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:59.060 05:32:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 279036 00:16:59.060 05:32:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 
00:16:59.060 05:32:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:59.060 05:32:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 279036' 00:16:59.060 killing process with pid 279036 00:16:59.060 05:32:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 279036 00:16:59.060 05:32:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 279036 00:16:59.320 05:32:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:59.320 05:32:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:59.320 05:32:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:59.320 05:32:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:16:59.320 05:32:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:16:59.320 05:32:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:59.320 05:32:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:16:59.320 05:32:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:59.320 05:32:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:59.320 05:32:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:59.320 05:32:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:59.320 05:32:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:01.226 05:33:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:01.226 00:17:01.226 real 0m18.982s 00:17:01.226 user 0m41.270s 00:17:01.226 sys 0m6.659s 00:17:01.226 05:33:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:01.226 05:33:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:01.226 ************************************ 00:17:01.226 END TEST nvmf_connect_stress 00:17:01.226 ************************************ 00:17:01.226 05:33:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:17:01.226 05:33:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:01.226 05:33:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:01.226 05:33:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:01.226 ************************************ 00:17:01.226 START TEST nvmf_fused_ordering 00:17:01.226 ************************************ 00:17:01.226 05:33:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:17:01.486 * Looking for test storage... 
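The connect_stress phase that ends above is driven by a simple liveness poll: connect_stress.sh backgrounds the stress tool, then loops on kill -0 <pid> (the @34 entries) and fires an rpc_cmd batch at the target (the @35 entries) until kill -0 fails with "No such process", after which it waits on the PID and removes its rpc.txt. A minimal sketch of that pattern, assuming the harness context where rpc_cmd is defined; stress_tool is a hypothetical stand-in for the real binary, and rpc_get_methods (a stock SPDK RPC) stands in for the script's prepared rpc.txt batch:

  # Poll a backgrounded worker with signal 0 (existence check, nothing delivered)
  # while exercising the target's RPC plane on every pass.
  stress_tool &
  pid=$!
  while kill -0 "$pid" 2>/dev/null; do    # non-zero, "No such process", once it exits
      rpc_cmd rpc_get_methods >/dev/null  # any cheap RPC keeps the target busy
      sleep 0.25
  done
  wait "$pid"                             # reap the worker and collect its status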
00:17:01.486 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:01.486 05:33:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:01.486 05:33:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lcov --version 00:17:01.486 05:33:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:01.486 05:33:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:01.486 05:33:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:01.486 05:33:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:01.486 05:33:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:01.486 05:33:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:17:01.486 05:33:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:17:01.486 05:33:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:17:01.486 05:33:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:17:01.486 05:33:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:17:01.486 05:33:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:17:01.486 05:33:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:17:01.486 05:33:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:01.486 05:33:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:17:01.486 05:33:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:17:01.486 05:33:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:01.486 05:33:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:01.486 05:33:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:17:01.486 05:33:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:17:01.486 05:33:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:01.486 05:33:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:17:01.486 05:33:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:17:01.486 05:33:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:17:01.486 05:33:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:17:01.486 05:33:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:01.486 05:33:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:17:01.486 05:33:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:17:01.486 05:33:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:01.486 05:33:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:01.486 05:33:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:17:01.486 05:33:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:01.486 05:33:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:01.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:01.486 --rc genhtml_branch_coverage=1 00:17:01.486 --rc genhtml_function_coverage=1 00:17:01.486 --rc genhtml_legend=1 00:17:01.486 --rc geninfo_all_blocks=1 00:17:01.486 --rc geninfo_unexecuted_blocks=1 00:17:01.486 00:17:01.486 ' 00:17:01.486 05:33:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:01.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:01.486 --rc genhtml_branch_coverage=1 00:17:01.486 --rc genhtml_function_coverage=1 00:17:01.486 --rc genhtml_legend=1 00:17:01.486 --rc geninfo_all_blocks=1 00:17:01.486 --rc geninfo_unexecuted_blocks=1 00:17:01.486 00:17:01.486 ' 00:17:01.486 05:33:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:01.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:01.486 --rc genhtml_branch_coverage=1 00:17:01.486 --rc genhtml_function_coverage=1 00:17:01.486 --rc genhtml_legend=1 00:17:01.486 --rc geninfo_all_blocks=1 00:17:01.486 --rc geninfo_unexecuted_blocks=1 00:17:01.486 00:17:01.486 ' 00:17:01.486 05:33:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:01.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:01.486 --rc genhtml_branch_coverage=1 00:17:01.486 --rc genhtml_function_coverage=1 00:17:01.486 --rc genhtml_legend=1 00:17:01.486 --rc geninfo_all_blocks=1 00:17:01.486 --rc geninfo_unexecuted_blocks=1 00:17:01.486 00:17:01.486 ' 00:17:01.486 05:33:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:01.486 05:33:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:17:01.486 05:33:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:01.486 05:33:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:01.486 05:33:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:01.486 05:33:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:01.486 05:33:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:01.486 05:33:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:01.486 05:33:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:01.486 05:33:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:01.486 05:33:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:01.486 05:33:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:01.486 05:33:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:01.486 05:33:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:17:01.486 05:33:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:01.486 05:33:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:01.486 05:33:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:01.486 05:33:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:01.486 05:33:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:01.486 05:33:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:17:01.486 05:33:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:01.486 05:33:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:01.486 05:33:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:01.486 05:33:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.486 05:33:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.486 05:33:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.486 05:33:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:17:01.486 05:33:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.486 05:33:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:17:01.486 05:33:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:01.486 05:33:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:01.486 05:33:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:01.486 05:33:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:01.486 05:33:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:01.486 05:33:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:17:01.486 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:01.486 05:33:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:01.486 05:33:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:01.486 05:33:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:01.486 05:33:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:17:01.486 05:33:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:01.486 05:33:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:01.486 05:33:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:01.486 05:33:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:01.486 05:33:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:01.486 05:33:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:01.487 05:33:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:01.487 05:33:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:01.487 05:33:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:01.487 05:33:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:01.487 05:33:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:17:01.487 05:33:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:08.063 05:33:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:08.063 05:33:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:17:08.063 05:33:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:08.063 05:33:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:08.063 05:33:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:08.063 05:33:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:08.063 05:33:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:08.063 05:33:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:17:08.063 05:33:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:08.063 05:33:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:17:08.063 05:33:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:17:08.063 05:33:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:17:08.063 05:33:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:17:08.063 05:33:06 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:17:08.063 05:33:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:17:08.063 05:33:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:08.063 05:33:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:08.064 05:33:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:08.064 05:33:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:08.064 05:33:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:08.064 05:33:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:08.064 05:33:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:08.064 05:33:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:08.064 05:33:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:08.064 05:33:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:08.064 05:33:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:08.064 05:33:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:08.064 05:33:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:08.064 05:33:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:08.064 05:33:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:08.064 05:33:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:08.064 05:33:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:08.064 05:33:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:08.064 05:33:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:08.064 05:33:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:08.064 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:08.064 05:33:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:08.064 05:33:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:08.064 05:33:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:08.064 05:33:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:08.064 05:33:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:08.064 05:33:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:08.064 05:33:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:08.064 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:08.064 05:33:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:08.064 05:33:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:08.064 05:33:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:08.064 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:08.064 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:08.064 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:08.064 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:08.064 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:08.064 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:08.064 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:08.064 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:08.064 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:08.064 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:08.064 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:08.064 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:08.064 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:17:08.064 Found net devices under 0000:af:00.0: cvl_0_0 00:17:08.064 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:08.064 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:08.064 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:08.064 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:08.064 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:08.064 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:08.064 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:08.064 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:08.064 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:08.064 Found net devices under 0000:af:00.1: cvl_0_1 00:17:08.064 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:17:08.064 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:08.064 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:17:08.064 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:08.064 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:08.064 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:08.064 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:08.064 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:08.064 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:08.064 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:08.064 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:08.064 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:08.064 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:08.064 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:08.064 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:08.064 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:08.064 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:08.064 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:08.064 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:08.064 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:08.064 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:08.064 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:08.064 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:08.064 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:08.064 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:08.064 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:08.064 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:08.064 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:08.064 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:08.064 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:08.064 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.354 ms 00:17:08.064 00:17:08.064 --- 10.0.0.2 ping statistics --- 00:17:08.064 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:08.064 rtt min/avg/max/mdev = 0.354/0.354/0.354/0.000 ms 00:17:08.064 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:08.064 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:08.064 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.192 ms 00:17:08.064 00:17:08.064 --- 10.0.0.1 ping statistics --- 00:17:08.064 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:08.064 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:17:08.064 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:08.064 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:17:08.064 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:08.064 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:08.064 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:08.064 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:08.064 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:08.064 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:08.064 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:08.064 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:17:08.065 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:08.065 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:08.065 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:08.065 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=284260 00:17:08.065 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:08.065 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 284260 00:17:08.065 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 284260 ']' 00:17:08.065 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:08.065 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:08.065 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:17:08.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:08.065 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:08.065 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:08.065 [2024-12-13 05:33:07.331282] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:17:08.065 [2024-12-13 05:33:07.331329] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:08.065 [2024-12-13 05:33:07.409443] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:08.065 [2024-12-13 05:33:07.430700] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:08.065 [2024-12-13 05:33:07.430735] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:08.065 [2024-12-13 05:33:07.430742] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:08.065 [2024-12-13 05:33:07.430748] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:08.065 [2024-12-13 05:33:07.430753] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:08.065 [2024-12-13 05:33:07.431227] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:17:08.065 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:08.065 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:17:08.065 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:08.065 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:08.065 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:08.065 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:08.065 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:08.065 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.065 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:08.065 [2024-12-13 05:33:07.573363] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:08.065 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.065 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:08.065 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.065 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:08.065 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:17:08.065 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:08.065 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.065 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:08.065 [2024-12-13 05:33:07.593558] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:08.065 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.065 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:17:08.065 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.065 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:08.065 NULL1 00:17:08.065 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.065 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:17:08.065 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.065 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:08.065 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.065 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:17:08.065 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.065 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:08.065 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.065 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:17:08.065 [2024-12-13 05:33:07.651679] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
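At this point the target that fused_ordering will exercise is fully assembled. The rpc_cmd calls traced above reduce to the following sequence, with flags copied verbatim from the trace; this is a sketch that assumes the same harness context, where rpc_cmd wraps scripts/rpc.py against the nvmf_tgt instance started earlier:

  rpc_cmd nvmf_create_transport -t tcp -o -u 8192                  # TCP transport, 8192-byte IO unit
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
          -a -s SPDK00000000000001 -m 10                           # allow any host, serial number, max 10 namespaces
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
          -t tcp -a 10.0.0.2 -s 4420                               # 10.0.0.2 sits on cvl_0_0 inside cvl_0_0_ns_spdk
  rpc_cmd bdev_null_create NULL1 1000 512                          # 1000 MB null bdev, 512-byte blocks
  rpc_cmd bdev_wait_for_examine
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1   # reported below as "Namespace ID: 1 size: 1GB"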
00:17:08.065 [2024-12-13 05:33:07.651723] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid284382 ] 00:17:08.065 Attached to nqn.2016-06.io.spdk:cnode1 00:17:08.065 Namespace ID: 1 size: 1GB 00:17:08.065 fused_ordering(0) 00:17:08.065 fused_ordering(1) 00:17:08.065 fused_ordering(2) 00:17:08.065 fused_ordering(3) 00:17:08.065 fused_ordering(4) 00:17:08.065 fused_ordering(5) 00:17:08.065 fused_ordering(6) 00:17:08.065 fused_ordering(7) 00:17:08.065 fused_ordering(8) 00:17:08.065 fused_ordering(9) 00:17:08.065 fused_ordering(10) 00:17:08.065 fused_ordering(11) 00:17:08.065 fused_ordering(12) 00:17:08.065 fused_ordering(13) 00:17:08.065 fused_ordering(14) 00:17:08.065 fused_ordering(15) 00:17:08.065 fused_ordering(16) 00:17:08.065 fused_ordering(17) 00:17:08.065 fused_ordering(18) 00:17:08.065 fused_ordering(19) 00:17:08.065 fused_ordering(20) 00:17:08.065 fused_ordering(21) 00:17:08.065 fused_ordering(22) 00:17:08.065 fused_ordering(23) 00:17:08.065 fused_ordering(24) 00:17:08.065 fused_ordering(25) 00:17:08.065 fused_ordering(26) 00:17:08.065 fused_ordering(27) 00:17:08.065 fused_ordering(28) 00:17:08.065 fused_ordering(29) 00:17:08.065 fused_ordering(30) 00:17:08.065 fused_ordering(31) 00:17:08.065 fused_ordering(32) 00:17:08.065 fused_ordering(33) 00:17:08.065 fused_ordering(34) 00:17:08.065 fused_ordering(35) 00:17:08.065 fused_ordering(36) 00:17:08.065 fused_ordering(37) 00:17:08.065 fused_ordering(38) 00:17:08.065 fused_ordering(39) 00:17:08.065 fused_ordering(40) 00:17:08.065 fused_ordering(41) 00:17:08.065 fused_ordering(42) 00:17:08.065 fused_ordering(43) 00:17:08.065 fused_ordering(44) 00:17:08.065 fused_ordering(45) 00:17:08.065 fused_ordering(46) 00:17:08.065 fused_ordering(47) 00:17:08.065 fused_ordering(48) 00:17:08.065 fused_ordering(49) 00:17:08.065 fused_ordering(50) 00:17:08.065 fused_ordering(51) 00:17:08.065 fused_ordering(52) 00:17:08.065 fused_ordering(53) 00:17:08.065 fused_ordering(54) 00:17:08.065 fused_ordering(55) 00:17:08.065 fused_ordering(56) 00:17:08.065 fused_ordering(57) 00:17:08.065 fused_ordering(58) 00:17:08.065 fused_ordering(59) 00:17:08.065 fused_ordering(60) 00:17:08.065 fused_ordering(61) 00:17:08.065 fused_ordering(62) 00:17:08.065 fused_ordering(63) 00:17:08.065 fused_ordering(64) 00:17:08.065 fused_ordering(65) 00:17:08.065 fused_ordering(66) 00:17:08.065 fused_ordering(67) 00:17:08.065 fused_ordering(68) 00:17:08.065 fused_ordering(69) 00:17:08.065 fused_ordering(70) 00:17:08.065 fused_ordering(71) 00:17:08.065 fused_ordering(72) 00:17:08.065 fused_ordering(73) 00:17:08.065 fused_ordering(74) 00:17:08.065 fused_ordering(75) 00:17:08.065 fused_ordering(76) 00:17:08.065 fused_ordering(77) 00:17:08.065 fused_ordering(78) 00:17:08.065 fused_ordering(79) 00:17:08.065 fused_ordering(80) 00:17:08.065 fused_ordering(81) 00:17:08.065 fused_ordering(82) 00:17:08.065 fused_ordering(83) 00:17:08.065 fused_ordering(84) 00:17:08.065 fused_ordering(85) 00:17:08.065 fused_ordering(86) 00:17:08.065 fused_ordering(87) 00:17:08.065 fused_ordering(88) 00:17:08.065 fused_ordering(89) 00:17:08.065 fused_ordering(90) 00:17:08.065 fused_ordering(91) 00:17:08.065 fused_ordering(92) 00:17:08.065 fused_ordering(93) 00:17:08.065 fused_ordering(94) 00:17:08.065 fused_ordering(95) 00:17:08.065 fused_ordering(96) 00:17:08.065 fused_ordering(97) 00:17:08.065 fused_ordering(98) 
00:17:08.065 fused_ordering(99) ... fused_ordering(958) 00:17:09.417 [860 consecutive fused_ordering progress lines collapsed; the counter runs unbroken from 99 through 958 with timestamps advancing from 00:17:08.065 to 00:17:09.417 and no interleaved errors]
00:17:09.417 fused_ordering(959) 00:17:09.417 fused_ordering(960) 00:17:09.417 fused_ordering(961) 00:17:09.417 fused_ordering(962) 00:17:09.417 fused_ordering(963) 00:17:09.417 fused_ordering(964) 00:17:09.417 fused_ordering(965) 00:17:09.417 fused_ordering(966) 00:17:09.417 fused_ordering(967) 00:17:09.417 fused_ordering(968) 00:17:09.417 fused_ordering(969) 00:17:09.417 fused_ordering(970) 00:17:09.417 fused_ordering(971) 00:17:09.417 fused_ordering(972) 00:17:09.417 fused_ordering(973) 00:17:09.417 fused_ordering(974) 00:17:09.417 fused_ordering(975) 00:17:09.417 fused_ordering(976) 00:17:09.417 fused_ordering(977) 00:17:09.417 fused_ordering(978) 00:17:09.417 fused_ordering(979) 00:17:09.417 fused_ordering(980) 00:17:09.417 fused_ordering(981) 00:17:09.417 fused_ordering(982) 00:17:09.417 fused_ordering(983) 00:17:09.417 fused_ordering(984) 00:17:09.417 fused_ordering(985) 00:17:09.417 fused_ordering(986) 00:17:09.417 fused_ordering(987) 00:17:09.417 fused_ordering(988) 00:17:09.417 fused_ordering(989) 00:17:09.417 fused_ordering(990) 00:17:09.417 fused_ordering(991) 00:17:09.417 fused_ordering(992) 00:17:09.417 fused_ordering(993) 00:17:09.417 fused_ordering(994) 00:17:09.417 fused_ordering(995) 00:17:09.417 fused_ordering(996) 00:17:09.417 fused_ordering(997) 00:17:09.417 fused_ordering(998) 00:17:09.417 fused_ordering(999) 00:17:09.417 fused_ordering(1000) 00:17:09.417 fused_ordering(1001) 00:17:09.417 fused_ordering(1002) 00:17:09.417 fused_ordering(1003) 00:17:09.417 fused_ordering(1004) 00:17:09.417 fused_ordering(1005) 00:17:09.417 fused_ordering(1006) 00:17:09.417 fused_ordering(1007) 00:17:09.417 fused_ordering(1008) 00:17:09.417 fused_ordering(1009) 00:17:09.417 fused_ordering(1010) 00:17:09.417 fused_ordering(1011) 00:17:09.417 fused_ordering(1012) 00:17:09.417 fused_ordering(1013) 00:17:09.417 fused_ordering(1014) 00:17:09.417 fused_ordering(1015) 00:17:09.417 fused_ordering(1016) 00:17:09.417 fused_ordering(1017) 00:17:09.417 fused_ordering(1018) 00:17:09.417 fused_ordering(1019) 00:17:09.417 fused_ordering(1020) 00:17:09.417 fused_ordering(1021) 00:17:09.417 fused_ordering(1022) 00:17:09.417 fused_ordering(1023) 00:17:09.417 05:33:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:17:09.417 05:33:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:17:09.417 05:33:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:09.417 05:33:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:17:09.417 05:33:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:09.417 05:33:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:17:09.417 05:33:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:09.417 05:33:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:09.417 rmmod nvme_tcp 00:17:09.417 rmmod nvme_fabrics 00:17:09.417 rmmod nvme_keyring 00:17:09.417 05:33:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:09.417 05:33:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:17:09.417 05:33:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:17:09.418 05:33:09 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 284260 ']' 00:17:09.418 05:33:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 284260 00:17:09.418 05:33:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 284260 ']' 00:17:09.418 05:33:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 284260 00:17:09.418 05:33:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:17:09.418 05:33:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:09.418 05:33:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 284260 00:17:09.418 05:33:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:09.418 05:33:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:09.418 05:33:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 284260' 00:17:09.418 killing process with pid 284260 00:17:09.418 05:33:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 284260 00:17:09.418 05:33:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 284260 00:17:09.677 05:33:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:09.677 05:33:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:09.677 05:33:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:09.677 05:33:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:17:09.677 05:33:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:17:09.677 05:33:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:09.677 05:33:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:17:09.677 05:33:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:09.677 05:33:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:09.677 05:33:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:09.677 05:33:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:09.677 05:33:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:11.585 05:33:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:11.585 00:17:11.585 real 0m10.299s 00:17:11.585 user 0m4.832s 00:17:11.585 sys 0m5.372s 00:17:11.585 05:33:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:11.585 05:33:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:11.585 ************************************ 00:17:11.585 END TEST nvmf_fused_ordering 00:17:11.585 
************************************ 00:17:11.585 05:33:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:17:11.585 05:33:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:11.585 05:33:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:11.585 05:33:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:11.585 ************************************ 00:17:11.585 START TEST nvmf_ns_masking 00:17:11.585 ************************************ 00:17:11.585 05:33:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:17:11.846 * Looking for test storage... 00:17:11.846 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:11.846 05:33:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:11.846 05:33:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lcov --version 00:17:11.846 05:33:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:11.846 05:33:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:11.846 05:33:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:11.846 05:33:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:11.846 05:33:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:11.846 05:33:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:17:11.846 05:33:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:17:11.846 05:33:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:17:11.846 05:33:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:17:11.846 05:33:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:17:11.846 05:33:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:17:11.846 05:33:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:17:11.846 05:33:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:11.846 05:33:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:17:11.846 05:33:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:17:11.846 05:33:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:11.846 05:33:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:11.846 05:33:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:17:11.846 05:33:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:17:11.846 05:33:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:11.846 05:33:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:17:11.846 05:33:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:17:11.846 05:33:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:17:11.846 05:33:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:17:11.846 05:33:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:11.846 05:33:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:17:11.846 05:33:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:17:11.846 05:33:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:11.846 05:33:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:11.846 05:33:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:17:11.846 05:33:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:11.846 05:33:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:11.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:11.846 --rc genhtml_branch_coverage=1 00:17:11.846 --rc genhtml_function_coverage=1 00:17:11.846 --rc genhtml_legend=1 00:17:11.846 --rc geninfo_all_blocks=1 00:17:11.846 --rc geninfo_unexecuted_blocks=1 00:17:11.846 00:17:11.846 ' 00:17:11.846 05:33:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:11.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:11.846 --rc genhtml_branch_coverage=1 00:17:11.846 --rc genhtml_function_coverage=1 00:17:11.846 --rc genhtml_legend=1 00:17:11.846 --rc geninfo_all_blocks=1 00:17:11.846 --rc geninfo_unexecuted_blocks=1 00:17:11.846 00:17:11.846 ' 00:17:11.846 05:33:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:11.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:11.846 --rc genhtml_branch_coverage=1 00:17:11.846 --rc genhtml_function_coverage=1 00:17:11.846 --rc genhtml_legend=1 00:17:11.846 --rc geninfo_all_blocks=1 00:17:11.846 --rc geninfo_unexecuted_blocks=1 00:17:11.846 00:17:11.846 ' 00:17:11.846 05:33:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:11.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:11.846 --rc genhtml_branch_coverage=1 00:17:11.846 --rc genhtml_function_coverage=1 00:17:11.846 --rc genhtml_legend=1 00:17:11.846 --rc geninfo_all_blocks=1 00:17:11.846 --rc geninfo_unexecuted_blocks=1 00:17:11.846 00:17:11.846 ' 00:17:11.846 05:33:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:11.846 05:33:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@7 -- # uname -s 00:17:11.846 05:33:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:11.846 05:33:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:11.846 05:33:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:11.846 05:33:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:11.846 05:33:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:11.846 05:33:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:11.846 05:33:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:11.846 05:33:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:11.846 05:33:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:11.846 05:33:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:11.846 05:33:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:11.846 05:33:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:17:11.846 05:33:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:11.846 05:33:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:11.846 05:33:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:11.846 05:33:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:11.846 05:33:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:11.846 05:33:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:17:11.846 05:33:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:11.846 05:33:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:11.846 05:33:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:11.846 05:33:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:11.846 05:33:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:11.846 05:33:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:11.846 05:33:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:17:11.847 05:33:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:11.847 05:33:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:17:11.847 05:33:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:11.847 05:33:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:11.847 05:33:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:11.847 05:33:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:11.847 05:33:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:11.847 05:33:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:11.847 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:11.847 05:33:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:11.847 05:33:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:11.847 05:33:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:11.847 05:33:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:11.847 05:33:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:17:11.847 05:33:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:17:11.847 05:33:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:17:11.847 05:33:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=869ea2ad-2694-485e-b950-acfbbee75c59 00:17:11.847 05:33:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:17:11.847 05:33:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=37882f33-fdc2-4dbd-a86e-5ab48830fc6b 00:17:11.847 05:33:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:17:11.847 05:33:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:17:11.847 05:33:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:17:11.847 05:33:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:17:11.847 05:33:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=a480616f-3505-410b-a152-82336d9fed75 00:17:11.847 05:33:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:17:11.847 05:33:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:11.847 05:33:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:11.847 05:33:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:11.847 05:33:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:11.847 05:33:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:11.847 05:33:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:11.847 05:33:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:11.847 05:33:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:11.847 05:33:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:11.847 05:33:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:11.847 05:33:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:17:11.847 05:33:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:18.420 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:18.420 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:17:18.420 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:18.420 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:18.420 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:18.420 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:18.420 05:33:17 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:18.420 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:17:18.420 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:18.420 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:17:18.420 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:17:18.420 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:17:18.420 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:17:18.420 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:17:18.420 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:17:18.420 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:18.420 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:18.420 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:18.420 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:18.420 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:18.420 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:18.420 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:18.420 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:18.420 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:18.420 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:18.420 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:18.420 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:18.420 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:18.420 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:18.420 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:18.420 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:18.420 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:18.420 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:18.420 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:18.420 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:18.420 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:18.420 05:33:17 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:18.420 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:18.420 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:18.420 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:18.420 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:18.420 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:18.420 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:18.420 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:18.420 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:18.420 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:18.420 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:18.420 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:18.420 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:18.420 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:18.420 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:18.420 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:18.421 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:18.421 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:18.421 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:18.421 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:18.421 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:18.421 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:18.421 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:18.421 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:17:18.421 Found net devices under 0000:af:00.0: cvl_0_0 00:17:18.421 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:18.421 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:18.421 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:18.421 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:18.421 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:18.421 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 
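For reference, the device discovery above reduces to a small sysfs walk: each candidate PCI function is mapped to its kernel net device through /sys/bus/pci/devices/$pci/net/, which is exactly the glob the harness expands. A minimal standalone sketch follows; the operstate read is an assumption about where the harness's "up == up" comparison gets its value:

    # Map a PCI function to the net device(s) beneath it, mirroring the
    # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) expansion in the log.
    pci=0000:af:00.0
    for dev in /sys/bus/pci/devices/"$pci"/net/*; do
        name=${dev##*/}                 # e.g. cvl_0_0
        state=$(cat "$dev/operstate")   # "up" when the link is usable (assumed source of the up check)
        echo "Found net devices under $pci: $name ($state)"
    done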
00:17:18.421 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:18.421 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:18.421 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:18.421 Found net devices under 0000:af:00.1: cvl_0_1 00:17:18.421 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:18.421 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:18.421 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:17:18.421 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:18.421 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:18.421 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:18.421 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:18.421 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:18.421 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:18.421 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:18.421 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:18.421 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:18.421 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:18.421 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:18.421 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:18.421 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:18.421 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:18.421 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:18.421 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:18.421 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:18.421 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:18.421 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:18.421 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:18.421 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:18.421 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:18.421 05:33:17 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:18.421 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:18.421 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:18.421 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:18.421 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:18.421 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.331 ms 00:17:18.421 00:17:18.421 --- 10.0.0.2 ping statistics --- 00:17:18.421 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:18.421 rtt min/avg/max/mdev = 0.331/0.331/0.331/0.000 ms 00:17:18.421 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:18.421 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:18.421 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.198 ms 00:17:18.421 00:17:18.421 --- 10.0.0.1 ping statistics --- 00:17:18.421 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:18.421 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:17:18.421 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:18.421 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:17:18.421 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:18.421 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:18.421 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:18.421 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:18.421 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:18.421 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:18.421 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:18.421 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:17:18.421 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:18.421 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:18.421 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:18.421 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=288572 00:17:18.421 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 288572 00:17:18.421 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:17:18.421 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 288572 ']' 00:17:18.421 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:18.421 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:18.421 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:18.421 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:18.421 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:18.421 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:18.421 [2024-12-13 05:33:17.744190] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:17:18.421 [2024-12-13 05:33:17.744237] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:18.421 [2024-12-13 05:33:17.824182] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:18.421 [2024-12-13 05:33:17.845977] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:18.421 [2024-12-13 05:33:17.846012] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:18.421 [2024-12-13 05:33:17.846019] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:18.421 [2024-12-13 05:33:17.846025] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:18.421 [2024-12-13 05:33:17.846030] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
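
The nvmf_tcp_init sequence above gives the target its own network namespace (cvl_0_0_ns_spdk) so initiator and target can exchange real TCP traffic on one host: the target-side interface cvl_0_0 is moved into the namespace with 10.0.0.2/24, the initiator keeps cvl_0_1 with 10.0.0.1/24, and both directions are verified with a one-packet ping. A minimal sketch of that plumbing, using only the interface names and addresses taken from this run:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP port; the harness tags the rule with an SPDK_NVMF
    # comment so teardown can strip it via iptables-save | grep -v SPDK_NVMF
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator

Every NVMF_TARGET_NS_CMD-prefixed command later in the log, including nvmf_tgt itself, then runs inside this namespace.
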
00:17:18.421 [2024-12-13 05:33:17.846513] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:18.421 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:18.421 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:17:18.421 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:18.421 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:18.421 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:18.421 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:18.421 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:18.421 [2024-12-13 05:33:18.141196] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:18.421 05:33:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:17:18.421 05:33:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:17:18.421 05:33:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:17:18.421 Malloc1 00:17:18.421 05:33:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:17:18.680 Malloc2 00:17:18.680 05:33:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:18.939 05:33:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:17:19.197 05:33:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:19.197 [2024-12-13 05:33:19.141738] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:19.197 05:33:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:17:19.197 05:33:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I a480616f-3505-410b-a152-82336d9fed75 -a 10.0.0.2 -s 4420 -i 4 00:17:19.456 05:33:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:17:19.456 05:33:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:17:19.456 05:33:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:19.456 05:33:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:19.456 
05:33:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:17:21.360 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:21.360 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:21.360 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:21.360 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:21.360 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:21.360 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:17:21.361 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:17:21.361 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:21.361 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:17:21.361 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:17:21.361 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:17:21.361 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:21.361 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:21.361 [ 0]:0x1 00:17:21.361 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:21.361 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:21.620 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=406a08248ab346a3b87e216f1147b2cd 00:17:21.620 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 406a08248ab346a3b87e216f1147b2cd != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:21.620 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:17:21.620 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:17:21.620 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:21.620 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:21.620 [ 0]:0x1 00:17:21.620 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:21.620 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:21.620 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=406a08248ab346a3b87e216f1147b2cd 00:17:21.620 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 406a08248ab346a3b87e216f1147b2cd != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:21.620 05:33:21 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:17:21.620 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:21.620 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:21.879 [ 1]:0x2 00:17:21.879 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:21.879 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:21.879 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f27be9b092c44f68927bd4a7e96ef20f 00:17:21.879 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f27be9b092c44f68927bd4a7e96ef20f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:21.879 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:17:21.879 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:22.139 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:22.139 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:22.139 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:17:22.398 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:17:22.398 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I a480616f-3505-410b-a152-82336d9fed75 -a 10.0.0.2 -s 4420 -i 4 00:17:22.657 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:17:22.657 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:17:22.657 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:22.657 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:17:22.657 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:17:22.657 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:17:24.564 05:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:24.564 05:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:24.564 05:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:24.564 05:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:24.564 05:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:24.564 05:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # 
return 0 00:17:24.564 05:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:17:24.564 05:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:24.824 05:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:17:24.824 05:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:17:24.824 05:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:17:24.824 05:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:24.824 05:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:17:24.824 05:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:17:24.824 05:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:24.824 05:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:17:24.824 05:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:24.824 05:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:17:24.824 05:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:24.824 05:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:24.824 05:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:24.824 05:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:24.824 05:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:24.824 05:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:24.824 05:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:24.824 05:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:24.824 05:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:24.824 05:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:24.824 05:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:17:24.824 05:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:24.824 05:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:24.824 [ 0]:0x2 00:17:24.824 05:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:24.824 05:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:24.824 05:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=f27be9b092c44f68927bd4a7e96ef20f 00:17:24.824 05:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f27be9b092c44f68927bd4a7e96ef20f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:24.824 05:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:25.084 05:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:17:25.084 05:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:25.084 05:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:25.084 [ 0]:0x1 00:17:25.084 05:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:25.084 05:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:25.084 05:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=406a08248ab346a3b87e216f1147b2cd 00:17:25.084 05:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 406a08248ab346a3b87e216f1147b2cd != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:25.084 05:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:17:25.084 05:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:25.084 05:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:25.084 [ 1]:0x2 00:17:25.084 05:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:25.084 05:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:25.084 05:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f27be9b092c44f68927bd4a7e96ef20f 00:17:25.084 05:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f27be9b092c44f68927bd4a7e96ef20f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:25.084 05:33:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:25.344 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:17:25.344 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:25.344 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:17:25.344 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:17:25.344 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:25.344 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:17:25.344 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:25.344 05:33:25 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:17:25.344 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:25.344 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:25.344 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:25.344 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:25.344 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:25.344 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:25.344 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:25.345 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:25.345 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:25.345 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:25.345 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:17:25.345 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:25.345 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:25.345 [ 0]:0x2 00:17:25.345 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:25.345 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:25.345 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f27be9b092c44f68927bd4a7e96ef20f 00:17:25.345 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f27be9b092c44f68927bd4a7e96ef20f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:25.345 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:17:25.345 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:25.345 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:25.345 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:25.603 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:17:25.603 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I a480616f-3505-410b-a152-82336d9fed75 -a 10.0.0.2 -s 4420 -i 4 00:17:25.863 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:17:25.863 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:17:25.863 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:25.863 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:17:25.863 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:17:25.863 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:17:27.770 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:27.770 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:27.770 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:27.770 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:17:27.770 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:27.770 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:17:27.770 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:17:27.770 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:27.770 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:17:27.770 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:17:27.770 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:17:27.770 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:27.770 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:27.770 [ 0]:0x1 00:17:27.770 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:27.770 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:28.029 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=406a08248ab346a3b87e216f1147b2cd 00:17:28.029 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 406a08248ab346a3b87e216f1147b2cd != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:28.029 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:17:28.029 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:28.029 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:28.029 [ 1]:0x2 00:17:28.029 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:28.029 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:28.029 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f27be9b092c44f68927bd4a7e96ef20f 00:17:28.029 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f27be9b092c44f68927bd4a7e96ef20f != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:28.029 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:28.029 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:17:28.029 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:28.029 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:17:28.029 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:17:28.029 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:28.029 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:17:28.029 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:28.029 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:17:28.029 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:28.030 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:28.289 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:28.289 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:28.289 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:28.289 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:28.289 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:28.289 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:28.289 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:28.289 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:28.289 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:17:28.289 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:28.289 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:28.289 [ 0]:0x2 00:17:28.289 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:28.289 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:28.289 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f27be9b092c44f68927bd4a7e96ef20f 00:17:28.290 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f27be9b092c44f68927bd4a7e96ef20f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:28.290 05:33:28 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:28.290 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:28.290 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:28.290 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:28.290 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:28.290 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:28.290 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:28.290 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:28.290 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:28.290 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:28.290 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:28.290 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:28.550 [2024-12-13 05:33:28.367582] nvmf_rpc.c:1873:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:17:28.550 request: 00:17:28.550 { 00:17:28.550 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:28.550 "nsid": 2, 00:17:28.550 "host": "nqn.2016-06.io.spdk:host1", 00:17:28.550 "method": "nvmf_ns_remove_host", 00:17:28.550 "req_id": 1 00:17:28.550 } 00:17:28.550 Got JSON-RPC error response 00:17:28.550 response: 00:17:28.550 { 00:17:28.550 "code": -32602, 00:17:28.550 "message": "Invalid parameters" 00:17:28.550 } 00:17:28.550 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:28.550 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:28.550 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:28.550 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:28.550 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:17:28.550 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:28.550 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:17:28.550 05:33:28 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:17:28.550 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:28.550 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:17:28.550 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:28.550 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:17:28.550 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:28.550 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:28.550 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:28.550 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:28.550 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:28.550 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:28.550 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:28.550 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:28.550 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:28.550 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:28.550 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:17:28.550 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:28.550 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:28.550 [ 0]:0x2 00:17:28.550 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:28.550 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:28.550 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f27be9b092c44f68927bd4a7e96ef20f 00:17:28.550 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f27be9b092c44f68927bd4a7e96ef20f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:28.550 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:17:28.550 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:28.550 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:28.550 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=290375 00:17:28.550 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:17:28.550 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 290375 
/var/tmp/host.sock 00:17:28.550 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:17:28.550 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 290375 ']' 00:17:28.550 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:17:28.550 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:28.550 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:17:28.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:17:28.550 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:28.550 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:28.810 [2024-12-13 05:33:28.601033] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:17:28.810 [2024-12-13 05:33:28.601076] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid290375 ] 00:17:28.810 [2024-12-13 05:33:28.675570] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:28.810 [2024-12-13 05:33:28.698069] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:17:29.070 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:29.070 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:17:29.070 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:29.330 05:33:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:17:29.330 05:33:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 869ea2ad-2694-485e-b950-acfbbee75c59 00:17:29.330 05:33:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:17:29.330 05:33:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 869EA2AD2694485EB950ACFBBEE75C59 -i 00:17:29.590 05:33:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 37882f33-fdc2-4dbd-a86e-5ab48830fc6b 00:17:29.590 05:33:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:17:29.590 05:33:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 37882F33FDC24DBDA86E5AB48830FC6B -i 00:17:29.849 05:33:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:30.108 05:33:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:17:30.108 05:33:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:17:30.108 05:33:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:17:30.677 nvme0n1 00:17:30.677 05:33:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:17:30.677 05:33:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:17:30.937 nvme1n2 00:17:30.937 05:33:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:17:30.937 05:33:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:17:30.937 05:33:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:17:30.937 05:33:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:17:30.937 05:33:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:17:31.196 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:17:31.196 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:17:31.196 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:17:31.196 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:17:31.455 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 869ea2ad-2694-485e-b950-acfbbee75c59 == \8\6\9\e\a\2\a\d\-\2\6\9\4\-\4\8\5\e\-\b\9\5\0\-\a\c\f\b\b\e\e\7\5\c\5\9 ]] 00:17:31.455 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:17:31.455 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:17:31.455 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:17:31.715 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 
37882f33-fdc2-4dbd-a86e-5ab48830fc6b == \3\7\8\8\2\f\3\3\-\f\d\c\2\-\4\d\b\d\-\a\8\6\e\-\5\a\b\4\8\8\3\0\f\c\6\b ]] 00:17:31.715 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:31.715 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:17:31.974 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 869ea2ad-2694-485e-b950-acfbbee75c59 00:17:31.974 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:17:31.974 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 869EA2AD2694485EB950ACFBBEE75C59 00:17:31.974 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:31.974 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 869EA2AD2694485EB950ACFBBEE75C59 00:17:31.974 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:31.974 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:31.974 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:31.974 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:31.974 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:31.974 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:31.974 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:31.974 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:31.974 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 869EA2AD2694485EB950ACFBBEE75C59 00:17:32.234 [2024-12-13 05:33:32.057716] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:17:32.234 [2024-12-13 05:33:32.057744] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:17:32.234 [2024-12-13 05:33:32.057752] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.234 request: 00:17:32.234 { 00:17:32.234 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:32.234 "namespace": { 00:17:32.234 "bdev_name": 
"invalid", 00:17:32.234 "nsid": 1, 00:17:32.234 "nguid": "869EA2AD2694485EB950ACFBBEE75C59", 00:17:32.234 "no_auto_visible": false, 00:17:32.234 "hide_metadata": false 00:17:32.234 }, 00:17:32.234 "method": "nvmf_subsystem_add_ns", 00:17:32.234 "req_id": 1 00:17:32.234 } 00:17:32.234 Got JSON-RPC error response 00:17:32.234 response: 00:17:32.234 { 00:17:32.234 "code": -32602, 00:17:32.234 "message": "Invalid parameters" 00:17:32.234 } 00:17:32.234 05:33:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:32.234 05:33:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:32.234 05:33:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:32.234 05:33:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:32.234 05:33:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 869ea2ad-2694-485e-b950-acfbbee75c59 00:17:32.234 05:33:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:17:32.234 05:33:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 869EA2AD2694485EB950ACFBBEE75C59 -i 00:17:32.492 05:33:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:17:34.417 05:33:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:17:34.417 05:33:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:17:34.417 05:33:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:17:34.676 05:33:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:17:34.676 05:33:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 290375 00:17:34.676 05:33:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 290375 ']' 00:17:34.676 05:33:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 290375 00:17:34.676 05:33:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:17:34.676 05:33:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:34.676 05:33:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 290375 00:17:34.676 05:33:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:34.676 05:33:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:34.676 05:33:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 290375' 00:17:34.676 killing process with pid 290375 00:17:34.676 05:33:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 290375 00:17:34.676 05:33:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 290375 00:17:34.936 05:33:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:35.196 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:17:35.196 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:17:35.196 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:35.196 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:17:35.196 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:35.196 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:17:35.196 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:35.196 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:35.196 rmmod nvme_tcp 00:17:35.196 rmmod nvme_fabrics 00:17:35.196 rmmod nvme_keyring 00:17:35.196 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:35.196 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:17:35.196 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:17:35.196 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 288572 ']' 00:17:35.196 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 288572 00:17:35.196 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 288572 ']' 00:17:35.196 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 288572 00:17:35.196 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:17:35.196 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:35.196 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 288572 00:17:35.196 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:35.196 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:35.196 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 288572' 00:17:35.196 killing process with pid 288572 00:17:35.196 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 288572 00:17:35.196 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 288572 00:17:35.456 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:35.456 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:35.456 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:35.456 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:17:35.456 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:17:35.456 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:35.456 
05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:17:35.456 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:35.456 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:35.456 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:35.456 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:35.456 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:37.997 05:33:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:37.997 00:17:37.997 real 0m25.825s 00:17:37.997 user 0m30.983s 00:17:37.997 sys 0m6.943s 00:17:37.997 05:33:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:37.997 05:33:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:37.997 ************************************ 00:17:37.997 END TEST nvmf_ns_masking 00:17:37.997 ************************************ 00:17:37.997 05:33:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:17:37.997 05:33:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:17:37.997 05:33:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:37.997 05:33:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:37.997 05:33:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:37.997 ************************************ 00:17:37.997 START TEST nvmf_nvme_cli 00:17:37.997 ************************************ 00:17:37.998 05:33:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:17:37.998 * Looking for test storage... 
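
The ns_masking test that finishes above drives namespace visibility purely over JSON-RPC: a namespace created with --no-auto-visible stays hidden from every host until an explicit nvmf_ns_add_host, and nvmf_ns_remove_host masks it again; misuse (removing a host from an auto-visible namespace, or adding a namespace backed by a nonexistent bdev) fails with the -32602 "Invalid parameters" responses seen in the trace. A condensed sketch of the flow, with rpc.py standing in for the full /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py path used throughout the log:

    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
    rpc.py nvmf_ns_add_host    nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1   # NSID 1 visible to host1
    rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1   # masked again
    # initiator-side check, as in the ns_is_visible() helper:
    nvme list-ns /dev/nvme0 | grep 0x1
    nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid   # all-zero NGUID => masked

That all-zero NGUID is exactly what the [[ $nguid != \0\0...\0 ]] comparisons in the trace assert against.
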
00:17:37.998 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:37.998 05:33:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:37.998 05:33:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lcov --version 00:17:37.998 05:33:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:37.998 05:33:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:37.998 05:33:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:37.998 05:33:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:37.998 05:33:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:37.998 05:33:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:17:37.998 05:33:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:17:37.998 05:33:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:17:37.998 05:33:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:17:37.998 05:33:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:17:37.998 05:33:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:17:37.998 05:33:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:17:37.998 05:33:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:37.998 05:33:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:17:37.998 05:33:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:17:37.998 05:33:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:37.998 05:33:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:37.998 05:33:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:17:37.998 05:33:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:17:37.998 05:33:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:37.998 05:33:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:17:37.998 05:33:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:17:37.998 05:33:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:17:37.998 05:33:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:17:37.998 05:33:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:37.998 05:33:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:17:37.998 05:33:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:17:37.998 05:33:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:37.998 05:33:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:37.998 05:33:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:17:37.998 05:33:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:37.998 05:33:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:37.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:37.998 --rc genhtml_branch_coverage=1 00:17:37.998 --rc genhtml_function_coverage=1 00:17:37.998 --rc genhtml_legend=1 00:17:37.998 --rc geninfo_all_blocks=1 00:17:37.998 --rc geninfo_unexecuted_blocks=1 00:17:37.998 00:17:37.998 ' 00:17:37.998 05:33:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:37.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:37.998 --rc genhtml_branch_coverage=1 00:17:37.998 --rc genhtml_function_coverage=1 00:17:37.998 --rc genhtml_legend=1 00:17:37.998 --rc geninfo_all_blocks=1 00:17:37.998 --rc geninfo_unexecuted_blocks=1 00:17:37.998 00:17:37.998 ' 00:17:37.998 05:33:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:37.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:37.998 --rc genhtml_branch_coverage=1 00:17:37.998 --rc genhtml_function_coverage=1 00:17:37.998 --rc genhtml_legend=1 00:17:37.998 --rc geninfo_all_blocks=1 00:17:37.998 --rc geninfo_unexecuted_blocks=1 00:17:37.998 00:17:37.998 ' 00:17:37.998 05:33:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:37.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:37.998 --rc genhtml_branch_coverage=1 00:17:37.998 --rc genhtml_function_coverage=1 00:17:37.998 --rc genhtml_legend=1 00:17:37.998 --rc geninfo_all_blocks=1 00:17:37.998 --rc geninfo_unexecuted_blocks=1 00:17:37.998 00:17:37.998 ' 00:17:37.998 05:33:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:37.998 05:33:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 
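The lt/cmp_versions trace running through the last two chunks is scripts/common.sh deciding whether the installed lcov (1.15) predates 2, so it can pick compatible --rc option spellings: both version strings are split on '.', '-' and ':' and compared field by field, with missing fields treated as 0. A condensed sketch of that comparison, reconstructed from the trace (the real helper also validates each field through a decimal check; this simplification assumes purely numeric fields and a hypothetical wrapper name):

    cmp_lt() {    # hypothetical stand-in for: lt VER1 VER2
        local -a ver1 ver2
        local v IFS='.-:'
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        # walk the longer of the two field lists
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # first differing field decides
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1    # equal versions are not strictly less
    }
    cmp_lt 1.15 2 && echo "old lcov: use legacy --rc spellings"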
00:17:37.998 05:33:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:37.998 05:33:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:37.998 05:33:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:37.998 05:33:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:37.998 05:33:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:37.998 05:33:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:37.998 05:33:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:37.998 05:33:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:37.998 05:33:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:37.998 05:33:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:37.998 05:33:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:37.998 05:33:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:17:37.998 05:33:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:37.998 05:33:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:37.998 05:33:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:37.998 05:33:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:37.998 05:33:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:37.998 05:33:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:17:37.998 05:33:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:37.998 05:33:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:37.998 05:33:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:37.998 05:33:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.998 05:33:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.998 05:33:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.998 05:33:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:17:37.998 05:33:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.998 05:33:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:17:37.999 05:33:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:37.999 05:33:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:37.999 05:33:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:37.999 05:33:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:37.999 05:33:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:37.999 05:33:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:37.999 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:37.999 05:33:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:37.999 05:33:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:37.999 05:33:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:37.999 05:33:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:37.999 05:33:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:37.999 05:33:37 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:17:37.999 05:33:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:17:37.999 05:33:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:37.999 05:33:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:37.999 05:33:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:37.999 05:33:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:37.999 05:33:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:37.999 05:33:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:37.999 05:33:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:37.999 05:33:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:37.999 05:33:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:37.999 05:33:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:37.999 05:33:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:17:37.999 05:33:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:44.577 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:44.577 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:17:44.577 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:44.577 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:44.577 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:44.577 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:44.577 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:44.577 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:17:44.577 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:44.577 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:17:44.577 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:17:44.577 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:17:44.577 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:17:44.577 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:17:44.577 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:17:44.577 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:44.577 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:44.577 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:44.577 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:44.577 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:44.577 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:44.577 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:44.577 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:44.577 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:44.577 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:44.577 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:44.577 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:44.577 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:44.577 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:44.577 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:44.577 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:44.577 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:44.578 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:44.578 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:44.578 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:44.578 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:44.578 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:44.578 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:44.578 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:44.578 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:44.578 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:44.578 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:44.578 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:44.578 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:44.578 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:44.578 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:44.578 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:44.578 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:44.578 
05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:44.578 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:44.578 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:44.578 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:44.578 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:44.578 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:44.578 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:44.578 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:44.578 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:44.578 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:44.578 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:44.578 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:17:44.578 Found net devices under 0000:af:00.0: cvl_0_0 00:17:44.578 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:44.578 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:44.578 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:44.578 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:44.578 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:44.578 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:44.578 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:44.578 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:44.578 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:44.578 Found net devices under 0000:af:00.1: cvl_0_1 00:17:44.578 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:44.578 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:44.578 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:17:44.578 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:44.578 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:44.578 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:44.578 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:44.578 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:44.578 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:44.578 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:44.578 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:44.578 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:44.578 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:44.578 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:44.578 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:44.578 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:44.578 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:44.578 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:44.578 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:44.578 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:44.578 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:44.578 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:44.578 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:44.578 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:44.578 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:44.578 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:44.578 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:44.578 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:44.578 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:44.578 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:44.578 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.366 ms 00:17:44.578 00:17:44.578 --- 10.0.0.2 ping statistics --- 00:17:44.578 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:44.578 rtt min/avg/max/mdev = 0.366/0.366/0.366/0.000 ms 00:17:44.578 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:44.578 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:44.578 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.222 ms 00:17:44.578 00:17:44.578 --- 10.0.0.1 ping statistics --- 00:17:44.578 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:44.578 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:17:44.578 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:44.578 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:17:44.578 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:44.578 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:44.578 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:44.578 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:44.578 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:44.578 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:44.578 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:44.578 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:17:44.578 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:44.578 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:44.578 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:44.579 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=294936 00:17:44.579 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 294936 00:17:44.579 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:44.579 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 294936 ']' 00:17:44.579 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:44.579 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:44.579 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:44.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:44.579 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:44.579 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:44.579 [2024-12-13 05:33:43.675256] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
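The device discovery and plumbing traced since this test began reduce to a short recipe: the two ice-driven e810 ports at 0000:af:00.0 and 0000:af:00.1 surface as cvl_0_0 (target side, moved into its own namespace) and cvl_0_1 (initiator side), and the two pings prove connectivity in both directions. Restated as plain commands with the names and addresses from this run:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target NIC into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP port; the full comment in the trace embeds the rule text,
    # but SPDK_NVMF is the part the teardown greps for
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment SPDK_NVMF
    ping -c 1 10.0.0.2                                    # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target -> initiator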
00:17:44.579 [2024-12-13 05:33:43.675297] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:44.579 [2024-12-13 05:33:43.753739] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:44.579 [2024-12-13 05:33:43.777744] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:44.579 [2024-12-13 05:33:43.777781] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:44.579 [2024-12-13 05:33:43.777789] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:44.579 [2024-12-13 05:33:43.777794] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:44.579 [2024-12-13 05:33:43.777799] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:44.579 [2024-12-13 05:33:43.779100] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:17:44.579 [2024-12-13 05:33:43.779209] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:17:44.579 [2024-12-13 05:33:43.779317] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:44.579 [2024-12-13 05:33:43.779318] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:17:44.579 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:44.579 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:17:44.579 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:44.579 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:44.579 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:44.579 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:44.579 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:44.579 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.579 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:44.579 [2024-12-13 05:33:43.914975] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:44.579 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.579 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:44.579 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.579 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:44.579 Malloc0 00:17:44.579 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.579 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:17:44.579 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
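With nvmf_tgt now running inside the namespace and listening on /var/tmp/spdk.sock, the test drives it entirely over JSON-RPC: the transport and Malloc0 are created just above, and the second bdev, the subsystem, its namespaces, and both listeners follow immediately below. Collected in one place, the rpc_cmd sequence from this run is (rpc.py abbreviates the full scripts/rpc.py path in the log):

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py bdev_malloc_create 64 512 -b Malloc1
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420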
00:17:44.579 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:44.579 Malloc1 00:17:44.579 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.579 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:17:44.579 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.579 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:44.579 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.579 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:44.579 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.579 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:44.579 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.579 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:44.579 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.579 05:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:44.579 05:33:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.579 05:33:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:44.579 05:33:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.579 05:33:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:44.579 [2024-12-13 05:33:44.004369] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:44.579 05:33:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.579 05:33:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:44.579 05:33:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.579 05:33:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:44.579 05:33:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.579 05:33:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:17:44.579 00:17:44.579 Discovery Log Number of Records 2, Generation counter 2 00:17:44.579 =====Discovery Log Entry 0====== 00:17:44.579 trtype: tcp 00:17:44.579 adrfam: ipv4 00:17:44.579 subtype: current discovery subsystem 00:17:44.579 treq: not required 00:17:44.579 portid: 0 00:17:44.579 trsvcid: 4420 00:17:44.579 subnqn: 
nqn.2014-08.org.nvmexpress.discovery 00:17:44.579 traddr: 10.0.0.2 00:17:44.579 eflags: explicit discovery connections, duplicate discovery information 00:17:44.579 sectype: none 00:17:44.579 =====Discovery Log Entry 1====== 00:17:44.579 trtype: tcp 00:17:44.579 adrfam: ipv4 00:17:44.579 subtype: nvme subsystem 00:17:44.579 treq: not required 00:17:44.579 portid: 0 00:17:44.579 trsvcid: 4420 00:17:44.579 subnqn: nqn.2016-06.io.spdk:cnode1 00:17:44.579 traddr: 10.0.0.2 00:17:44.579 eflags: none 00:17:44.579 sectype: none 00:17:44.579 05:33:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:17:44.579 05:33:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:17:44.579 05:33:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:17:44.579 05:33:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:44.579 05:33:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:17:44.579 05:33:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:17:44.579 05:33:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:44.579 05:33:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:17:44.579 05:33:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:44.579 05:33:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:17:44.579 05:33:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:45.518 05:33:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:17:45.518 05:33:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:17:45.518 05:33:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:45.518 05:33:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:17:45.518 05:33:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:17:45.518 05:33:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:17:47.425 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:47.425 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:47.425 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:47.425 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:17:47.425 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:47.425 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:17:47.425 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:17:47.425 05:33:47 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:17:47.425 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:47.425 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:17:47.685 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:17:47.685 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:47.685 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:17:47.685 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:47.685 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:17:47.685 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:17:47.685 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:47.685 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:17:47.685 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:17:47.685 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:47.685 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:17:47.685 /dev/nvme0n2 ]] 00:17:47.685 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:17:47.685 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:17:47.685 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:17:47.685 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:47.685 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:17:47.685 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:17:47.685 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:47.685 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:17:47.685 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:47.685 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:17:47.685 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:17:47.685 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:47.685 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:17:47.685 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:17:47.685 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:47.685 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:17:47.685 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:47.945 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:47.945 05:33:47 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:47.945 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:17:47.945 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:47.945 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:47.945 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:47.945 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:47.945 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # return 0 00:17:47.945 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:17:47.945 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:47.945 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.945 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:48.205 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.205 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:17:48.205 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:17:48.205 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:48.205 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:17:48.205 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:48.205 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:17:48.205 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:48.205 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:48.205 rmmod nvme_tcp 00:17:48.205 rmmod nvme_fabrics 00:17:48.205 rmmod nvme_keyring 00:17:48.205 05:33:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:48.205 05:33:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:17:48.205 05:33:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:17:48.205 05:33:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 294936 ']' 00:17:48.205 05:33:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 294936 00:17:48.205 05:33:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 294936 ']' 00:17:48.205 05:33:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 294936 00:17:48.205 05:33:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:17:48.205 05:33:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:48.205 05:33:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 294936 
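The host-side verification traced across the last few chunks is serial-number driven: after nvme connect, waitforserial polls lsblk until two namespaces report the subsystem serial, and after nvme disconnect, waitforserial_disconnect polls until the serial is gone. A compressed sketch of that cycle, matching the commands in the trace (the real helpers also cap the number of retries):

    nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
        -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    until (( $(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME) >= 2 )); do
        sleep 2    # the traced helper waits 2s between attempts
    done
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    while lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do
        sleep 1    # assumed interval; the trace shows no explicit sleep here
    done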
00:17:48.205 05:33:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:48.205 05:33:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:48.205 05:33:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 294936' 00:17:48.205 killing process with pid 294936 00:17:48.205 05:33:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 294936 00:17:48.205 05:33:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 294936 00:17:48.466 05:33:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:48.466 05:33:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:48.466 05:33:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:48.466 05:33:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:17:48.466 05:33:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:17:48.466 05:33:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:48.466 05:33:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:17:48.466 05:33:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:48.466 05:33:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:48.466 05:33:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:48.466 05:33:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:48.466 05:33:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:50.375 05:33:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:50.375 00:17:50.375 real 0m12.865s 00:17:50.375 user 0m19.718s 00:17:50.375 sys 0m4.988s 00:17:50.375 05:33:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:50.375 05:33:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:50.375 ************************************ 00:17:50.375 END TEST nvmf_nvme_cli 00:17:50.375 ************************************ 00:17:50.636 05:33:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:17:50.636 05:33:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:17:50.636 05:33:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:50.636 05:33:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:50.636 05:33:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:50.636 ************************************ 00:17:50.636 START TEST nvmf_vfio_user 00:17:50.636 ************************************ 00:17:50.636 05:33:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh 
--transport=tcp 00:17:50.636 * Looking for test storage... 00:17:50.636 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:50.636 05:33:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:50.636 05:33:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # lcov --version 00:17:50.636 05:33:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:50.636 05:33:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:50.636 05:33:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:50.636 05:33:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:50.636 05:33:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:50.636 05:33:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:17:50.636 05:33:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:17:50.636 05:33:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:17:50.636 05:33:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:17:50.636 05:33:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:17:50.636 05:33:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:17:50.636 05:33:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:17:50.636 05:33:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:50.636 05:33:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:17:50.636 05:33:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:17:50.636 05:33:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:50.636 05:33:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:50.636 05:33:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:17:50.636 05:33:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:17:50.636 05:33:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:50.636 05:33:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:17:50.636 05:33:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:17:50.636 05:33:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:17:50.636 05:33:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:17:50.636 05:33:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:50.636 05:33:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:17:50.636 05:33:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:17:50.636 05:33:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:50.636 05:33:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:50.636 05:33:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:17:50.636 05:33:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:50.636 05:33:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:50.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:50.636 --rc genhtml_branch_coverage=1 00:17:50.636 --rc genhtml_function_coverage=1 00:17:50.636 --rc genhtml_legend=1 00:17:50.636 --rc geninfo_all_blocks=1 00:17:50.637 --rc geninfo_unexecuted_blocks=1 00:17:50.637 00:17:50.637 ' 00:17:50.637 05:33:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:50.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:50.637 --rc genhtml_branch_coverage=1 00:17:50.637 --rc genhtml_function_coverage=1 00:17:50.637 --rc genhtml_legend=1 00:17:50.637 --rc geninfo_all_blocks=1 00:17:50.637 --rc geninfo_unexecuted_blocks=1 00:17:50.637 00:17:50.637 ' 00:17:50.637 05:33:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:50.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:50.637 --rc genhtml_branch_coverage=1 00:17:50.637 --rc genhtml_function_coverage=1 00:17:50.637 --rc genhtml_legend=1 00:17:50.637 --rc geninfo_all_blocks=1 00:17:50.637 --rc geninfo_unexecuted_blocks=1 00:17:50.637 00:17:50.637 ' 00:17:50.637 05:33:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:50.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:50.637 --rc genhtml_branch_coverage=1 00:17:50.637 --rc genhtml_function_coverage=1 00:17:50.637 --rc genhtml_legend=1 00:17:50.637 --rc geninfo_all_blocks=1 00:17:50.637 --rc geninfo_unexecuted_blocks=1 00:17:50.637 00:17:50.637 ' 00:17:50.637 05:33:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:50.637 05:33:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
nvmf/common.sh@7 -- # uname -s 00:17:50.637 05:33:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:50.637 05:33:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:50.637 05:33:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:50.637 05:33:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:50.637 05:33:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:50.637 05:33:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:50.637 05:33:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:50.637 05:33:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:50.637 05:33:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:50.637 05:33:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:50.637 05:33:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:50.637 05:33:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:17:50.637 05:33:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:50.637 05:33:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:50.637 05:33:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:50.637 05:33:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:50.637 05:33:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:50.637 05:33:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:17:50.637 05:33:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:50.637 05:33:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:50.637 05:33:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:50.637 05:33:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:50.637 05:33:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:50.637 05:33:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:50.637 05:33:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:17:50.637 05:33:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:50.637 05:33:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:17:50.637 05:33:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:50.637 05:33:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:50.637 05:33:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:50.637 05:33:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:50.637 05:33:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:50.637 05:33:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:50.637 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:50.637 05:33:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:50.637 05:33:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:50.637 05:33:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:50.637 05:33:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:17:50.637 05:33:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 
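The xtrace above walks scripts/common.sh through a field-by-field version compare: each dotted component is first validated as a plain integer (the "decimal" steps), then components are compared numerically until one side differs. A minimal standalone sketch of that pattern, assuming illustrative function names rather than SPDK's exact helpers:

    #!/usr/bin/env bash
    # Sketch of the per-component version compare traced above; names are
    # illustrative, not the exact scripts/common.sh helpers.

    decimal() {
        local d=$1
        # Accept plain integers only; anything else compares as 0 (simplified).
        [[ $d =~ ^[0-9]+$ ]] && echo "$d" || echo 0
    }

    # version_lt A B -> exit 0 when A < B, comparing dot-separated fields
    version_lt() {
        local IFS=.
        local -a ver1=($1) ver2=($2)
        local v a b
        for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
            a=$(decimal "${ver1[v]:-0}")
            b=$(decimal "${ver2[v]:-0}")
            ((a > b)) && return 1
            ((a < b)) && return 0
        done
        return 1  # equal versions are not "less than"
    }

    version_lt 1.2 2.0 && echo "1.2 < 2.0"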
00:17:50.637 05:33:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:17:50.637 05:33:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:50.637 05:33:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:17:50.637 05:33:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:17:50.637 05:33:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:17:50.637 05:33:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:17:50.637 05:33:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:17:50.637 05:33:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:17:50.637 05:33:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=296194 00:17:50.637 05:33:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 296194' 00:17:50.637 Process pid: 296194 00:17:50.637 05:33:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:17:50.637 05:33:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 296194 00:17:50.637 05:33:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:17:50.637 05:33:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 296194 ']' 00:17:50.637 05:33:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:50.637 05:33:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:50.637 05:33:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:50.638 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:50.638 05:33:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:50.638 05:33:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:17:50.898 [2024-12-13 05:33:50.692332] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:17:50.898 [2024-12-13 05:33:50.692392] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:50.898 [2024-12-13 05:33:50.750834] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:50.898 [2024-12-13 05:33:50.774355] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:50.898 [2024-12-13 05:33:50.774391] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
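Target startup in the trace is a two-step handshake: nvmf_tgt is launched pinned to cores 0-3 with a kill trap installed, then waitforlisten blocks until the app answers on /var/tmp/spdk.sock. A hedged sketch of that pattern; the polling loop below is illustrative, not SPDK's actual autotest helper (the trace uses killprocess from common/autotest_common.sh in its trap):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m '[0,1,2,3]' &
    nvmfpid=$!
    trap 'kill "$nvmfpid"; exit 1' SIGINT SIGTERM EXIT

    # Poll until the target listens on the default RPC socket;
    # rpc.py exits non-zero while /var/tmp/spdk.sock is not yet up.
    for _ in $(seq 1 100); do
        "$SPDK/scripts/rpc.py" rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.1
    done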
00:17:50.898 [2024-12-13 05:33:50.774398] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:50.898 [2024-12-13 05:33:50.774404] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:50.898 [2024-12-13 05:33:50.774410] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:50.898 [2024-12-13 05:33:50.778467] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:17:50.898 [2024-12-13 05:33:50.778505] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:17:50.898 [2024-12-13 05:33:50.778611] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:50.898 [2024-12-13 05:33:50.778612] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:17:50.898 05:33:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:50.898 05:33:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:17:50.898 05:33:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:17:52.277 05:33:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:17:52.277 05:33:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:17:52.277 05:33:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:17:52.277 05:33:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:52.277 05:33:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:17:52.277 05:33:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:17:52.536 Malloc1 00:17:52.536 05:33:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:17:52.536 05:33:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:17:52.795 05:33:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:17:53.053 05:33:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:53.053 05:33:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:17:53.053 05:33:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:17:53.312 Malloc2 00:17:53.312 05:33:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 
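From nvmf_create_transport down to the listener registration just below, the RPC calls in the trace form a complete two-device vfio-user setup. Collected into one sketch for readability (including the cnode2 add_ns/add_listener steps that follow), with $SPDK as in the previous sketch:

    rpc="$SPDK/scripts/rpc.py"

    "$rpc" nvmf_create_transport -t VFIOUSER
    mkdir -p /var/run/vfio-user

    for i in 1 2; do
        mkdir -p "/var/run/vfio-user/domain/vfio-user$i/$i"
        "$rpc" bdev_malloc_create 64 512 -b "Malloc$i"   # 64 MiB bdev, 512 B blocks
        "$rpc" nvmf_create_subsystem "nqn.2019-07.io.spdk:cnode$i" -a -s "SPDK$i"
        "$rpc" nvmf_subsystem_add_ns "nqn.2019-07.io.spdk:cnode$i" "Malloc$i"
        "$rpc" nvmf_subsystem_add_listener "nqn.2019-07.io.spdk:cnode$i" \
            -t VFIOUSER -a "/var/run/vfio-user/domain/vfio-user$i/$i" -s 0
    done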
00:17:53.572 05:33:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:17:53.572 05:33:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:17:53.833 05:33:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:17:53.833 05:33:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:17:53.833 05:33:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:53.833 05:33:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:17:53.833 05:33:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:17:53.833 05:33:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:17:53.834 [2024-12-13 05:33:53.759416] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:17:53.834 [2024-12-13 05:33:53.759469] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid296770 ] 00:17:53.834 [2024-12-13 05:33:53.799803] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:17:53.834 [2024-12-13 05:33:53.808861] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:17:53.834 [2024-12-13 05:33:53.808881] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f6b371b6000 00:17:53.834 [2024-12-13 05:33:53.809859] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:53.834 [2024-12-13 05:33:53.810854] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:53.834 [2024-12-13 05:33:53.811861] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:53.834 [2024-12-13 05:33:53.812867] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:53.834 [2024-12-13 05:33:53.813867] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:53.834 [2024-12-13 05:33:53.814874] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:53.834 [2024-12-13 05:33:53.815879] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 
0x3, Cap offset 0 00:17:53.834 [2024-12-13 05:33:53.816884] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:53.834 [2024-12-13 05:33:53.817894] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:17:53.834 [2024-12-13 05:33:53.817903] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f6b35ebf000 00:17:53.834 [2024-12-13 05:33:53.818810] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:17:53.834 [2024-12-13 05:33:53.828208] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:17:53.834 [2024-12-13 05:33:53.828231] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:17:53.834 [2024-12-13 05:33:53.832992] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:17:53.834 [2024-12-13 05:33:53.833028] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:17:53.834 [2024-12-13 05:33:53.833099] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:17:53.834 [2024-12-13 05:33:53.833113] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:17:53.834 [2024-12-13 05:33:53.833118] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:17:53.834 [2024-12-13 05:33:53.833992] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:17:53.834 [2024-12-13 05:33:53.834001] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:17:53.834 [2024-12-13 05:33:53.834008] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:17:53.834 [2024-12-13 05:33:53.834996] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:17:53.834 [2024-12-13 05:33:53.835004] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:17:53.834 [2024-12-13 05:33:53.835010] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:17:53.834 [2024-12-13 05:33:53.835998] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:17:53.834 [2024-12-13 05:33:53.836006] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:53.834 [2024-12-13 05:33:53.837008] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 
00:17:53.834 [2024-12-13 05:33:53.837016] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:17:53.834 [2024-12-13 05:33:53.837020] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:17:53.834 [2024-12-13 05:33:53.837026] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:53.834 [2024-12-13 05:33:53.837137] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:17:53.834 [2024-12-13 05:33:53.837141] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:53.834 [2024-12-13 05:33:53.837146] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:17:53.834 [2024-12-13 05:33:53.838017] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:17:53.834 [2024-12-13 05:33:53.839019] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:17:53.834 [2024-12-13 05:33:53.840026] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:17:53.834 [2024-12-13 05:33:53.841029] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:53.834 [2024-12-13 05:33:53.841104] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:53.834 [2024-12-13 05:33:53.842039] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:17:53.834 [2024-12-13 05:33:53.842046] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:53.834 [2024-12-13 05:33:53.842050] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:17:53.834 [2024-12-13 05:33:53.842067] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:17:53.834 [2024-12-13 05:33:53.842073] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:17:53.834 [2024-12-13 05:33:53.842084] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:53.834 [2024-12-13 05:33:53.842089] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:53.834 [2024-12-13 05:33:53.842092] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:53.834 [2024-12-13 05:33:53.842104] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 
PRP2 0x0 00:17:53.834 [2024-12-13 05:33:53.842151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:17:53.834 [2024-12-13 05:33:53.842159] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:17:53.834 [2024-12-13 05:33:53.842163] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:17:53.834 [2024-12-13 05:33:53.842167] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:17:53.834 [2024-12-13 05:33:53.842171] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:17:53.834 [2024-12-13 05:33:53.842175] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:17:53.834 [2024-12-13 05:33:53.842179] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:17:53.834 [2024-12-13 05:33:53.842183] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:17:53.834 [2024-12-13 05:33:53.842193] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:17:53.834 [2024-12-13 05:33:53.842203] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:17:53.834 [2024-12-13 05:33:53.842217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:17:53.834 [2024-12-13 05:33:53.842226] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:53.834 [2024-12-13 05:33:53.842233] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:53.834 [2024-12-13 05:33:53.842240] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:53.834 [2024-12-13 05:33:53.842247] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:53.834 [2024-12-13 05:33:53.842251] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:17:53.834 [2024-12-13 05:33:53.842259] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:53.834 [2024-12-13 05:33:53.842267] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:17:53.834 [2024-12-13 05:33:53.842278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:17:53.834 [2024-12-13 05:33:53.842283] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:17:53.835 
[2024-12-13 05:33:53.842287] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:17:53.835 [2024-12-13 05:33:53.842292] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:17:53.835 [2024-12-13 05:33:53.842298] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:17:53.835 [2024-12-13 05:33:53.842305] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:17:53.835 [2024-12-13 05:33:53.842320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:17:53.835 [2024-12-13 05:33:53.842366] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:17:53.835 [2024-12-13 05:33:53.842375] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:17:53.835 [2024-12-13 05:33:53.842381] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:17:53.835 [2024-12-13 05:33:53.842385] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:17:53.835 [2024-12-13 05:33:53.842388] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:53.835 [2024-12-13 05:33:53.842393] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:17:53.835 [2024-12-13 05:33:53.842410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:17:53.835 [2024-12-13 05:33:53.842417] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:17:53.835 [2024-12-13 05:33:53.842428] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:17:53.835 [2024-12-13 05:33:53.842435] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:17:53.835 [2024-12-13 05:33:53.842440] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:53.835 [2024-12-13 05:33:53.842444] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:53.835 [2024-12-13 05:33:53.842452] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:53.835 [2024-12-13 05:33:53.842458] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:53.835 [2024-12-13 05:33:53.842482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:17:53.835 [2024-12-13 05:33:53.842493] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace 
id descriptors (timeout 30000 ms) 00:17:53.835 [2024-12-13 05:33:53.842500] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:17:53.835 [2024-12-13 05:33:53.842506] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:53.835 [2024-12-13 05:33:53.842509] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:53.835 [2024-12-13 05:33:53.842512] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:53.835 [2024-12-13 05:33:53.842518] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:53.835 [2024-12-13 05:33:53.842529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:17:53.835 [2024-12-13 05:33:53.842536] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:17:53.835 [2024-12-13 05:33:53.842541] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:17:53.835 [2024-12-13 05:33:53.842548] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:17:53.835 [2024-12-13 05:33:53.842553] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:17:53.835 [2024-12-13 05:33:53.842557] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:17:53.835 [2024-12-13 05:33:53.842562] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:17:53.835 [2024-12-13 05:33:53.842566] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:17:53.835 [2024-12-13 05:33:53.842570] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:17:53.835 [2024-12-13 05:33:53.842575] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:17:53.835 [2024-12-13 05:33:53.842590] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:17:53.835 [2024-12-13 05:33:53.842599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:17:53.835 [2024-12-13 05:33:53.842610] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:17:53.835 [2024-12-13 05:33:53.842618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:17:53.835 [2024-12-13 05:33:53.842628] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 
cdw10:00000004 PRP1 0x0 PRP2 0x0 00:17:53.835 [2024-12-13 05:33:53.842640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:17:53.835 [2024-12-13 05:33:53.842649] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:17:53.835 [2024-12-13 05:33:53.842661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:17:53.835 [2024-12-13 05:33:53.842672] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:17:53.835 [2024-12-13 05:33:53.842676] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:17:53.835 [2024-12-13 05:33:53.842679] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:17:53.835 [2024-12-13 05:33:53.842682] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:17:53.835 [2024-12-13 05:33:53.842684] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:17:53.835 [2024-12-13 05:33:53.842690] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:17:53.835 [2024-12-13 05:33:53.842696] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:17:53.835 [2024-12-13 05:33:53.842699] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:17:53.835 [2024-12-13 05:33:53.842702] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:53.835 [2024-12-13 05:33:53.842708] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:17:53.835 [2024-12-13 05:33:53.842713] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:17:53.835 [2024-12-13 05:33:53.842717] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:53.835 [2024-12-13 05:33:53.842720] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:53.835 [2024-12-13 05:33:53.842725] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:53.835 [2024-12-13 05:33:53.842732] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:17:53.835 [2024-12-13 05:33:53.842735] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:17:53.835 [2024-12-13 05:33:53.842738] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:53.835 [2024-12-13 05:33:53.842744] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:17:53.835 [2024-12-13 05:33:53.842750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:17:53.835 [2024-12-13 05:33:53.842761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 
sqhd:0011 p:1 m:0 dnr:0 00:17:53.835 [2024-12-13 05:33:53.842770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:17:53.835 [2024-12-13 05:33:53.842776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:17:53.835 ===================================================== 00:17:53.835 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:53.835 ===================================================== 00:17:53.835 Controller Capabilities/Features 00:17:53.835 ================================ 00:17:53.835 Vendor ID: 4e58 00:17:53.835 Subsystem Vendor ID: 4e58 00:17:53.835 Serial Number: SPDK1 00:17:53.835 Model Number: SPDK bdev Controller 00:17:53.835 Firmware Version: 25.01 00:17:53.835 Recommended Arb Burst: 6 00:17:53.835 IEEE OUI Identifier: 8d 6b 50 00:17:53.835 Multi-path I/O 00:17:53.835 May have multiple subsystem ports: Yes 00:17:53.835 May have multiple controllers: Yes 00:17:53.835 Associated with SR-IOV VF: No 00:17:53.835 Max Data Transfer Size: 131072 00:17:53.835 Max Number of Namespaces: 32 00:17:53.835 Max Number of I/O Queues: 127 00:17:53.835 NVMe Specification Version (VS): 1.3 00:17:53.835 NVMe Specification Version (Identify): 1.3 00:17:53.835 Maximum Queue Entries: 256 00:17:53.835 Contiguous Queues Required: Yes 00:17:53.835 Arbitration Mechanisms Supported 00:17:53.835 Weighted Round Robin: Not Supported 00:17:53.836 Vendor Specific: Not Supported 00:17:53.836 Reset Timeout: 15000 ms 00:17:53.836 Doorbell Stride: 4 bytes 00:17:53.836 NVM Subsystem Reset: Not Supported 00:17:53.836 Command Sets Supported 00:17:53.836 NVM Command Set: Supported 00:17:53.836 Boot Partition: Not Supported 00:17:53.836 Memory Page Size Minimum: 4096 bytes 00:17:53.836 Memory Page Size Maximum: 4096 bytes 00:17:53.836 Persistent Memory Region: Not Supported 00:17:53.836 Optional Asynchronous Events Supported 00:17:53.836 Namespace Attribute Notices: Supported 00:17:53.836 Firmware Activation Notices: Not Supported 00:17:53.836 ANA Change Notices: Not Supported 00:17:53.836 PLE Aggregate Log Change Notices: Not Supported 00:17:53.836 LBA Status Info Alert Notices: Not Supported 00:17:53.836 EGE Aggregate Log Change Notices: Not Supported 00:17:53.836 Normal NVM Subsystem Shutdown event: Not Supported 00:17:53.836 Zone Descriptor Change Notices: Not Supported 00:17:53.836 Discovery Log Change Notices: Not Supported 00:17:53.836 Controller Attributes 00:17:53.836 128-bit Host Identifier: Supported 00:17:53.836 Non-Operational Permissive Mode: Not Supported 00:17:53.836 NVM Sets: Not Supported 00:17:53.836 Read Recovery Levels: Not Supported 00:17:53.836 Endurance Groups: Not Supported 00:17:53.836 Predictable Latency Mode: Not Supported 00:17:53.836 Traffic Based Keep ALive: Not Supported 00:17:53.836 Namespace Granularity: Not Supported 00:17:53.836 SQ Associations: Not Supported 00:17:53.836 UUID List: Not Supported 00:17:53.836 Multi-Domain Subsystem: Not Supported 00:17:53.836 Fixed Capacity Management: Not Supported 00:17:53.836 Variable Capacity Management: Not Supported 00:17:53.836 Delete Endurance Group: Not Supported 00:17:53.836 Delete NVM Set: Not Supported 00:17:53.836 Extended LBA Formats Supported: Not Supported 00:17:53.836 Flexible Data Placement Supported: Not Supported 00:17:53.836 00:17:53.836 Controller Memory Buffer Support 00:17:53.836 ================================ 00:17:53.836 
Supported: No 00:17:53.836 00:17:53.836 Persistent Memory Region Support 00:17:53.836 ================================ 00:17:53.836 Supported: No 00:17:53.836 00:17:53.836 Admin Command Set Attributes 00:17:53.836 ============================ 00:17:53.836 Security Send/Receive: Not Supported 00:17:53.836 Format NVM: Not Supported 00:17:53.836 Firmware Activate/Download: Not Supported 00:17:53.836 Namespace Management: Not Supported 00:17:53.836 Device Self-Test: Not Supported 00:17:53.836 Directives: Not Supported 00:17:53.836 NVMe-MI: Not Supported 00:17:53.836 Virtualization Management: Not Supported 00:17:53.836 Doorbell Buffer Config: Not Supported 00:17:53.836 Get LBA Status Capability: Not Supported 00:17:53.836 Command & Feature Lockdown Capability: Not Supported 00:17:53.836 Abort Command Limit: 4 00:17:53.836 Async Event Request Limit: 4 00:17:53.836 Number of Firmware Slots: N/A 00:17:53.836 Firmware Slot 1 Read-Only: N/A 00:17:53.836 Firmware Activation Without Reset: N/A 00:17:53.836 Multiple Update Detection Support: N/A 00:17:53.836 Firmware Update Granularity: No Information Provided 00:17:53.836 Per-Namespace SMART Log: No 00:17:53.836 Asymmetric Namespace Access Log Page: Not Supported 00:17:53.836 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:17:53.836 Command Effects Log Page: Supported 00:17:53.836 Get Log Page Extended Data: Supported 00:17:53.836 Telemetry Log Pages: Not Supported 00:17:53.836 Persistent Event Log Pages: Not Supported 00:17:53.836 Supported Log Pages Log Page: May Support 00:17:53.836 Commands Supported & Effects Log Page: Not Supported 00:17:53.836 Feature Identifiers & Effects Log Page:May Support 00:17:53.836 NVMe-MI Commands & Effects Log Page: May Support 00:17:53.836 Data Area 4 for Telemetry Log: Not Supported 00:17:53.836 Error Log Page Entries Supported: 128 00:17:53.836 Keep Alive: Supported 00:17:53.836 Keep Alive Granularity: 10000 ms 00:17:53.836 00:17:53.836 NVM Command Set Attributes 00:17:53.836 ========================== 00:17:53.836 Submission Queue Entry Size 00:17:53.836 Max: 64 00:17:53.836 Min: 64 00:17:53.836 Completion Queue Entry Size 00:17:53.836 Max: 16 00:17:53.836 Min: 16 00:17:53.836 Number of Namespaces: 32 00:17:53.836 Compare Command: Supported 00:17:53.836 Write Uncorrectable Command: Not Supported 00:17:53.836 Dataset Management Command: Supported 00:17:53.836 Write Zeroes Command: Supported 00:17:53.836 Set Features Save Field: Not Supported 00:17:53.836 Reservations: Not Supported 00:17:53.836 Timestamp: Not Supported 00:17:53.836 Copy: Supported 00:17:53.836 Volatile Write Cache: Present 00:17:53.836 Atomic Write Unit (Normal): 1 00:17:53.836 Atomic Write Unit (PFail): 1 00:17:53.836 Atomic Compare & Write Unit: 1 00:17:53.836 Fused Compare & Write: Supported 00:17:53.836 Scatter-Gather List 00:17:53.836 SGL Command Set: Supported (Dword aligned) 00:17:53.836 SGL Keyed: Not Supported 00:17:53.836 SGL Bit Bucket Descriptor: Not Supported 00:17:53.836 SGL Metadata Pointer: Not Supported 00:17:53.836 Oversized SGL: Not Supported 00:17:53.836 SGL Metadata Address: Not Supported 00:17:53.836 SGL Offset: Not Supported 00:17:53.836 Transport SGL Data Block: Not Supported 00:17:53.836 Replay Protected Memory Block: Not Supported 00:17:53.836 00:17:53.836 Firmware Slot Information 00:17:53.836 ========================= 00:17:53.836 Active slot: 1 00:17:53.836 Slot 1 Firmware Revision: 25.01 00:17:53.836 00:17:53.836 00:17:53.836 Commands Supported and Effects 00:17:53.836 ============================== 00:17:53.836 Admin 
Commands 00:17:53.836 -------------- 00:17:53.836 Get Log Page (02h): Supported 00:17:53.836 Identify (06h): Supported 00:17:53.836 Abort (08h): Supported 00:17:53.836 Set Features (09h): Supported 00:17:53.836 Get Features (0Ah): Supported 00:17:53.836 Asynchronous Event Request (0Ch): Supported 00:17:53.836 Keep Alive (18h): Supported 00:17:53.836 I/O Commands 00:17:53.836 ------------ 00:17:53.836 Flush (00h): Supported LBA-Change 00:17:53.836 Write (01h): Supported LBA-Change 00:17:53.836 Read (02h): Supported 00:17:53.836 Compare (05h): Supported 00:17:53.836 Write Zeroes (08h): Supported LBA-Change 00:17:53.836 Dataset Management (09h): Supported LBA-Change 00:17:53.836 Copy (19h): Supported LBA-Change 00:17:53.836 00:17:53.836 Error Log 00:17:53.836 ========= 00:17:53.836 00:17:53.836 Arbitration 00:17:53.836 =========== 00:17:53.836 Arbitration Burst: 1 00:17:53.836 00:17:53.836 Power Management 00:17:53.836 ================ 00:17:53.836 Number of Power States: 1 00:17:53.836 Current Power State: Power State #0 00:17:53.836 Power State #0: 00:17:53.836 Max Power: 0.00 W 00:17:53.836 Non-Operational State: Operational 00:17:53.836 Entry Latency: Not Reported 00:17:53.836 Exit Latency: Not Reported 00:17:53.836 Relative Read Throughput: 0 00:17:53.836 Relative Read Latency: 0 00:17:53.836 Relative Write Throughput: 0 00:17:53.836 Relative Write Latency: 0 00:17:53.836 Idle Power: Not Reported 00:17:53.836 Active Power: Not Reported 00:17:53.836 Non-Operational Permissive Mode: Not Supported 00:17:53.836 00:17:53.836 Health Information 00:17:53.836 ================== 00:17:53.836 Critical Warnings: 00:17:53.836 Available Spare Space: OK 00:17:53.836 Temperature: OK 00:17:53.836 Device Reliability: OK 00:17:53.836 Read Only: No 00:17:53.837 Volatile Memory Backup: OK 00:17:53.837 Current Temperature: 0 Kelvin (-273 Celsius) 00:17:53.837 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:17:53.837 Available Spare: 0% 00:17:53.837 Available Sp[2024-12-13 05:33:53.842858] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:17:53.837 [2024-12-13 05:33:53.842869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:17:53.837 [2024-12-13 05:33:53.842892] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:17:53.837 [2024-12-13 05:33:53.842900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.837 [2024-12-13 05:33:53.842905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.837 [2024-12-13 05:33:53.842911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.837 [2024-12-13 05:33:53.842916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.837 [2024-12-13 05:33:53.846459] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:17:53.837 [2024-12-13 05:33:53.846469] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:17:53.837 [2024-12-13 05:33:53.847075] 
vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:53.837 [2024-12-13 05:33:53.847123] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:17:53.837 [2024-12-13 05:33:53.847129] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:17:54.097 [2024-12-13 05:33:53.848074] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:17:54.097 [2024-12-13 05:33:53.848084] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:17:54.097 [2024-12-13 05:33:53.848133] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:17:54.097 [2024-12-13 05:33:53.849098] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:17:54.097 are Threshold: 0% 00:17:54.097 Life Percentage Used: 0% 00:17:54.097 Data Units Read: 0 00:17:54.097 Data Units Written: 0 00:17:54.097 Host Read Commands: 0 00:17:54.097 Host Write Commands: 0 00:17:54.097 Controller Busy Time: 0 minutes 00:17:54.097 Power Cycles: 0 00:17:54.097 Power On Hours: 0 hours 00:17:54.097 Unsafe Shutdowns: 0 00:17:54.097 Unrecoverable Media Errors: 0 00:17:54.097 Lifetime Error Log Entries: 0 00:17:54.097 Warning Temperature Time: 0 minutes 00:17:54.097 Critical Temperature Time: 0 minutes 00:17:54.097 00:17:54.097 Number of Queues 00:17:54.097 ================ 00:17:54.097 Number of I/O Submission Queues: 127 00:17:54.097 Number of I/O Completion Queues: 127 00:17:54.097 00:17:54.097 Active Namespaces 00:17:54.097 ================= 00:17:54.097 Namespace ID:1 00:17:54.097 Error Recovery Timeout: Unlimited 00:17:54.097 Command Set Identifier: NVM (00h) 00:17:54.097 Deallocate: Supported 00:17:54.097 Deallocated/Unwritten Error: Not Supported 00:17:54.097 Deallocated Read Value: Unknown 00:17:54.097 Deallocate in Write Zeroes: Not Supported 00:17:54.097 Deallocated Guard Field: 0xFFFF 00:17:54.097 Flush: Supported 00:17:54.097 Reservation: Supported 00:17:54.097 Namespace Sharing Capabilities: Multiple Controllers 00:17:54.097 Size (in LBAs): 131072 (0GiB) 00:17:54.097 Capacity (in LBAs): 131072 (0GiB) 00:17:54.097 Utilization (in LBAs): 131072 (0GiB) 00:17:54.097 NGUID: 4820BDD694BC40BD8F2368E7E35AE8AD 00:17:54.097 UUID: 4820bdd6-94bc-40bd-8f23-68e7e35ae8ad 00:17:54.097 Thin Provisioning: Not Supported 00:17:54.097 Per-NS Atomic Units: Yes 00:17:54.097 Atomic Boundary Size (Normal): 0 00:17:54.097 Atomic Boundary Size (PFail): 0 00:17:54.097 Atomic Boundary Offset: 0 00:17:54.097 Maximum Single Source Range Length: 65535 00:17:54.097 Maximum Copy Length: 65535 00:17:54.097 Maximum Source Range Count: 1 00:17:54.097 NGUID/EUI64 Never Reused: No 00:17:54.097 Namespace Write Protected: No 00:17:54.097 Number of LBA Formats: 1 00:17:54.097 Current LBA Format: LBA Format #00 00:17:54.097 LBA Format #00: Data Size: 512 Metadata Size: 0 00:17:54.097 00:17:54.097 05:33:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 
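Everything from here to the end of the section drives benchmark and example binaries at the same vfio-user endpoint; only the workload knobs change between runs. The first spdk_nvme_perf invocation above, reflowed for readability (its output follows below):

    perf="$SPDK/build/bin/spdk_nvme_perf"
    traddr=/var/run/vfio-user/domain/vfio-user1/1

    "$perf" -r "trtype:VFIOUSER traddr:$traddr subnqn:nqn.2019-07.io.spdk:cnode1" \
        -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2
    # -q 128 outstanding I/Os, 4096-byte reads (-o), 5 seconds (-t), core 1 (-c 0x2)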
00:17:54.097 [2024-12-13 05:33:54.071252] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:59.376 Initializing NVMe Controllers 00:17:59.376 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:59.376 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:17:59.376 Initialization complete. Launching workers. 00:17:59.376 ======================================================== 00:17:59.376 Latency(us) 00:17:59.376 Device Information : IOPS MiB/s Average min max 00:17:59.376 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39944.79 156.03 3204.54 976.30 8181.95 00:17:59.376 ======================================================== 00:17:59.376 Total : 39944.79 156.03 3204.54 976.30 8181.95 00:17:59.376 00:17:59.376 [2024-12-13 05:33:59.088804] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:59.376 05:33:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:17:59.376 [2024-12-13 05:33:59.323867] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:04.755 Initializing NVMe Controllers 00:18:04.755 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:04.755 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:18:04.755 Initialization complete. Launching workers. 
00:18:04.755 ======================================================== 00:18:04.755 Latency(us) 00:18:04.755 Device Information : IOPS MiB/s Average min max 00:18:04.755 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16054.11 62.71 7978.42 4987.40 10973.67 00:18:04.755 ======================================================== 00:18:04.755 Total : 16054.11 62.71 7978.42 4987.40 10973.67 00:18:04.755 00:18:04.755 [2024-12-13 05:34:04.365024] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:04.755 05:34:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:18:04.755 [2024-12-13 05:34:04.568004] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:10.463 [2024-12-13 05:34:09.662869] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:10.463 Initializing NVMe Controllers 00:18:10.463 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:10.463 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:10.463 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:18:10.463 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:18:10.463 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:18:10.463 Initialization complete. Launching workers. 00:18:10.463 Starting thread on core 2 00:18:10.463 Starting thread on core 3 00:18:10.463 Starting thread on core 1 00:18:10.463 05:34:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:18:10.463 [2024-12-13 05:34:09.948418] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:13.123 [2024-12-13 05:34:13.009897] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:13.123 Initializing NVMe Controllers 00:18:13.123 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:13.123 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:13.123 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:18:13.123 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:18:13.123 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:18:13.123 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:18:13.123 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:18:13.123 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:18:13.123 Initialization complete. Launching workers. 
00:18:13.123 Starting thread on core 1 with urgent priority queue 00:18:13.123 Starting thread on core 2 with urgent priority queue 00:18:13.123 Starting thread on core 3 with urgent priority queue 00:18:13.123 Starting thread on core 0 with urgent priority queue 00:18:13.123 SPDK bdev Controller (SPDK1 ) core 0: 8135.67 IO/s 12.29 secs/100000 ios 00:18:13.123 SPDK bdev Controller (SPDK1 ) core 1: 8097.67 IO/s 12.35 secs/100000 ios 00:18:13.123 SPDK bdev Controller (SPDK1 ) core 2: 8831.67 IO/s 11.32 secs/100000 ios 00:18:13.123 SPDK bdev Controller (SPDK1 ) core 3: 9890.00 IO/s 10.11 secs/100000 ios 00:18:13.123 ======================================================== 00:18:13.123 00:18:13.123 05:34:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:18:13.397 [2024-12-13 05:34:13.295876] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:13.397 Initializing NVMe Controllers 00:18:13.397 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:13.397 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:13.397 Namespace ID: 1 size: 0GB 00:18:13.397 Initialization complete. 00:18:13.397 INFO: using host memory buffer for IO 00:18:13.397 Hello world! 00:18:13.397 [2024-12-13 05:34:13.330075] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:13.397 05:34:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:18:13.672 [2024-12-13 05:34:13.613839] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:14.663 Initializing NVMe Controllers 00:18:14.663 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:14.663 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:14.663 Initialization complete. Launching workers. 
00:18:14.663 submit (in ns) avg, min, max = 8302.3, 3171.4, 4000700.0 00:18:14.663 complete (in ns) avg, min, max = 20653.9, 1728.6, 4000041.0 00:18:14.663 00:18:14.663 Submit histogram 00:18:14.663 ================ 00:18:14.663 Range in us Cumulative Count 00:18:14.663 3.170 - 3.185: 0.0061% ( 1) 00:18:14.663 3.185 - 3.200: 0.0184% ( 2) 00:18:14.663 3.200 - 3.215: 0.1899% ( 28) 00:18:14.663 3.215 - 3.230: 1.2315% ( 170) 00:18:14.663 3.230 - 3.246: 2.9837% ( 286) 00:18:14.663 3.246 - 3.261: 5.2445% ( 369) 00:18:14.663 3.261 - 3.276: 9.7782% ( 740) 00:18:14.663 3.276 - 3.291: 16.2358% ( 1054) 00:18:14.663 3.291 - 3.307: 22.3625% ( 1000) 00:18:14.663 3.307 - 3.322: 29.1692% ( 1111) 00:18:14.663 3.322 - 3.337: 35.9576% ( 1108) 00:18:14.663 3.337 - 3.352: 41.0183% ( 826) 00:18:14.664 3.352 - 3.368: 45.5091% ( 733) 00:18:14.664 3.368 - 3.383: 51.0415% ( 903) 00:18:14.664 3.383 - 3.398: 55.6304% ( 749) 00:18:14.664 3.398 - 3.413: 60.0784% ( 726) 00:18:14.664 3.413 - 3.429: 66.6279% ( 1069) 00:18:14.664 3.429 - 3.444: 72.4482% ( 950) 00:18:14.664 3.444 - 3.459: 76.9391% ( 733) 00:18:14.664 3.459 - 3.474: 81.4851% ( 742) 00:18:14.664 3.474 - 3.490: 84.6097% ( 510) 00:18:14.664 3.490 - 3.505: 86.5213% ( 312) 00:18:14.664 3.505 - 3.520: 87.4525% ( 152) 00:18:14.664 3.520 - 3.535: 88.1081% ( 107) 00:18:14.664 3.535 - 3.550: 88.6288% ( 85) 00:18:14.664 3.550 - 3.566: 89.2231% ( 97) 00:18:14.664 3.566 - 3.581: 90.0441% ( 134) 00:18:14.664 3.581 - 3.596: 90.8712% ( 135) 00:18:14.664 3.596 - 3.611: 91.9005% ( 168) 00:18:14.664 3.611 - 3.627: 92.6847% ( 128) 00:18:14.664 3.627 - 3.642: 93.4567% ( 126) 00:18:14.664 3.642 - 3.657: 94.2777% ( 134) 00:18:14.664 3.657 - 3.672: 95.0251% ( 122) 00:18:14.664 3.672 - 3.688: 95.8338% ( 132) 00:18:14.664 3.688 - 3.703: 96.6671% ( 136) 00:18:14.664 3.703 - 3.718: 97.4758% ( 132) 00:18:14.664 3.718 - 3.733: 98.0578% ( 95) 00:18:14.664 3.733 - 3.749: 98.4990% ( 72) 00:18:14.664 3.749 - 3.764: 98.9033% ( 66) 00:18:14.664 3.764 - 3.779: 99.0687% ( 27) 00:18:14.664 3.779 - 3.794: 99.2403% ( 28) 00:18:14.664 3.794 - 3.810: 99.3567% ( 19) 00:18:14.664 3.810 - 3.825: 99.4670% ( 18) 00:18:14.664 3.825 - 3.840: 99.5528% ( 14) 00:18:14.664 3.840 - 3.855: 99.5773% ( 4) 00:18:14.664 3.855 - 3.870: 99.5895% ( 2) 00:18:14.664 3.870 - 3.886: 99.6018% ( 2) 00:18:14.664 3.886 - 3.901: 99.6201% ( 3) 00:18:14.664 3.901 - 3.931: 99.6324% ( 2) 00:18:14.664 5.120 - 5.150: 99.6385% ( 1) 00:18:14.664 5.150 - 5.181: 99.6447% ( 1) 00:18:14.664 5.425 - 5.455: 99.6508% ( 1) 00:18:14.664 5.455 - 5.486: 99.6569% ( 1) 00:18:14.664 5.486 - 5.516: 99.6630% ( 1) 00:18:14.664 5.547 - 5.577: 99.6692% ( 1) 00:18:14.664 5.577 - 5.608: 99.6753% ( 1) 00:18:14.664 5.638 - 5.669: 99.6814% ( 1) 00:18:14.664 5.699 - 5.730: 99.6875% ( 1) 00:18:14.664 5.730 - 5.760: 99.6937% ( 1) 00:18:14.664 5.760 - 5.790: 99.7059% ( 2) 00:18:14.664 5.790 - 5.821: 99.7120% ( 1) 00:18:14.664 5.882 - 5.912: 99.7182% ( 1) 00:18:14.664 6.004 - 6.034: 99.7243% ( 1) 00:18:14.664 6.156 - 6.187: 99.7304% ( 1) 00:18:14.664 6.217 - 6.248: 99.7366% ( 1) 00:18:14.664 6.339 - 6.370: 99.7427% ( 1) 00:18:14.664 6.370 - 6.400: 99.7488% ( 1) 00:18:14.664 6.400 - 6.430: 99.7549% ( 1) 00:18:14.664 6.552 - 6.583: 99.7611% ( 1) 00:18:14.664 6.613 - 6.644: 99.7672% ( 1) 00:18:14.664 6.735 - 6.766: 99.7733% ( 1) 00:18:14.664 6.888 - 6.918: 99.7794% ( 1) 00:18:14.664 7.010 - 7.040: 99.7856% ( 1) 00:18:14.664 7.131 - 7.162: 99.7917% ( 1) 00:18:14.664 7.192 - 7.223: 99.7978% ( 1) 00:18:14.664 7.314 - 7.345: 99.8039% ( 1) 00:18:14.664 7.345 - 7.375: 
99.8101% ( 1) 00:18:14.664 7.436 - 7.467: 99.8162% ( 1) 00:18:14.664 7.497 - 7.528: 99.8223% ( 1) 00:18:14.664 [2024-12-13 05:34:14.635924] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:14.938 7.528 - 7.558: 99.8285% ( 1) 00:18:14.938 7.558 - 7.589: 99.8346% ( 1) 00:18:14.938 7.863 - 7.924: 99.8407% ( 1) 00:18:14.938 7.985 - 8.046: 99.8468% ( 1) 00:18:14.938 8.472 - 8.533: 99.8530% ( 1) 00:18:14.938 8.594 - 8.655: 99.8591% ( 1) 00:18:14.938 8.960 - 9.021: 99.8652% ( 1) 00:18:14.938 9.143 - 9.204: 99.8713% ( 1) 00:18:14.938 11.825 - 11.886: 99.8775% ( 1) 00:18:14.938 3994.575 - 4025.783: 100.0000% ( 20) 00:18:14.938 00:18:14.938 Complete histogram 00:18:14.938 ================== 00:18:14.938 Range in us Cumulative Count 00:18:14.938 1.722 - 1.730: 0.0061% ( 1) 00:18:14.938 1.737 - 1.745: 0.0184% ( 2) 00:18:14.938 1.752 - 1.760: 0.0306% ( 2) 00:18:14.938 1.760 - 1.768: 0.3982% ( 60) 00:18:14.938 1.768 - 1.775: 3.9701% ( 583) 00:18:14.938 1.775 - 1.783: 17.3569% ( 2185) 00:18:14.938 1.783 - 1.790: 34.8119% ( 2849) 00:18:14.938 1.790 - 1.798: 45.3866% ( 1726) 00:18:14.938 1.798 - 1.806: 49.2035% ( 623) 00:18:14.938 1.806 - 1.813: 51.5072% ( 376) 00:18:14.938 1.813 - 1.821: 53.8169% ( 377) 00:18:14.938 1.821 - 1.829: 60.5502% ( 1099) 00:18:14.938 1.829 - 1.836: 73.7226% ( 2150) 00:18:14.938 1.836 - 1.844: 85.8473% ( 1979) 00:18:14.938 1.844 - 1.851: 91.8147% ( 974) 00:18:14.938 1.851 - 1.859: 94.5472% ( 446) 00:18:14.938 1.859 - 1.867: 96.3362% ( 292) 00:18:14.938 1.867 - 1.874: 97.4635% ( 184) 00:18:14.938 1.874 - 1.882: 97.9108% ( 73) 00:18:14.938 1.882 - 1.890: 98.1252% ( 35) 00:18:14.938 1.890 - 1.897: 98.3397% ( 35) 00:18:14.939 1.897 - 1.905: 98.6092% ( 44) 00:18:14.939 1.905 - 1.912: 98.7930% ( 30) 00:18:14.939 1.912 - 1.920: 99.0259% ( 38) 00:18:14.939 1.920 - 1.928: 99.1668% ( 23) 00:18:14.939 1.928 - 1.935: 99.2280% ( 10) 00:18:14.939 1.935 - 1.943: 99.2525% ( 4) 00:18:14.939 1.943 - 1.950: 99.2648% ( 2) 00:18:14.939 1.950 - 1.966: 99.3016% ( 6) 00:18:14.939 1.996 - 2.011: 99.3077% ( 1) 00:18:14.939 2.011 - 2.027: 99.3199% ( 2) 00:18:14.939 2.027 - 2.042: 99.3261% ( 1) 00:18:14.939 2.072 - 2.088: 99.3322% ( 1) 00:18:14.939 2.286 - 2.301: 99.3383% ( 1) 00:18:14.939 3.657 - 3.672: 99.3444% ( 1) 00:18:14.939 3.764 - 3.779: 99.3506% ( 1) 00:18:14.939 3.855 - 3.870: 99.3628% ( 2) 00:18:14.939 3.901 - 3.931: 99.3689% ( 1) 00:18:14.939 3.931 - 3.962: 99.3751% ( 1) 00:18:14.939 3.962 - 3.992: 99.3812% ( 1) 00:18:14.939 4.023 - 4.053: 99.3873% ( 1) 00:18:14.939 4.206 - 4.236: 99.3935% ( 1) 00:18:14.939 4.267 - 4.297: 99.4057% ( 2) 00:18:14.939 4.510 - 4.541: 99.4118% ( 1) 00:18:14.939 4.632 - 4.663: 99.4180% ( 1) 00:18:14.939 4.846 - 4.876: 99.4241% ( 1) 00:18:14.939 5.211 - 5.242: 99.4302% ( 1) 00:18:14.939 5.912 - 5.943: 99.4363% ( 1) 00:18:14.939 6.278 - 6.309: 99.4425% ( 1) 00:18:14.939 6.461 - 6.491: 99.4486% ( 1) 00:18:14.939 6.522 - 6.552: 99.4609% ( 2) 00:18:14.939 6.888 - 6.918: 99.4670% ( 1) 00:18:14.939 6.918 - 6.949: 99.4731% ( 1) 00:18:14.939 7.406 - 7.436: 99.4792% ( 1) 00:18:14.939 7.497 - 7.528: 99.4854% ( 1) 00:18:14.939 7.771 - 7.802: 99.4915% ( 1) 00:18:14.939 8.472 - 8.533: 99.4976% ( 1) 00:18:14.939 11.581 - 11.642: 99.5037% ( 1) 00:18:14.939 11.947 - 12.008: 99.5099% ( 1) 00:18:14.939 12.922 - 12.983: 99.5160% ( 1) 00:18:14.939 38.522 - 38.766: 99.5221% ( 1) 00:18:14.939 161.890 - 162.865: 99.5282% ( 1) 00:18:14.939 3620.084 - 3635.688: 99.5344% ( 1) 00:18:14.939 3994.575 - 4025.783: 100.0000% ( 76) 
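The two histograms above come from SPDK's overhead example, which times each I/O's submit and complete paths independently: the summary line gives avg/min/max in nanoseconds, while the buckets list cumulative percentages per microsecond range, with per-bucket counts in parentheses. A minimal sketch of the invocation against the first vfio-user endpoint, assuming the build-tree layout used in this job (the SPDK shell variable is illustrative, and -H is presumed to be what enables the histogram output seen here):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# 4 KiB I/Os (-o 4096) for one second (-t 1) against the vfio-user1 controller;
# the remaining flags are carried over unchanged from the command recorded above.
$SPDK/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 \
    -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'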
00:18:14.939 00:18:14.939 05:34:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:18:14.939 05:34:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:18:14.939 05:34:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:18:14.939 05:34:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:18:14.939 05:34:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:14.939 [ 00:18:14.939 { 00:18:14.939 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:14.939 "subtype": "Discovery", 00:18:14.939 "listen_addresses": [], 00:18:14.939 "allow_any_host": true, 00:18:14.939 "hosts": [] 00:18:14.939 }, 00:18:14.939 { 00:18:14.939 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:14.939 "subtype": "NVMe", 00:18:14.939 "listen_addresses": [ 00:18:14.939 { 00:18:14.939 "trtype": "VFIOUSER", 00:18:14.939 "adrfam": "IPv4", 00:18:14.939 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:14.939 "trsvcid": "0" 00:18:14.939 } 00:18:14.939 ], 00:18:14.939 "allow_any_host": true, 00:18:14.939 "hosts": [], 00:18:14.939 "serial_number": "SPDK1", 00:18:14.939 "model_number": "SPDK bdev Controller", 00:18:14.939 "max_namespaces": 32, 00:18:14.939 "min_cntlid": 1, 00:18:14.939 "max_cntlid": 65519, 00:18:14.939 "namespaces": [ 00:18:14.939 { 00:18:14.939 "nsid": 1, 00:18:14.939 "bdev_name": "Malloc1", 00:18:14.939 "name": "Malloc1", 00:18:14.939 "nguid": "4820BDD694BC40BD8F2368E7E35AE8AD", 00:18:14.939 "uuid": "4820bdd6-94bc-40bd-8f23-68e7e35ae8ad" 00:18:14.939 } 00:18:14.939 ] 00:18:14.939 }, 00:18:14.939 { 00:18:14.939 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:14.939 "subtype": "NVMe", 00:18:14.939 "listen_addresses": [ 00:18:14.939 { 00:18:14.939 "trtype": "VFIOUSER", 00:18:14.939 "adrfam": "IPv4", 00:18:14.939 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:14.939 "trsvcid": "0" 00:18:14.939 } 00:18:14.939 ], 00:18:14.939 "allow_any_host": true, 00:18:14.939 "hosts": [], 00:18:14.939 "serial_number": "SPDK2", 00:18:14.939 "model_number": "SPDK bdev Controller", 00:18:14.939 "max_namespaces": 32, 00:18:14.939 "min_cntlid": 1, 00:18:14.939 "max_cntlid": 65519, 00:18:14.939 "namespaces": [ 00:18:14.939 { 00:18:14.939 "nsid": 1, 00:18:14.939 "bdev_name": "Malloc2", 00:18:14.939 "name": "Malloc2", 00:18:14.939 "nguid": "C2433792DE614E7490B1468E0E22FAEA", 00:18:14.939 "uuid": "c2433792-de61-4e74-90b1-468e0e22faea" 00:18:14.939 } 00:18:14.939 ] 00:18:14.939 } 00:18:14.939 ] 00:18:14.939 05:34:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:18:14.939 05:34:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=300255 00:18:14.939 05:34:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:18:14.939 05:34:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:18:14.939 05:34:14 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:18:14.939 05:34:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:14.939 05:34:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:18:14.939 05:34:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # i=1 00:18:14.939 05:34:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1273 -- # sleep 0.1 00:18:15.266 05:34:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:15.266 05:34:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:18:15.266 05:34:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # i=2 00:18:15.266 05:34:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1273 -- # sleep 0.1 00:18:15.266 [2024-12-13 05:34:15.035867] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:15.266 05:34:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:15.266 05:34:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:15.266 05:34:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:18:15.266 05:34:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:18:15.266 05:34:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:18:15.543 Malloc3 00:18:15.543 05:34:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:18:15.543 [2024-12-13 05:34:15.471013] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:15.543 05:34:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:15.543 Asynchronous Event Request test 00:18:15.543 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:15.543 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:15.543 Registering asynchronous event callbacks... 00:18:15.543 Starting namespace attribute notice tests for all controllers... 00:18:15.543 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:18:15.543 aer_cb - Changed Namespace 00:18:15.543 Cleaning up... 
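The block above is the namespace-attribute AER check: test/nvme/aer/aer subscribes to asynchronous events on cnode1 and signals readiness through /tmp/aer_touch_file, the harness then hot-adds a second namespace over JSON-RPC, and the "aer_cb for log page 4, aen_event_type: 0x02" line confirms the controller delivered the Changed Namespace List notice. A minimal sketch of the RPC sequence, reusing the exact commands recorded above (only the SPDK path variable is added for readability):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# Create a 64 MB malloc bdev with 512-byte blocks to serve as the new namespace
$SPDK/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3
# Attach it to the subsystem as NSID 2; attached controllers should raise the namespace AEN
$SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2
# Confirm the namespace now shows up in the subsystem listing (see the JSON that follows)
$SPDK/scripts/rpc.py nvmf_get_subsystems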
00:18:15.808 [ 00:18:15.808 { 00:18:15.808 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:15.808 "subtype": "Discovery", 00:18:15.808 "listen_addresses": [], 00:18:15.808 "allow_any_host": true, 00:18:15.808 "hosts": [] 00:18:15.808 }, 00:18:15.808 { 00:18:15.808 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:15.808 "subtype": "NVMe", 00:18:15.808 "listen_addresses": [ 00:18:15.809 { 00:18:15.809 "trtype": "VFIOUSER", 00:18:15.809 "adrfam": "IPv4", 00:18:15.809 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:15.809 "trsvcid": "0" 00:18:15.809 } 00:18:15.809 ], 00:18:15.809 "allow_any_host": true, 00:18:15.809 "hosts": [], 00:18:15.809 "serial_number": "SPDK1", 00:18:15.809 "model_number": "SPDK bdev Controller", 00:18:15.809 "max_namespaces": 32, 00:18:15.809 "min_cntlid": 1, 00:18:15.809 "max_cntlid": 65519, 00:18:15.809 "namespaces": [ 00:18:15.809 { 00:18:15.809 "nsid": 1, 00:18:15.809 "bdev_name": "Malloc1", 00:18:15.809 "name": "Malloc1", 00:18:15.809 "nguid": "4820BDD694BC40BD8F2368E7E35AE8AD", 00:18:15.809 "uuid": "4820bdd6-94bc-40bd-8f23-68e7e35ae8ad" 00:18:15.809 }, 00:18:15.809 { 00:18:15.809 "nsid": 2, 00:18:15.809 "bdev_name": "Malloc3", 00:18:15.809 "name": "Malloc3", 00:18:15.809 "nguid": "8FBB09051194415AB12CFA8C273DB026", 00:18:15.809 "uuid": "8fbb0905-1194-415a-b12c-fa8c273db026" 00:18:15.809 } 00:18:15.809 ] 00:18:15.809 }, 00:18:15.809 { 00:18:15.809 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:15.809 "subtype": "NVMe", 00:18:15.809 "listen_addresses": [ 00:18:15.809 { 00:18:15.809 "trtype": "VFIOUSER", 00:18:15.809 "adrfam": "IPv4", 00:18:15.809 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:15.809 "trsvcid": "0" 00:18:15.809 } 00:18:15.809 ], 00:18:15.809 "allow_any_host": true, 00:18:15.809 "hosts": [], 00:18:15.809 "serial_number": "SPDK2", 00:18:15.809 "model_number": "SPDK bdev Controller", 00:18:15.809 "max_namespaces": 32, 00:18:15.809 "min_cntlid": 1, 00:18:15.809 "max_cntlid": 65519, 00:18:15.809 "namespaces": [ 00:18:15.809 { 00:18:15.809 "nsid": 1, 00:18:15.809 "bdev_name": "Malloc2", 00:18:15.809 "name": "Malloc2", 00:18:15.809 "nguid": "C2433792DE614E7490B1468E0E22FAEA", 00:18:15.809 "uuid": "c2433792-de61-4e74-90b1-468e0e22faea" 00:18:15.809 } 00:18:15.809 ] 00:18:15.809 } 00:18:15.809 ] 00:18:15.809 05:34:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 300255 00:18:15.809 05:34:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:15.809 05:34:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:18:15.809 05:34:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:18:15.809 05:34:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:18:15.809 [2024-12-13 05:34:15.704558] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:18:15.809 [2024-12-13 05:34:15.704585] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid300276 ] 00:18:15.809 [2024-12-13 05:34:15.742658] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:18:15.809 [2024-12-13 05:34:15.747894] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:15.809 [2024-12-13 05:34:15.747916] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f9b80f58000 00:18:15.809 [2024-12-13 05:34:15.748891] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:15.809 [2024-12-13 05:34:15.749895] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:15.809 [2024-12-13 05:34:15.750903] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:15.809 [2024-12-13 05:34:15.751909] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:15.809 [2024-12-13 05:34:15.752918] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:15.809 [2024-12-13 05:34:15.753923] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:15.809 [2024-12-13 05:34:15.754929] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:15.809 [2024-12-13 05:34:15.755937] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:15.809 [2024-12-13 05:34:15.756945] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:15.809 [2024-12-13 05:34:15.756955] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f9b7fc61000 00:18:15.809 [2024-12-13 05:34:15.757996] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:18:15.809 [2024-12-13 05:34:15.768195] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:18:15.809 [2024-12-13 05:34:15.768218] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:18:15.809 [2024-12-13 05:34:15.773326] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:18:15.809 [2024-12-13 05:34:15.773363] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:18:15.809 [2024-12-13 05:34:15.773436] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:18:15.809 
[2024-12-13 05:34:15.773455] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:18:15.809 [2024-12-13 05:34:15.773460] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:18:15.809 [2024-12-13 05:34:15.774321] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:18:15.809 [2024-12-13 05:34:15.774331] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:18:15.809 [2024-12-13 05:34:15.774337] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:18:15.809 [2024-12-13 05:34:15.775329] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:18:15.809 [2024-12-13 05:34:15.775337] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:18:15.809 [2024-12-13 05:34:15.775344] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:18:15.809 [2024-12-13 05:34:15.776340] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:18:15.809 [2024-12-13 05:34:15.776349] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:18:15.809 [2024-12-13 05:34:15.777350] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:18:15.809 [2024-12-13 05:34:15.777358] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 00:18:15.809 [2024-12-13 05:34:15.777365] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:18:15.809 [2024-12-13 05:34:15.777371] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:18:15.809 [2024-12-13 05:34:15.777478] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:18:15.809 [2024-12-13 05:34:15.777482] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:18:15.809 [2024-12-13 05:34:15.777487] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:18:15.809 [2024-12-13 05:34:15.778358] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:18:15.809 [2024-12-13 05:34:15.779365] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:18:15.809 [2024-12-13 05:34:15.780370] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: 
*DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:18:15.809 [2024-12-13 05:34:15.781376] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:15.809 [2024-12-13 05:34:15.781414] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:18:15.809 [2024-12-13 05:34:15.782389] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:18:15.809 [2024-12-13 05:34:15.782398] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:18:15.809 [2024-12-13 05:34:15.782402] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:18:15.809 [2024-12-13 05:34:15.782419] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:18:15.809 [2024-12-13 05:34:15.782426] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:18:15.809 [2024-12-13 05:34:15.782435] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:15.809 [2024-12-13 05:34:15.782439] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:15.809 [2024-12-13 05:34:15.782443] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:15.809 [2024-12-13 05:34:15.782461] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:15.809 [2024-12-13 05:34:15.790458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:18:15.809 [2024-12-13 05:34:15.790470] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:18:15.809 [2024-12-13 05:34:15.790474] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:18:15.809 [2024-12-13 05:34:15.790478] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:18:15.810 [2024-12-13 05:34:15.790482] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:18:15.810 [2024-12-13 05:34:15.790486] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:18:15.810 [2024-12-13 05:34:15.790493] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:18:15.810 [2024-12-13 05:34:15.790497] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:18:15.810 [2024-12-13 05:34:15.790506] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:18:15.810 [2024-12-13 
05:34:15.790517] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:18:15.810 [2024-12-13 05:34:15.798454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:18:15.810 [2024-12-13 05:34:15.798466] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:18:15.810 [2024-12-13 05:34:15.798473] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:18:15.810 [2024-12-13 05:34:15.798481] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:18:15.810 [2024-12-13 05:34:15.798488] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:18:15.810 [2024-12-13 05:34:15.798492] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:18:15.810 [2024-12-13 05:34:15.798502] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:18:15.810 [2024-12-13 05:34:15.798510] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:18:15.810 [2024-12-13 05:34:15.806455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:18:15.810 [2024-12-13 05:34:15.806463] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:18:15.810 [2024-12-13 05:34:15.806467] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:18:15.810 [2024-12-13 05:34:15.806473] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:18:15.810 [2024-12-13 05:34:15.806478] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:18:15.810 [2024-12-13 05:34:15.806486] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:15.810 [2024-12-13 05:34:15.814454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:18:15.810 [2024-12-13 05:34:15.814504] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:18:15.810 [2024-12-13 05:34:15.814515] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:18:15.810 [2024-12-13 05:34:15.814521] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:18:15.810 [2024-12-13 05:34:15.814525] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 
0x2000002f9000 00:18:15.810 [2024-12-13 05:34:15.814528] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:15.810 [2024-12-13 05:34:15.814534] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:18:16.082 [2024-12-13 05:34:15.822456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:18:16.082 [2024-12-13 05:34:15.822466] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:18:16.082 [2024-12-13 05:34:15.822477] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:18:16.082 [2024-12-13 05:34:15.822483] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:18:16.082 [2024-12-13 05:34:15.822489] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:16.082 [2024-12-13 05:34:15.822493] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:16.082 [2024-12-13 05:34:15.822496] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:16.082 [2024-12-13 05:34:15.822502] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:16.082 [2024-12-13 05:34:15.830455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:18:16.082 [2024-12-13 05:34:15.830469] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:18:16.082 [2024-12-13 05:34:15.830476] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:18:16.082 [2024-12-13 05:34:15.830482] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:16.082 [2024-12-13 05:34:15.830486] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:16.082 [2024-12-13 05:34:15.830489] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:16.082 [2024-12-13 05:34:15.830495] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:16.082 [2024-12-13 05:34:15.838455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:18:16.082 [2024-12-13 05:34:15.838465] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:18:16.082 [2024-12-13 05:34:15.838471] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:18:16.082 [2024-12-13 05:34:15.838478] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set 
supported features (timeout 30000 ms) 00:18:16.082 [2024-12-13 05:34:15.838483] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:18:16.082 [2024-12-13 05:34:15.838487] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:18:16.082 [2024-12-13 05:34:15.838492] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:18:16.082 [2024-12-13 05:34:15.838496] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:18:16.082 [2024-12-13 05:34:15.838500] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:18:16.082 [2024-12-13 05:34:15.838504] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:18:16.082 [2024-12-13 05:34:15.838521] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:18:16.082 [2024-12-13 05:34:15.846455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:18:16.082 [2024-12-13 05:34:15.846468] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:18:16.082 [2024-12-13 05:34:15.854454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:18:16.082 [2024-12-13 05:34:15.854468] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:18:16.082 [2024-12-13 05:34:15.862453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:18:16.082 [2024-12-13 05:34:15.862465] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:16.082 [2024-12-13 05:34:15.870455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:18:16.082 [2024-12-13 05:34:15.870473] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:18:16.082 [2024-12-13 05:34:15.870477] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:18:16.082 [2024-12-13 05:34:15.870480] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:18:16.082 [2024-12-13 05:34:15.870483] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:18:16.082 [2024-12-13 05:34:15.870486] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:18:16.082 [2024-12-13 05:34:15.870492] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:18:16.082 [2024-12-13 05:34:15.870498] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:18:16.082 
[2024-12-13 05:34:15.870502] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:18:16.082 [2024-12-13 05:34:15.870505] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:16.082 [2024-12-13 05:34:15.870510] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:18:16.082 [2024-12-13 05:34:15.870516] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:18:16.082 [2024-12-13 05:34:15.870520] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:16.082 [2024-12-13 05:34:15.870523] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:16.082 [2024-12-13 05:34:15.870528] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:16.082 [2024-12-13 05:34:15.870534] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:18:16.082 [2024-12-13 05:34:15.870538] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:18:16.082 [2024-12-13 05:34:15.870541] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:16.082 [2024-12-13 05:34:15.870546] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:18:16.082 [2024-12-13 05:34:15.878455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:18:16.082 [2024-12-13 05:34:15.878468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:18:16.082 [2024-12-13 05:34:15.878478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:18:16.082 [2024-12-13 05:34:15.878486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:18:16.082 ===================================================== 00:18:16.082 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:16.082 ===================================================== 00:18:16.082 Controller Capabilities/Features 00:18:16.082 ================================ 00:18:16.082 Vendor ID: 4e58 00:18:16.082 Subsystem Vendor ID: 4e58 00:18:16.082 Serial Number: SPDK2 00:18:16.082 Model Number: SPDK bdev Controller 00:18:16.082 Firmware Version: 25.01 00:18:16.082 Recommended Arb Burst: 6 00:18:16.082 IEEE OUI Identifier: 8d 6b 50 00:18:16.082 Multi-path I/O 00:18:16.082 May have multiple subsystem ports: Yes 00:18:16.082 May have multiple controllers: Yes 00:18:16.082 Associated with SR-IOV VF: No 00:18:16.082 Max Data Transfer Size: 131072 00:18:16.082 Max Number of Namespaces: 32 00:18:16.082 Max Number of I/O Queues: 127 00:18:16.082 NVMe Specification Version (VS): 1.3 00:18:16.082 NVMe Specification Version (Identify): 1.3 00:18:16.082 Maximum Queue Entries: 256 00:18:16.082 Contiguous Queues Required: Yes 00:18:16.082 Arbitration Mechanisms Supported 00:18:16.082 Weighted Round Robin: Not Supported 00:18:16.082 Vendor Specific: Not 
Supported 00:18:16.082 Reset Timeout: 15000 ms 00:18:16.082 Doorbell Stride: 4 bytes 00:18:16.082 NVM Subsystem Reset: Not Supported 00:18:16.082 Command Sets Supported 00:18:16.082 NVM Command Set: Supported 00:18:16.082 Boot Partition: Not Supported 00:18:16.082 Memory Page Size Minimum: 4096 bytes 00:18:16.082 Memory Page Size Maximum: 4096 bytes 00:18:16.082 Persistent Memory Region: Not Supported 00:18:16.082 Optional Asynchronous Events Supported 00:18:16.082 Namespace Attribute Notices: Supported 00:18:16.082 Firmware Activation Notices: Not Supported 00:18:16.082 ANA Change Notices: Not Supported 00:18:16.082 PLE Aggregate Log Change Notices: Not Supported 00:18:16.082 LBA Status Info Alert Notices: Not Supported 00:18:16.082 EGE Aggregate Log Change Notices: Not Supported 00:18:16.082 Normal NVM Subsystem Shutdown event: Not Supported 00:18:16.082 Zone Descriptor Change Notices: Not Supported 00:18:16.082 Discovery Log Change Notices: Not Supported 00:18:16.082 Controller Attributes 00:18:16.082 128-bit Host Identifier: Supported 00:18:16.082 Non-Operational Permissive Mode: Not Supported 00:18:16.082 NVM Sets: Not Supported 00:18:16.082 Read Recovery Levels: Not Supported 00:18:16.082 Endurance Groups: Not Supported 00:18:16.082 Predictable Latency Mode: Not Supported 00:18:16.082 Traffic Based Keep ALive: Not Supported 00:18:16.082 Namespace Granularity: Not Supported 00:18:16.082 SQ Associations: Not Supported 00:18:16.082 UUID List: Not Supported 00:18:16.082 Multi-Domain Subsystem: Not Supported 00:18:16.082 Fixed Capacity Management: Not Supported 00:18:16.082 Variable Capacity Management: Not Supported 00:18:16.082 Delete Endurance Group: Not Supported 00:18:16.082 Delete NVM Set: Not Supported 00:18:16.083 Extended LBA Formats Supported: Not Supported 00:18:16.083 Flexible Data Placement Supported: Not Supported 00:18:16.083 00:18:16.083 Controller Memory Buffer Support 00:18:16.083 ================================ 00:18:16.083 Supported: No 00:18:16.083 00:18:16.083 Persistent Memory Region Support 00:18:16.083 ================================ 00:18:16.083 Supported: No 00:18:16.083 00:18:16.083 Admin Command Set Attributes 00:18:16.083 ============================ 00:18:16.083 Security Send/Receive: Not Supported 00:18:16.083 Format NVM: Not Supported 00:18:16.083 Firmware Activate/Download: Not Supported 00:18:16.083 Namespace Management: Not Supported 00:18:16.083 Device Self-Test: Not Supported 00:18:16.083 Directives: Not Supported 00:18:16.083 NVMe-MI: Not Supported 00:18:16.083 Virtualization Management: Not Supported 00:18:16.083 Doorbell Buffer Config: Not Supported 00:18:16.083 Get LBA Status Capability: Not Supported 00:18:16.083 Command & Feature Lockdown Capability: Not Supported 00:18:16.083 Abort Command Limit: 4 00:18:16.083 Async Event Request Limit: 4 00:18:16.083 Number of Firmware Slots: N/A 00:18:16.083 Firmware Slot 1 Read-Only: N/A 00:18:16.083 Firmware Activation Without Reset: N/A 00:18:16.083 Multiple Update Detection Support: N/A 00:18:16.083 Firmware Update Granularity: No Information Provided 00:18:16.083 Per-Namespace SMART Log: No 00:18:16.083 Asymmetric Namespace Access Log Page: Not Supported 00:18:16.083 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:18:16.083 Command Effects Log Page: Supported 00:18:16.083 Get Log Page Extended Data: Supported 00:18:16.083 Telemetry Log Pages: Not Supported 00:18:16.083 Persistent Event Log Pages: Not Supported 00:18:16.083 Supported Log Pages Log Page: May Support 00:18:16.083 Commands Supported & 
Effects Log Page: Not Supported 00:18:16.083 Feature Identifiers & Effects Log Page:May Support 00:18:16.083 NVMe-MI Commands & Effects Log Page: May Support 00:18:16.083 Data Area 4 for Telemetry Log: Not Supported 00:18:16.083 Error Log Page Entries Supported: 128 00:18:16.083 Keep Alive: Supported 00:18:16.083 Keep Alive Granularity: 10000 ms 00:18:16.083 00:18:16.083 NVM Command Set Attributes 00:18:16.083 ========================== 00:18:16.083 Submission Queue Entry Size 00:18:16.083 Max: 64 00:18:16.083 Min: 64 00:18:16.083 Completion Queue Entry Size 00:18:16.083 Max: 16 00:18:16.083 Min: 16 00:18:16.083 Number of Namespaces: 32 00:18:16.083 Compare Command: Supported 00:18:16.083 Write Uncorrectable Command: Not Supported 00:18:16.083 Dataset Management Command: Supported 00:18:16.083 Write Zeroes Command: Supported 00:18:16.083 Set Features Save Field: Not Supported 00:18:16.083 Reservations: Not Supported 00:18:16.083 Timestamp: Not Supported 00:18:16.083 Copy: Supported 00:18:16.083 Volatile Write Cache: Present 00:18:16.083 Atomic Write Unit (Normal): 1 00:18:16.083 Atomic Write Unit (PFail): 1 00:18:16.083 Atomic Compare & Write Unit: 1 00:18:16.083 Fused Compare & Write: Supported 00:18:16.083 Scatter-Gather List 00:18:16.083 SGL Command Set: Supported (Dword aligned) 00:18:16.083 SGL Keyed: Not Supported 00:18:16.083 SGL Bit Bucket Descriptor: Not Supported 00:18:16.083 SGL Metadata Pointer: Not Supported 00:18:16.083 Oversized SGL: Not Supported 00:18:16.083 SGL Metadata Address: Not Supported 00:18:16.083 SGL Offset: Not Supported 00:18:16.083 Transport SGL Data Block: Not Supported 00:18:16.083 Replay Protected Memory Block: Not Supported 00:18:16.083 00:18:16.083 Firmware Slot Information 00:18:16.083 ========================= 00:18:16.083 Active slot: 1 00:18:16.083 Slot 1 Firmware Revision: 25.01 00:18:16.083 00:18:16.083 00:18:16.083 Commands Supported and Effects 00:18:16.083 ============================== 00:18:16.083 Admin Commands 00:18:16.083 -------------- 00:18:16.083 Get Log Page (02h): Supported 00:18:16.083 Identify (06h): Supported 00:18:16.083 Abort (08h): Supported 00:18:16.083 Set Features (09h): Supported 00:18:16.083 Get Features (0Ah): Supported 00:18:16.083 Asynchronous Event Request (0Ch): Supported 00:18:16.083 Keep Alive (18h): Supported 00:18:16.083 I/O Commands 00:18:16.083 ------------ 00:18:16.083 Flush (00h): Supported LBA-Change 00:18:16.083 Write (01h): Supported LBA-Change 00:18:16.083 Read (02h): Supported 00:18:16.083 Compare (05h): Supported 00:18:16.083 Write Zeroes (08h): Supported LBA-Change 00:18:16.083 Dataset Management (09h): Supported LBA-Change 00:18:16.083 Copy (19h): Supported LBA-Change 00:18:16.083 00:18:16.083 Error Log 00:18:16.083 ========= 00:18:16.083 00:18:16.083 Arbitration 00:18:16.083 =========== 00:18:16.083 Arbitration Burst: 1 00:18:16.083 00:18:16.083 Power Management 00:18:16.083 ================ 00:18:16.083 Number of Power States: 1 00:18:16.083 Current Power State: Power State #0 00:18:16.083 Power State #0: 00:18:16.083 Max Power: 0.00 W 00:18:16.083 Non-Operational State: Operational 00:18:16.083 Entry Latency: Not Reported 00:18:16.083 Exit Latency: Not Reported 00:18:16.083 Relative Read Throughput: 0 00:18:16.083 Relative Read Latency: 0 00:18:16.083 Relative Write Throughput: 0 00:18:16.083 Relative Write Latency: 0 00:18:16.083 Idle Power: Not Reported 00:18:16.083 Active Power: Not Reported 00:18:16.083 Non-Operational Permissive Mode: Not Supported 00:18:16.083 00:18:16.083 Health Information 
00:18:16.083 ================== 00:18:16.083 Critical Warnings: 00:18:16.083 Available Spare Space: OK 00:18:16.083 Temperature: OK 00:18:16.083 Device Reliability: OK 00:18:16.083 Read Only: No 00:18:16.083 Volatile Memory Backup: OK 00:18:16.083 Current Temperature: 0 Kelvin (-273 Celsius) 00:18:16.083 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:18:16.083 Available Spare: 0% 00:18:16.083 Available Sp[2024-12-13 05:34:15.878573] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:18:16.083 [2024-12-13 05:34:15.886456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:18:16.083 [2024-12-13 05:34:15.886488] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:18:16.083 [2024-12-13 05:34:15.886496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.083 [2024-12-13 05:34:15.886502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.083 [2024-12-13 05:34:15.886507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.083 [2024-12-13 05:34:15.886513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.083 [2024-12-13 05:34:15.886556] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:18:16.083 [2024-12-13 05:34:15.886567] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:18:16.083 [2024-12-13 05:34:15.887556] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:16.083 [2024-12-13 05:34:15.887598] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:18:16.083 [2024-12-13 05:34:15.887605] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:18:16.083 [2024-12-13 05:34:15.888562] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:18:16.083 [2024-12-13 05:34:15.888572] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:18:16.083 [2024-12-13 05:34:15.888628] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:18:16.083 [2024-12-13 05:34:15.889585] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:18:16.083 are Threshold: 0% 00:18:16.083 Life Percentage Used: 0% 00:18:16.083 Data Units Read: 0 00:18:16.083 Data Units Written: 0 00:18:16.083 Host Read Commands: 0 00:18:16.083 Host Write Commands: 0 00:18:16.083 Controller Busy Time: 0 minutes 00:18:16.083 Power Cycles: 0 00:18:16.083 Power On Hours: 0 hours 00:18:16.083 Unsafe Shutdowns: 0 00:18:16.083 Unrecoverable Media Errors: 0 00:18:16.083 Lifetime Error Log Entries: 0 00:18:16.083 Warning Temperature 
Time: 0 minutes 00:18:16.083 Critical Temperature Time: 0 minutes 00:18:16.083 00:18:16.083 Number of Queues 00:18:16.083 ================ 00:18:16.083 Number of I/O Submission Queues: 127 00:18:16.083 Number of I/O Completion Queues: 127 00:18:16.083 00:18:16.083 Active Namespaces 00:18:16.083 ================= 00:18:16.083 Namespace ID:1 00:18:16.083 Error Recovery Timeout: Unlimited 00:18:16.083 Command Set Identifier: NVM (00h) 00:18:16.083 Deallocate: Supported 00:18:16.083 Deallocated/Unwritten Error: Not Supported 00:18:16.083 Deallocated Read Value: Unknown 00:18:16.083 Deallocate in Write Zeroes: Not Supported 00:18:16.083 Deallocated Guard Field: 0xFFFF 00:18:16.083 Flush: Supported 00:18:16.083 Reservation: Supported 00:18:16.083 Namespace Sharing Capabilities: Multiple Controllers 00:18:16.083 Size (in LBAs): 131072 (0GiB) 00:18:16.083 Capacity (in LBAs): 131072 (0GiB) 00:18:16.083 Utilization (in LBAs): 131072 (0GiB) 00:18:16.083 NGUID: C2433792DE614E7490B1468E0E22FAEA 00:18:16.084 UUID: c2433792-de61-4e74-90b1-468e0e22faea 00:18:16.084 Thin Provisioning: Not Supported 00:18:16.084 Per-NS Atomic Units: Yes 00:18:16.084 Atomic Boundary Size (Normal): 0 00:18:16.084 Atomic Boundary Size (PFail): 0 00:18:16.084 Atomic Boundary Offset: 0 00:18:16.084 Maximum Single Source Range Length: 65535 00:18:16.084 Maximum Copy Length: 65535 00:18:16.084 Maximum Source Range Count: 1 00:18:16.084 NGUID/EUI64 Never Reused: No 00:18:16.084 Namespace Write Protected: No 00:18:16.084 Number of LBA Formats: 1 00:18:16.084 Current LBA Format: LBA Format #00 00:18:16.084 LBA Format #00: Data Size: 512 Metadata Size: 0 00:18:16.084 00:18:16.084 05:34:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:18:16.362 [2024-12-13 05:34:16.117786] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:21.749 Initializing NVMe Controllers 00:18:21.749 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:21.749 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:18:21.749 Initialization complete. Launching workers. 
00:18:21.749 ======================================================== 00:18:21.749 Latency(us) 00:18:21.749 Device Information : IOPS MiB/s Average min max 00:18:21.749 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39949.72 156.05 3203.87 981.43 8599.08 00:18:21.749 ======================================================== 00:18:21.749 Total : 39949.72 156.05 3203.87 981.43 8599.08 00:18:21.749 00:18:21.749 [2024-12-13 05:34:21.223722] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:21.749 05:34:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:18:21.749 [2024-12-13 05:34:21.461474] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:27.168 Initializing NVMe Controllers 00:18:27.168 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:27.168 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:18:27.168 Initialization complete. Launching workers. 00:18:27.168 ======================================================== 00:18:27.168 Latency(us) 00:18:27.168 Device Information : IOPS MiB/s Average min max 00:18:27.168 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39907.20 155.89 3207.67 978.14 10578.72 00:18:27.168 ======================================================== 00:18:27.168 Total : 39907.20 155.89 3207.67 978.14 10578.72 00:18:27.168 00:18:27.168 [2024-12-13 05:34:26.483502] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:27.168 05:34:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:18:27.168 [2024-12-13 05:34:26.686741] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:32.641 [2024-12-13 05:34:31.822545] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:32.641 Initializing NVMe Controllers 00:18:32.641 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:32.641 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:32.641 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:18:32.641 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:18:32.641 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:18:32.641 Initialization complete. Launching workers. 
00:18:32.641 Starting thread on core 2 00:18:32.641 Starting thread on core 3 00:18:32.641 Starting thread on core 1 00:18:32.641 05:34:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:18:32.641 [2024-12-13 05:34:32.116848] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:35.306 [2024-12-13 05:34:35.192356] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:35.306 Initializing NVMe Controllers 00:18:35.306 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:35.306 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:35.306 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:18:35.306 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:18:35.306 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:18:35.306 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:18:35.306 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:18:35.306 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:18:35.306 Initialization complete. Launching workers. 00:18:35.306 Starting thread on core 1 with urgent priority queue 00:18:35.306 Starting thread on core 2 with urgent priority queue 00:18:35.306 Starting thread on core 3 with urgent priority queue 00:18:35.306 Starting thread on core 0 with urgent priority queue 00:18:35.306 SPDK bdev Controller (SPDK2 ) core 0: 8961.67 IO/s 11.16 secs/100000 ios 00:18:35.306 SPDK bdev Controller (SPDK2 ) core 1: 7318.33 IO/s 13.66 secs/100000 ios 00:18:35.306 SPDK bdev Controller (SPDK2 ) core 2: 7483.67 IO/s 13.36 secs/100000 ios 00:18:35.306 SPDK bdev Controller (SPDK2 ) core 3: 10054.00 IO/s 9.95 secs/100000 ios 00:18:35.306 ======================================================== 00:18:35.306 00:18:35.306 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:18:35.597 [2024-12-13 05:34:35.483944] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:35.597 Initializing NVMe Controllers 00:18:35.597 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:35.597 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:35.597 Namespace ID: 1 size: 0GB 00:18:35.597 Initialization complete. 00:18:35.597 INFO: using host memory buffer for IO 00:18:35.597 Hello world! 
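[Annotation, not part of the captured log: the runs above (perf read, perf write, reconnect, arbitration, hello_world) all reach the target through the same transport-ID string passed via -r, where trtype selects the VFIOUSER transport, traddr points at the per-controller socket directory, and subnqn names the subsystem. A minimal standalone sketch of the '-w read' run, assuming you are sitting in an SPDK build tree (the relative path is illustrative):

  # Mirror the '-w read' run from the trace above.
  # -q = queue depth, -o = I/O size in bytes, -w = workload, -t = run time
  # in seconds, -c = core mask; -s/-g are kept as the test passed them.
  TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'
  ./build/bin/spdk_nvme_perf -r "$TRID" -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2

Swapping -w read for -w write reproduces the second run; the reconnect and arbitration binaries take the same -r string.]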
00:18:35.597 [2024-12-13 05:34:35.496027] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:35.597 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:18:35.866 [2024-12-13 05:34:35.772466] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:36.861 Initializing NVMe Controllers 00:18:36.861 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:36.861 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:36.861 Initialization complete. Launching workers. 00:18:36.861 submit (in ns) avg, min, max = 7303.2, 3120.0, 4029292.4 00:18:36.861 complete (in ns) avg, min, max = 20890.5, 1720.0, 4003614.3 00:18:36.861 00:18:36.861 Submit histogram 00:18:36.861 ================ 00:18:36.861 Range in us Cumulative Count 00:18:36.861 3.109 - 3.124: 0.0061% ( 1) 00:18:36.861 3.124 - 3.139: 0.0738% ( 11) 00:18:36.861 3.139 - 3.154: 0.2890% ( 35) 00:18:36.861 3.154 - 3.170: 0.5596% ( 44) 00:18:36.861 3.170 - 3.185: 0.7871% ( 37) 00:18:36.861 3.185 - 3.200: 1.3713% ( 95) 00:18:36.861 3.200 - 3.215: 3.0992% ( 281) 00:18:36.861 3.215 - 3.230: 7.9449% ( 788) 00:18:36.861 3.230 - 3.246: 13.6884% ( 934) 00:18:36.861 3.246 - 3.261: 19.4195% ( 932) 00:18:36.861 3.261 - 3.276: 26.6634% ( 1178) 00:18:36.861 3.276 - 3.291: 33.8212% ( 1164) 00:18:36.861 3.291 - 3.307: 39.8659% ( 983) 00:18:36.861 3.307 - 3.322: 45.5233% ( 920) 00:18:36.861 3.322 - 3.337: 49.9570% ( 721) 00:18:36.861 3.337 - 3.352: 54.2061% ( 691) 00:18:36.861 3.352 - 3.368: 58.2893% ( 664) 00:18:36.861 3.368 - 3.383: 66.2834% ( 1300) 00:18:36.861 3.383 - 3.398: 71.9407% ( 920) 00:18:36.861 3.398 - 3.413: 76.5466% ( 749) 00:18:36.861 3.413 - 3.429: 81.5521% ( 814) 00:18:36.861 3.429 - 3.444: 84.9158% ( 547) 00:18:36.861 3.444 - 3.459: 87.1234% ( 359) 00:18:36.861 3.459 - 3.474: 88.1072% ( 160) 00:18:36.861 3.474 - 3.490: 88.7037% ( 97) 00:18:36.861 3.490 - 3.505: 89.0419% ( 55) 00:18:36.861 3.505 - 3.520: 89.5216% ( 78) 00:18:36.861 3.520 - 3.535: 90.2656% ( 121) 00:18:36.861 3.535 - 3.550: 91.0343% ( 125) 00:18:36.861 3.550 - 3.566: 91.9137% ( 143) 00:18:36.861 3.566 - 3.581: 92.7992% ( 144) 00:18:36.861 3.581 - 3.596: 93.5432% ( 121) 00:18:36.861 3.596 - 3.611: 94.3242% ( 127) 00:18:36.861 3.611 - 3.627: 95.0744% ( 122) 00:18:36.861 3.627 - 3.642: 95.9599% ( 144) 00:18:36.861 3.642 - 3.657: 96.8270% ( 141) 00:18:36.861 3.657 - 3.672: 97.5833% ( 123) 00:18:36.861 3.672 - 3.688: 98.0630% ( 78) 00:18:36.861 3.688 - 3.703: 98.4073% ( 56) 00:18:36.861 3.703 - 3.718: 98.7701% ( 59) 00:18:36.861 3.718 - 3.733: 98.9854% ( 35) 00:18:36.861 3.733 - 3.749: 99.1760% ( 31) 00:18:36.861 3.749 - 3.764: 99.3605% ( 30) 00:18:36.861 3.764 - 3.779: 99.4589% ( 16) 00:18:36.861 3.779 - 3.794: 99.5265% ( 11) 00:18:36.861 3.794 - 3.810: 99.5695% ( 7) 00:18:36.861 3.810 - 3.825: 99.5757% ( 1) 00:18:36.861 3.825 - 3.840: 99.5880% ( 2) 00:18:36.861 3.840 - 3.855: 99.6126% ( 4) 00:18:36.861 3.886 - 3.901: 99.6187% ( 1) 00:18:36.861 3.931 - 3.962: 99.6249% ( 1) 00:18:36.861 5.150 - 5.181: 99.6310% ( 1) 00:18:36.861 5.211 - 5.242: 99.6372% ( 1) 00:18:36.861 5.394 - 5.425: 99.6433% ( 1) 00:18:36.861 5.486 - 5.516: 99.6495% ( 1) 00:18:36.861 5.790 - 5.821: 99.6556% ( 1) 00:18:36.861 5.912 - 5.943: 99.6618% ( 1) 00:18:36.861 
5.943 - 5.973: 99.6679% ( 1) 00:18:36.861 6.004 - 6.034: 99.6741% ( 1) 00:18:36.861 6.065 - 6.095: 99.6802% ( 1) 00:18:36.861 6.095 - 6.126: 99.6864% ( 1) 00:18:36.861 6.126 - 6.156: 99.6925% ( 1) 00:18:36.861 6.248 - 6.278: 99.6987% ( 1) 00:18:36.861 6.278 - 6.309: 99.7048% ( 1) 00:18:36.861 6.309 - 6.339: 99.7110% ( 1) 00:18:36.861 6.339 - 6.370: 99.7294% ( 3) 00:18:36.861 6.674 - 6.705: 99.7356% ( 1) 00:18:36.861 6.766 - 6.796: 99.7417% ( 1) 00:18:36.861 6.796 - 6.827: 99.7479% ( 1) 00:18:36.861 6.827 - 6.857: 99.7540% ( 1) 00:18:36.861 6.979 - 7.010: 99.7663% ( 2) 00:18:36.861 7.010 - 7.040: 99.7725% ( 1) 00:18:36.861 7.040 - 7.070: 99.7786% ( 1) 00:18:36.861 7.223 - 7.253: 99.7848% ( 1) 00:18:36.861 7.314 - 7.345: 99.7971% ( 2) 00:18:36.861 7.375 - 7.406: 99.8032% ( 1) 00:18:36.861 7.710 - 7.741: 99.8094% ( 1) 00:18:36.861 7.863 - 7.924: 99.8155% ( 1) 00:18:36.861 7.924 - 7.985: 99.8217% ( 1) 00:18:37.145 [2024-12-13 05:34:36.875468] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:37.145 7.985 - 8.046: 99.8278% ( 1) 00:18:37.145 8.168 - 8.229: 99.8340% ( 1) 00:18:37.145 8.350 - 8.411: 99.8401% ( 1) 00:18:37.145 8.655 - 8.716: 99.8463% ( 1) 00:18:37.145 8.838 - 8.899: 99.8524% ( 1) 00:18:37.145 9.326 - 9.387: 99.8586% ( 1) 00:18:37.145 9.935 - 9.996: 99.8647% ( 1) 00:18:37.145 13.653 - 13.714: 99.8709% ( 1) 00:18:37.145 14.811 - 14.872: 99.8770% ( 1) 00:18:37.145 19.139 - 19.261: 99.8893% ( 2) 00:18:37.145 19.261 - 19.383: 99.8955% ( 1) 00:18:37.145 19.383 - 19.505: 99.9016% ( 1) 00:18:37.145 3994.575 - 4025.783: 99.9939% ( 15) 00:18:37.145 4025.783 - 4056.990: 100.0000% ( 1) 00:18:37.145 00:18:37.145 Complete histogram 00:18:37.145 ================== 00:18:37.145 Range in us Cumulative Count 00:18:37.145 1.714 - 1.722: 0.0184% ( 3) 00:18:37.145 1.722 - 1.730: 0.1599% ( 23) 00:18:37.145 1.730 - 1.737: 0.7256% ( 92) 00:18:37.145 1.737 - 1.745: 1.3528% ( 102) 00:18:37.145 1.745 - 1.752: 2.0846% ( 119) 00:18:37.145 1.752 - 1.760: 2.8225% ( 120) 00:18:37.145 1.760 - 1.768: 3.2653% ( 72) 00:18:37.145 1.768 - 1.775: 4.0524% ( 128) 00:18:37.145 1.775 - 1.783: 8.0925% ( 657) 00:18:37.145 1.783 - 1.790: 20.3849% ( 1999) 00:18:37.145 1.790 - 1.798: 34.8420% ( 2351) 00:18:37.145 1.798 - 1.806: 48.5918% ( 2236) 00:18:37.145 1.806 - 1.813: 60.9212% ( 2005) 00:18:37.145 1.813 - 1.821: 69.7639% ( 1438) 00:18:37.145 1.821 - 1.829: 76.5035% ( 1096) 00:18:37.145 1.829 - 1.836: 82.2962% ( 942) 00:18:37.145 1.836 - 1.844: 87.5231% ( 850) 00:18:37.145 1.844 - 1.851: 92.0244% ( 732) 00:18:37.145 1.851 - 1.859: 94.7915% ( 450) 00:18:37.145 1.859 - 1.867: 96.0890% ( 211) 00:18:37.145 1.867 - 1.874: 96.7409% ( 106) 00:18:37.145 1.874 - 1.882: 97.3374% ( 97) 00:18:37.145 1.882 - 1.890: 97.7248% ( 63) 00:18:37.145 1.890 - 1.897: 98.0753% ( 57) 00:18:37.145 1.897 - 1.905: 98.3520% ( 45) 00:18:37.145 1.905 - 1.912: 98.6410% ( 47) 00:18:37.145 1.912 - 1.920: 98.8685% ( 37) 00:18:37.145 1.920 - 1.928: 99.1022% ( 38) 00:18:37.145 1.928 - 1.935: 99.1944% ( 15) 00:18:37.145 1.935 - 1.943: 99.2436% ( 8) 00:18:37.145 1.943 - 1.950: 99.2682% ( 4) 00:18:37.145 1.950 - 1.966: 99.2928% ( 4) 00:18:37.145 1.966 - 1.981: 99.3051% ( 2) 00:18:37.145 1.996 - 2.011: 99.3236% ( 3) 00:18:37.145 2.011 - 2.027: 99.3359% ( 2) 00:18:37.145 2.072 - 2.088: 99.3420% ( 1) 00:18:37.145 2.118 - 2.133: 99.3482% ( 1) 00:18:37.145 2.194 - 2.210: 99.3605% ( 2) 00:18:37.145 2.331 - 2.347: 99.3666% ( 1) 00:18:37.145 3.368 - 3.383: 99.3728% ( 1) 00:18:37.145 4.510 - 4.541: 99.3789% ( 1) 
00:18:37.145 4.541 - 4.571: 99.3851% ( 1) 00:18:37.145 4.632 - 4.663: 99.3912% ( 1) 00:18:37.145 4.846 - 4.876: 99.3974% ( 1) 00:18:37.145 4.876 - 4.907: 99.4035% ( 1) 00:18:37.145 5.455 - 5.486: 99.4097% ( 1) 00:18:37.145 5.699 - 5.730: 99.4158% ( 1) 00:18:37.145 6.004 - 6.034: 99.4220% ( 1) 00:18:37.145 6.156 - 6.187: 99.4281% ( 1) 00:18:37.145 6.187 - 6.217: 99.4343% ( 1) 00:18:37.145 6.248 - 6.278: 99.4404% ( 1) 00:18:37.145 6.278 - 6.309: 99.4466% ( 1) 00:18:37.145 6.430 - 6.461: 99.4527% ( 1) 00:18:37.145 6.552 - 6.583: 99.4712% ( 3) 00:18:37.145 6.674 - 6.705: 99.4773% ( 1) 00:18:37.145 6.735 - 6.766: 99.4835% ( 1) 00:18:37.145 7.497 - 7.528: 99.4896% ( 1) 00:18:37.145 7.924 - 7.985: 99.4958% ( 1) 00:18:37.145 8.838 - 8.899: 99.5019% ( 1) 00:18:37.145 12.312 - 12.373: 99.5081% ( 1) 00:18:37.145 13.653 - 13.714: 99.5142% ( 1) 00:18:37.146 17.676 - 17.798: 99.5204% ( 1) 00:18:37.146 3011.535 - 3027.139: 99.5265% ( 1) 00:18:37.146 3479.650 - 3495.253: 99.5327% ( 1) 00:18:37.146 3994.575 - 4025.783: 100.0000% ( 76) 00:18:37.146 00:18:37.146 05:34:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:18:37.146 05:34:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:18:37.146 05:34:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:18:37.146 05:34:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:18:37.146 05:34:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:37.146 [ 00:18:37.146 { 00:18:37.146 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:37.146 "subtype": "Discovery", 00:18:37.146 "listen_addresses": [], 00:18:37.146 "allow_any_host": true, 00:18:37.146 "hosts": [] 00:18:37.146 }, 00:18:37.146 { 00:18:37.146 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:37.146 "subtype": "NVMe", 00:18:37.146 "listen_addresses": [ 00:18:37.146 { 00:18:37.146 "trtype": "VFIOUSER", 00:18:37.146 "adrfam": "IPv4", 00:18:37.146 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:37.146 "trsvcid": "0" 00:18:37.146 } 00:18:37.146 ], 00:18:37.146 "allow_any_host": true, 00:18:37.146 "hosts": [], 00:18:37.146 "serial_number": "SPDK1", 00:18:37.146 "model_number": "SPDK bdev Controller", 00:18:37.146 "max_namespaces": 32, 00:18:37.146 "min_cntlid": 1, 00:18:37.146 "max_cntlid": 65519, 00:18:37.146 "namespaces": [ 00:18:37.146 { 00:18:37.146 "nsid": 1, 00:18:37.146 "bdev_name": "Malloc1", 00:18:37.146 "name": "Malloc1", 00:18:37.146 "nguid": "4820BDD694BC40BD8F2368E7E35AE8AD", 00:18:37.146 "uuid": "4820bdd6-94bc-40bd-8f23-68e7e35ae8ad" 00:18:37.146 }, 00:18:37.146 { 00:18:37.146 "nsid": 2, 00:18:37.146 "bdev_name": "Malloc3", 00:18:37.146 "name": "Malloc3", 00:18:37.146 "nguid": "8FBB09051194415AB12CFA8C273DB026", 00:18:37.146 "uuid": "8fbb0905-1194-415a-b12c-fa8c273db026" 00:18:37.146 } 00:18:37.146 ] 00:18:37.146 }, 00:18:37.146 { 00:18:37.146 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:37.146 "subtype": "NVMe", 00:18:37.146 "listen_addresses": [ 00:18:37.146 { 00:18:37.146 "trtype": "VFIOUSER", 00:18:37.146 "adrfam": "IPv4", 00:18:37.146 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:37.146 "trsvcid": "0" 00:18:37.146 } 00:18:37.146 ], 
00:18:37.146 "allow_any_host": true, 00:18:37.146 "hosts": [], 00:18:37.146 "serial_number": "SPDK2", 00:18:37.146 "model_number": "SPDK bdev Controller", 00:18:37.146 "max_namespaces": 32, 00:18:37.146 "min_cntlid": 1, 00:18:37.146 "max_cntlid": 65519, 00:18:37.146 "namespaces": [ 00:18:37.146 { 00:18:37.146 "nsid": 1, 00:18:37.146 "bdev_name": "Malloc2", 00:18:37.146 "name": "Malloc2", 00:18:37.146 "nguid": "C2433792DE614E7490B1468E0E22FAEA", 00:18:37.146 "uuid": "c2433792-de61-4e74-90b1-468e0e22faea" 00:18:37.146 } 00:18:37.146 ] 00:18:37.146 } 00:18:37.146 ] 00:18:37.146 05:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:18:37.146 05:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=303888 00:18:37.146 05:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:18:37.146 05:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:18:37.146 05:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:18:37.146 05:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:37.146 05:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:18:37.146 05:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # i=1 00:18:37.146 05:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1273 -- # sleep 0.1 00:18:37.438 05:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:37.438 05:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:18:37.438 05:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # i=2 00:18:37.438 05:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1273 -- # sleep 0.1 00:18:37.438 [2024-12-13 05:34:37.270815] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:37.438 05:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:37.438 05:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:18:37.438 05:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:18:37.438 05:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:18:37.438 05:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:18:37.730 Malloc4 00:18:37.730 05:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:18:37.730 [2024-12-13 05:34:37.710201] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:37.730 05:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:38.021 Asynchronous Event Request test 00:18:38.021 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:38.021 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:38.021 Registering asynchronous event callbacks... 00:18:38.021 Starting namespace attribute notice tests for all controllers... 00:18:38.021 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:18:38.021 aer_cb - Changed Namespace 00:18:38.021 Cleaning up... 00:18:38.021 [ 00:18:38.021 { 00:18:38.021 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:38.021 "subtype": "Discovery", 00:18:38.021 "listen_addresses": [], 00:18:38.021 "allow_any_host": true, 00:18:38.021 "hosts": [] 00:18:38.021 }, 00:18:38.021 { 00:18:38.021 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:38.021 "subtype": "NVMe", 00:18:38.021 "listen_addresses": [ 00:18:38.021 { 00:18:38.021 "trtype": "VFIOUSER", 00:18:38.021 "adrfam": "IPv4", 00:18:38.021 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:38.021 "trsvcid": "0" 00:18:38.021 } 00:18:38.021 ], 00:18:38.021 "allow_any_host": true, 00:18:38.021 "hosts": [], 00:18:38.021 "serial_number": "SPDK1", 00:18:38.021 "model_number": "SPDK bdev Controller", 00:18:38.021 "max_namespaces": 32, 00:18:38.021 "min_cntlid": 1, 00:18:38.021 "max_cntlid": 65519, 00:18:38.021 "namespaces": [ 00:18:38.021 { 00:18:38.021 "nsid": 1, 00:18:38.021 "bdev_name": "Malloc1", 00:18:38.021 "name": "Malloc1", 00:18:38.021 "nguid": "4820BDD694BC40BD8F2368E7E35AE8AD", 00:18:38.021 "uuid": "4820bdd6-94bc-40bd-8f23-68e7e35ae8ad" 00:18:38.021 }, 00:18:38.021 { 00:18:38.021 "nsid": 2, 00:18:38.021 "bdev_name": "Malloc3", 00:18:38.021 "name": "Malloc3", 00:18:38.021 "nguid": "8FBB09051194415AB12CFA8C273DB026", 00:18:38.021 "uuid": "8fbb0905-1194-415a-b12c-fa8c273db026" 00:18:38.021 } 00:18:38.021 ] 00:18:38.021 }, 00:18:38.021 { 00:18:38.021 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:38.021 "subtype": "NVMe", 00:18:38.021 "listen_addresses": [ 00:18:38.021 { 00:18:38.021 "trtype": "VFIOUSER", 00:18:38.021 "adrfam": "IPv4", 00:18:38.021 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:38.021 "trsvcid": "0" 00:18:38.021 } 00:18:38.021 ], 00:18:38.021 "allow_any_host": true, 00:18:38.021 "hosts": [], 00:18:38.021 "serial_number": "SPDK2", 00:18:38.021 "model_number": "SPDK bdev Controller", 00:18:38.021 "max_namespaces": 32, 00:18:38.021 "min_cntlid": 1, 00:18:38.021 "max_cntlid": 65519, 00:18:38.021 "namespaces": [ 00:18:38.021 
{ 00:18:38.021 "nsid": 1, 00:18:38.021 "bdev_name": "Malloc2", 00:18:38.021 "name": "Malloc2", 00:18:38.021 "nguid": "C2433792DE614E7490B1468E0E22FAEA", 00:18:38.021 "uuid": "c2433792-de61-4e74-90b1-468e0e22faea" 00:18:38.021 }, 00:18:38.021 { 00:18:38.021 "nsid": 2, 00:18:38.021 "bdev_name": "Malloc4", 00:18:38.021 "name": "Malloc4", 00:18:38.021 "nguid": "647E28371E734078996EEC5D9493CCDA", 00:18:38.021 "uuid": "647e2837-1e73-4078-996e-ec5d9493ccda" 00:18:38.021 } 00:18:38.021 ] 00:18:38.021 } 00:18:38.021 ] 00:18:38.021 05:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 303888 00:18:38.021 05:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:18:38.021 05:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 296194 00:18:38.021 05:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 296194 ']' 00:18:38.021 05:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 296194 00:18:38.022 05:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:18:38.022 05:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:38.022 05:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 296194 00:18:38.022 05:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:38.022 05:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:38.022 05:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 296194' 00:18:38.022 killing process with pid 296194 00:18:38.022 05:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 296194 00:18:38.022 05:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 296194 00:18:38.301 05:34:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:18:38.301 05:34:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:18:38.301 05:34:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:18:38.301 05:34:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:18:38.301 05:34:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:18:38.301 05:34:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=304125 00:18:38.301 05:34:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:18:38.301 05:34:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 304125' 00:18:38.301 Process pid: 304125 00:18:38.301 05:34:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:18:38.301 05:34:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # 
waitforlisten 304125 00:18:38.301 05:34:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 304125 ']' 00:18:38.301 05:34:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:38.301 05:34:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:38.301 05:34:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:38.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:38.301 05:34:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:38.301 05:34:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:18:38.301 [2024-12-13 05:34:38.276760] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:18:38.301 [2024-12-13 05:34:38.277652] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:18:38.301 [2024-12-13 05:34:38.277693] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:38.560 [2024-12-13 05:34:38.351465] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:38.560 [2024-12-13 05:34:38.372444] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:38.560 [2024-12-13 05:34:38.372502] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:38.560 [2024-12-13 05:34:38.372509] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:38.560 [2024-12-13 05:34:38.372515] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:38.560 [2024-12-13 05:34:38.372519] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:38.560 [2024-12-13 05:34:38.373996] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:18:38.560 [2024-12-13 05:34:38.374103] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:18:38.560 [2024-12-13 05:34:38.374221] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:18:38.560 [2024-12-13 05:34:38.374222] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:18:38.560 [2024-12-13 05:34:38.437798] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:18:38.560 [2024-12-13 05:34:38.438586] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:18:38.561 [2024-12-13 05:34:38.438862] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:18:38.561 [2024-12-13 05:34:38.439287] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:18:38.561 [2024-12-13 05:34:38.439319] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
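[Annotation, not part of the captured log: the bring-up that follows (timestamps 05:34:39 through 05:34:41) reduces to a short RPC sequence against the interrupt-mode target launched above; this is a condensed sketch paraphrased from the trace, with only device 1 shown and the rpc.py path shortened for readability:

  # Target was started as: nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode
  rpc.py nvmf_create_transport -t VFIOUSER -M -I   # transport flags exactly as the test passes them
  mkdir -p /var/run/vfio-user/domain/vfio-user1/1
  rpc.py bdev_malloc_create 64 512 -b Malloc1      # 64 MB ramdisk, 512 B blocks
  rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
  rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
  rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER \
      -a /var/run/vfio-user/domain/vfio-user1/1 -s 0

The same malloc/subsystem/namespace/listener steps repeat for Malloc2, cnode2 and the vfio-user2 directory.]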
00:18:38.561 05:34:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:38.561 05:34:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:18:38.561 05:34:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:18:39.498 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:18:39.757 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:18:39.757 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:18:39.757 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:39.757 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:18:39.757 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:18:40.016 Malloc1 00:18:40.016 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:18:40.276 05:34:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:18:40.535 05:34:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:18:40.535 05:34:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:40.535 05:34:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:18:40.535 05:34:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:18:40.793 Malloc2 00:18:40.794 05:34:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:18:41.053 05:34:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:18:41.312 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:18:41.312 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:18:41.312 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 304125 00:18:41.312 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@954 -- # '[' -z 304125 ']' 00:18:41.312 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 304125 00:18:41.312 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:18:41.312 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:41.312 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 304125 00:18:41.571 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:41.571 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:41.571 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 304125' 00:18:41.571 killing process with pid 304125 00:18:41.571 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 304125 00:18:41.571 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 304125 00:18:41.571 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:18:41.571 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:18:41.571 00:18:41.571 real 0m51.136s 00:18:41.571 user 3m17.899s 00:18:41.571 sys 0m3.321s 00:18:41.571 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:41.571 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:18:41.571 ************************************ 00:18:41.571 END TEST nvmf_vfio_user 00:18:41.571 ************************************ 00:18:41.831 05:34:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:18:41.831 05:34:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:41.831 05:34:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:41.831 05:34:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:41.831 ************************************ 00:18:41.831 START TEST nvmf_vfio_user_nvme_compliance 00:18:41.831 ************************************ 00:18:41.831 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:18:41.831 * Looking for test storage... 
00:18:41.831 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:18:41.832 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:41.832 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # lcov --version 00:18:41.832 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:41.832 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:41.832 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:41.832 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:41.832 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:41.832 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:18:41.832 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:18:41.832 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:18:41.832 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:18:41.832 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:18:41.832 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:18:41.832 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:18:41.832 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:41.832 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:18:41.832 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:18:41.832 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:41.832 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:41.832 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:18:41.832 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:18:41.832 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:41.832 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:18:41.832 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:18:41.832 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:18:41.832 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:18:41.832 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:41.832 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:18:41.832 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:18:41.832 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:41.832 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:41.832 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:18:41.832 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:41.832 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:41.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:41.832 --rc genhtml_branch_coverage=1 00:18:41.832 --rc genhtml_function_coverage=1 00:18:41.832 --rc genhtml_legend=1 00:18:41.832 --rc geninfo_all_blocks=1 00:18:41.832 --rc geninfo_unexecuted_blocks=1 00:18:41.832 00:18:41.832 ' 00:18:41.832 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:41.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:41.832 --rc genhtml_branch_coverage=1 00:18:41.832 --rc genhtml_function_coverage=1 00:18:41.832 --rc genhtml_legend=1 00:18:41.832 --rc geninfo_all_blocks=1 00:18:41.832 --rc geninfo_unexecuted_blocks=1 00:18:41.832 00:18:41.832 ' 00:18:41.832 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:41.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:41.832 --rc genhtml_branch_coverage=1 00:18:41.832 --rc genhtml_function_coverage=1 00:18:41.832 --rc genhtml_legend=1 00:18:41.832 --rc geninfo_all_blocks=1 00:18:41.832 --rc geninfo_unexecuted_blocks=1 00:18:41.832 00:18:41.832 ' 00:18:41.832 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:41.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:41.832 --rc genhtml_branch_coverage=1 00:18:41.832 --rc genhtml_function_coverage=1 00:18:41.832 --rc genhtml_legend=1 00:18:41.832 --rc geninfo_all_blocks=1 00:18:41.832 --rc 
geninfo_unexecuted_blocks=1 00:18:41.832 00:18:41.832 ' 00:18:41.832 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:41.832 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:18:41.832 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:41.832 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:41.832 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:41.832 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:41.832 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:41.832 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:41.832 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:41.832 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:41.832 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:41.832 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:41.832 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:18:41.832 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:18:41.832 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:41.832 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:41.832 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:41.832 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:41.832 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:41.832 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:18:41.832 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:41.832 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:41.832 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:41.832 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:41.832 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:41.833 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:41.833 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:18:41.833 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:41.833 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:18:41.833 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:41.833 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:41.833 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:41.833 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:41.833 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:18:41.833 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:41.833 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:41.833 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:41.833 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:41.833 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:41.833 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:41.833 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:41.833 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:18:41.833 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:18:41.833 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:18:42.092 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=304668 00:18:42.092 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 304668' 00:18:42.092 Process pid: 304668 00:18:42.092 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:18:42.092 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:18:42.092 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 304668 00:18:42.092 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 304668 ']' 00:18:42.092 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:42.092 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:42.092 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:42.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:42.092 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:42.092 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:42.092 [2024-12-13 05:34:41.897370] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
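[Annotation, not part of the captured log: the compliance harness below performs its own bring-up through rpc_cmd before launching the test binary. Condensed from the rpc_cmd calls that follow in the trace (a sketch, not the verbatim script):

  rpc.py nvmf_create_transport -t VFIOUSER
  mkdir -p /var/run/vfio-user
  rpc.py bdev_malloc_create 64 512 -b malloc0
  rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32   # -m 32 caps namespaces, cf. "max_namespaces": 32
  rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0
  ./test/nvme/compliance/nvme_compliance -g \
      -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0'

In the CUnit output that follows, each test is bracketed by the "enabling controller" / "disabling controller" notices, i.e. the controller is reset around every individual admin- or I/O-queue check.]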
00:18:42.092 [2024-12-13 05:34:41.897417] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:42.092 [2024-12-13 05:34:41.969023] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:42.092 [2024-12-13 05:34:41.990531] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:42.092 [2024-12-13 05:34:41.990570] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:42.092 [2024-12-13 05:34:41.990577] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:42.092 [2024-12-13 05:34:41.990582] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:42.092 [2024-12-13 05:34:41.990587] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:42.092 [2024-12-13 05:34:41.991794] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:18:42.092 [2024-12-13 05:34:41.991903] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:18:42.092 [2024-12-13 05:34:41.991905] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:18:42.092 05:34:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:42.092 05:34:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:18:42.093 05:34:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:18:43.471 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:18:43.471 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:18:43.471 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:18:43.471 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.471 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:43.471 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.471 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:18:43.471 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:18:43.471 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.471 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:43.471 malloc0 00:18:43.471 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.471 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:18:43.471 05:34:43 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.471 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:43.471 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.471 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:18:43.471 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.471 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:43.471 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.471 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:18:43.471 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.471 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:43.471 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.471 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:18:43.471 00:18:43.471 00:18:43.471 CUnit - A unit testing framework for C - Version 2.1-3 00:18:43.471 http://cunit.sourceforge.net/ 00:18:43.471 00:18:43.471 00:18:43.471 Suite: nvme_compliance 00:18:43.471 Test: admin_identify_ctrlr_verify_dptr ...[2024-12-13 05:34:43.337903] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:43.471 [2024-12-13 05:34:43.339253] vfio_user.c: 832:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:18:43.471 [2024-12-13 05:34:43.339269] vfio_user.c:5544:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:18:43.471 [2024-12-13 05:34:43.339275] vfio_user.c:5637:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:18:43.471 [2024-12-13 05:34:43.340928] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:43.471 passed 00:18:43.471 Test: admin_identify_ctrlr_verify_fused ...[2024-12-13 05:34:43.417509] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:43.471 [2024-12-13 05:34:43.421534] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:43.471 passed 00:18:43.731 Test: admin_identify_ns ...[2024-12-13 05:34:43.500723] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:43.731 [2024-12-13 05:34:43.561461] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:18:43.731 [2024-12-13 05:34:43.569462] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:18:43.731 [2024-12-13 05:34:43.590556] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: 
disabling controller 00:18:43.731 passed 00:18:43.731 Test: admin_get_features_mandatory_features ...[2024-12-13 05:34:43.664390] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:43.731 [2024-12-13 05:34:43.669415] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:43.731 passed 00:18:43.990 Test: admin_get_features_optional_features ...[2024-12-13 05:34:43.747941] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:43.990 [2024-12-13 05:34:43.750966] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:43.990 passed 00:18:43.990 Test: admin_set_features_number_of_queues ...[2024-12-13 05:34:43.825670] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:43.990 [2024-12-13 05:34:43.934536] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:43.990 passed 00:18:44.249 Test: admin_get_log_page_mandatory_logs ...[2024-12-13 05:34:44.007203] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:44.249 [2024-12-13 05:34:44.010226] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:44.249 passed 00:18:44.249 Test: admin_get_log_page_with_lpo ...[2024-12-13 05:34:44.086871] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:44.249 [2024-12-13 05:34:44.155464] ctrlr.c:2700:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:18:44.249 [2024-12-13 05:34:44.168513] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:44.249 passed 00:18:44.249 Test: fabric_property_get ...[2024-12-13 05:34:44.245228] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:44.249 [2024-12-13 05:34:44.246474] vfio_user.c:5637:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:18:44.249 [2024-12-13 05:34:44.248254] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:44.508 passed 00:18:44.508 Test: admin_delete_io_sq_use_admin_qid ...[2024-12-13 05:34:44.323784] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:44.508 [2024-12-13 05:34:44.325021] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:18:44.508 [2024-12-13 05:34:44.327810] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:44.508 passed 00:18:44.508 Test: admin_delete_io_sq_delete_sq_twice ...[2024-12-13 05:34:44.403632] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:44.508 [2024-12-13 05:34:44.491455] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:18:44.508 [2024-12-13 05:34:44.507452] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:18:44.508 [2024-12-13 05:34:44.512522] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:44.767 passed 00:18:44.767 Test: admin_delete_io_cq_use_admin_qid ...[2024-12-13 05:34:44.585261] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:44.767 [2024-12-13 05:34:44.586502] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:18:44.767 [2024-12-13 05:34:44.590301] vfio_user.c:2835:disable_ctrlr: 
*NOTICE*: /var/run/vfio-user: disabling controller 00:18:44.767 passed 00:18:44.767 Test: admin_delete_io_cq_delete_cq_first ...[2024-12-13 05:34:44.665629] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:44.767 [2024-12-13 05:34:44.745463] vfio_user.c:2339:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:18:44.767 [2024-12-13 05:34:44.769460] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:18:44.767 [2024-12-13 05:34:44.774534] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:45.027 passed 00:18:45.027 Test: admin_create_io_cq_verify_iv_pc ...[2024-12-13 05:34:44.848408] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:45.027 [2024-12-13 05:34:44.849638] vfio_user.c:2178:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:18:45.027 [2024-12-13 05:34:44.849662] vfio_user.c:2172:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:18:45.027 [2024-12-13 05:34:44.851427] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:45.027 passed 00:18:45.027 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-12-13 05:34:44.928109] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:45.027 [2024-12-13 05:34:45.021457] vfio_user.c:2260:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:18:45.027 [2024-12-13 05:34:45.029459] vfio_user.c:2260:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:18:45.027 [2024-12-13 05:34:45.037473] vfio_user.c:2058:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:18:45.285 [2024-12-13 05:34:45.045457] vfio_user.c:2058:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:18:45.285 [2024-12-13 05:34:45.074546] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:45.285 passed 00:18:45.285 Test: admin_create_io_sq_verify_pc ...[2024-12-13 05:34:45.148364] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:45.285 [2024-12-13 05:34:45.163462] vfio_user.c:2071:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:18:45.285 [2024-12-13 05:34:45.181522] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:45.285 passed 00:18:45.285 Test: admin_create_io_qp_max_qps ...[2024-12-13 05:34:45.260041] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:46.663 [2024-12-13 05:34:46.351461] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:18:46.922 [2024-12-13 05:34:46.744386] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:46.922 passed 00:18:46.922 Test: admin_create_io_sq_shared_cq ...[2024-12-13 05:34:46.821643] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:47.182 [2024-12-13 05:34:46.953452] vfio_user.c:2339:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:18:47.182 [2024-12-13 05:34:46.990521] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:47.182 passed 00:18:47.182 00:18:47.182 Run Summary: Type Total Ran Passed Failed Inactive 00:18:47.182 suites 1 1 n/a 0 0 00:18:47.182 tests 18 18 18 0 0 00:18:47.182 asserts 
360 360 360 0 n/a 00:18:47.182 00:18:47.182 Elapsed time = 1.500 seconds 00:18:47.182 05:34:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 304668 00:18:47.182 05:34:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 304668 ']' 00:18:47.182 05:34:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 304668 00:18:47.182 05:34:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname 00:18:47.182 05:34:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:47.182 05:34:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 304668 00:18:47.182 05:34:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:47.182 05:34:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:47.182 05:34:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 304668' 00:18:47.182 killing process with pid 304668 00:18:47.182 05:34:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 304668 00:18:47.182 05:34:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 304668 00:18:47.442 05:34:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:18:47.442 05:34:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:18:47.442 00:18:47.442 real 0m5.627s 00:18:47.442 user 0m15.768s 00:18:47.442 sys 0m0.511s 00:18:47.442 05:34:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:47.442 05:34:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:47.442 ************************************ 00:18:47.442 END TEST nvmf_vfio_user_nvme_compliance 00:18:47.442 ************************************ 00:18:47.442 05:34:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:18:47.442 05:34:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:47.442 05:34:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:47.442 05:34:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:47.442 ************************************ 00:18:47.442 START TEST nvmf_vfio_user_fuzz 00:18:47.442 ************************************ 00:18:47.442 05:34:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:18:47.442 * Looking for test storage... 
00:18:47.442 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:47.442 05:34:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:47.442 05:34:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # lcov --version 00:18:47.442 05:34:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:47.702 05:34:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:47.702 05:34:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:47.702 05:34:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:47.702 05:34:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:47.702 05:34:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:18:47.702 05:34:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:18:47.702 05:34:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:18:47.703 05:34:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:18:47.703 05:34:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:18:47.703 05:34:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:18:47.703 05:34:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:18:47.703 05:34:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:47.703 05:34:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:18:47.703 05:34:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:18:47.703 05:34:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:47.703 05:34:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:47.703 05:34:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:18:47.703 05:34:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:18:47.703 05:34:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:47.703 05:34:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:18:47.703 05:34:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:18:47.703 05:34:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:18:47.703 05:34:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:18:47.703 05:34:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:47.703 05:34:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:18:47.703 05:34:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:18:47.703 05:34:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:47.703 05:34:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:47.703 05:34:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:18:47.703 05:34:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:47.703 05:34:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:47.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:47.703 --rc genhtml_branch_coverage=1 00:18:47.703 --rc genhtml_function_coverage=1 00:18:47.703 --rc genhtml_legend=1 00:18:47.703 --rc geninfo_all_blocks=1 00:18:47.703 --rc geninfo_unexecuted_blocks=1 00:18:47.703 00:18:47.703 ' 00:18:47.703 05:34:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:47.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:47.703 --rc genhtml_branch_coverage=1 00:18:47.703 --rc genhtml_function_coverage=1 00:18:47.703 --rc genhtml_legend=1 00:18:47.703 --rc geninfo_all_blocks=1 00:18:47.703 --rc geninfo_unexecuted_blocks=1 00:18:47.703 00:18:47.703 ' 00:18:47.703 05:34:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:47.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:47.703 --rc genhtml_branch_coverage=1 00:18:47.703 --rc genhtml_function_coverage=1 00:18:47.703 --rc genhtml_legend=1 00:18:47.703 --rc geninfo_all_blocks=1 00:18:47.703 --rc geninfo_unexecuted_blocks=1 00:18:47.703 00:18:47.703 ' 00:18:47.703 05:34:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:47.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:47.703 --rc genhtml_branch_coverage=1 00:18:47.703 --rc genhtml_function_coverage=1 00:18:47.703 --rc genhtml_legend=1 00:18:47.703 --rc geninfo_all_blocks=1 00:18:47.703 --rc geninfo_unexecuted_blocks=1 00:18:47.703 00:18:47.703 ' 00:18:47.703 05:34:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:47.703 05:34:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:18:47.703 05:34:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:47.703 05:34:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:47.703 05:34:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:47.703 05:34:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:47.703 05:34:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:47.703 05:34:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:47.703 05:34:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:47.703 05:34:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:47.703 05:34:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:47.703 05:34:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:47.703 05:34:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:18:47.703 05:34:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:18:47.703 05:34:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:47.703 05:34:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:47.703 05:34:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:47.703 05:34:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:47.703 05:34:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:47.703 05:34:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:18:47.703 05:34:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:47.703 05:34:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:47.703 05:34:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:47.703 05:34:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:47.703 05:34:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:47.703 05:34:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:47.703 05:34:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:18:47.703 05:34:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:47.703 05:34:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:18:47.703 05:34:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:47.703 05:34:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:47.703 05:34:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:47.703 05:34:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:47.703 05:34:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:47.703 05:34:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:18:47.703 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:47.703 05:34:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:47.703 05:34:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:47.704 05:34:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:47.704 05:34:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:18:47.704 05:34:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:18:47.704 05:34:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:18:47.704 05:34:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:18:47.704 05:34:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:18:47.704 05:34:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:18:47.704 05:34:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:18:47.704 05:34:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=305637 00:18:47.704 05:34:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 305637' 00:18:47.704 Process pid: 305637 00:18:47.704 05:34:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:18:47.704 05:34:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:18:47.704 05:34:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 305637 00:18:47.704 05:34:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 305637 ']' 00:18:47.704 05:34:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:47.704 05:34:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:47.704 05:34:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:47.704 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
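The "[: : integer expression expected" complaint earlier in this block is common.sh line 33 testing an empty variable with -eq; it is evidently non-fatal, since the run continues. With nvmf_tgt up and listening on /var/tmp/spdk.sock, vfio_user_fuzz.sh next provisions the vfio-user target over RPC. Condensed into plain shell from the xtrace that follows (every flag, path, and NQN is taken from the trace; treating rpc_cmd as a forwarder to scripts/rpc.py is an assumption based on SPDK convention):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t VFIOUSER            # enable the vfio-user transport
mkdir -p /var/run/vfio-user                       # directory backing the listener socket
$rpc bdev_malloc_create 64 512 -b malloc0         # 64 MiB RAM bdev, 512 B blocks
$rpc nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
$rpc nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
$rpc nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 \
    -t VFIOUSER -a /var/run/vfio-user -s 0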
00:18:47.704 05:34:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:47.704 05:34:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:47.963 05:34:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:47.963 05:34:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:18:47.963 05:34:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:18:48.900 05:34:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:18:48.901 05:34:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.901 05:34:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:48.901 05:34:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.901 05:34:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:18:48.901 05:34:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:18:48.901 05:34:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.901 05:34:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:48.901 malloc0 00:18:48.901 05:34:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.901 05:34:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:18:48.901 05:34:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.901 05:34:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:48.901 05:34:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.901 05:34:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:18:48.901 05:34:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.901 05:34:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:48.901 05:34:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.901 05:34:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:18:48.901 05:34:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.901 05:34:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:48.901 05:34:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.901 05:34:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
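The transport ID assembled in trid above is what the fuzzer dials. The nvme_fuzz invocation that follows, restated as a standalone command (flags copied verbatim from the trace; reading -t 30 as the run time in seconds and -S 123456 as a fixed RNG seed is an assumption, and -N/-a are left uninterpreted here):

trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user'
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz \
    -m 0x2 -t 30 -S 123456 -F "$trid" -N -a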
00:18:48.901 05:34:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:19:20.976 Fuzzing completed. Shutting down the fuzz application 00:19:20.976 00:19:20.976 Dumping successful admin opcodes: 00:19:20.976 9, 10, 00:19:20.976 Dumping successful io opcodes: 00:19:20.976 0, 00:19:20.976 NS: 0x20000081ef00 I/O qp, Total commands completed: 1153112, total successful commands: 4534, random_seed: 1015089216 00:19:20.976 NS: 0x20000081ef00 admin qp, Total commands completed: 282640, total successful commands: 65, random_seed: 979501632 00:19:20.976 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:19:20.976 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.976 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:20.976 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.976 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 305637 00:19:20.976 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 305637 ']' 00:19:20.976 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 305637 00:19:20.976 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname 00:19:20.976 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:20.976 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 305637 00:19:20.976 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:20.976 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:20.976 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 305637' 00:19:20.976 killing process with pid 305637 00:19:20.976 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 305637 00:19:20.976 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 305637 00:19:20.976 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:19:20.976 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:19:20.976 00:19:20.976 real 0m32.172s 00:19:20.976 user 0m34.553s 00:19:20.976 sys 0m26.561s 00:19:20.976 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:20.976 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:20.976 ************************************ 
00:19:20.976 END TEST nvmf_vfio_user_fuzz 00:19:20.976 ************************************ 00:19:20.976 05:35:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:20.976 05:35:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:20.976 05:35:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:20.976 05:35:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:20.976 ************************************ 00:19:20.976 START TEST nvmf_auth_target 00:19:20.976 ************************************ 00:19:20.976 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:20.976 * Looking for test storage... 00:19:20.976 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:20.976 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:20.976 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lcov --version 00:19:20.976 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:20.976 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:20.976 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:20.976 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:20.976 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:20.976 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:19:20.976 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:19:20.976 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:19:20.976 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:19:20.976 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:19:20.976 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:19:20.976 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:19:20.976 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:20.976 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:19:20.976 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:19:20.976 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:20.976 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:20.976 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:19:20.976 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:19:20.976 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:20.976 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:19:20.976 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:19:20.976 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:19:20.976 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:19:20.976 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:20.976 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:19:20.976 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:19:20.976 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:20.976 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:20.976 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:19:20.976 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:20.976 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:20.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:20.976 --rc genhtml_branch_coverage=1 00:19:20.976 --rc genhtml_function_coverage=1 00:19:20.976 --rc genhtml_legend=1 00:19:20.976 --rc geninfo_all_blocks=1 00:19:20.976 --rc geninfo_unexecuted_blocks=1 00:19:20.976 00:19:20.976 ' 00:19:20.976 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:20.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:20.976 --rc genhtml_branch_coverage=1 00:19:20.976 --rc genhtml_function_coverage=1 00:19:20.976 --rc genhtml_legend=1 00:19:20.976 --rc geninfo_all_blocks=1 00:19:20.976 --rc geninfo_unexecuted_blocks=1 00:19:20.976 00:19:20.976 ' 00:19:20.976 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:20.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:20.976 --rc genhtml_branch_coverage=1 00:19:20.976 --rc genhtml_function_coverage=1 00:19:20.976 --rc genhtml_legend=1 00:19:20.976 --rc geninfo_all_blocks=1 00:19:20.976 --rc geninfo_unexecuted_blocks=1 00:19:20.976 00:19:20.976 ' 00:19:20.976 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:20.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:20.976 --rc genhtml_branch_coverage=1 00:19:20.976 --rc genhtml_function_coverage=1 00:19:20.976 --rc genhtml_legend=1 00:19:20.976 --rc geninfo_all_blocks=1 00:19:20.976 --rc geninfo_unexecuted_blocks=1 00:19:20.976 00:19:20.976 ' 00:19:20.976 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:20.976 05:35:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:19:20.976 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:20.976 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:20.976 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:20.976 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:20.976 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:20.976 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:20.976 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:20.976 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:20.976 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:20.976 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:20.976 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:20.976 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:19:20.976 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:20.976 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:20.976 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:20.976 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:20.976 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:20.976 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:19:20.976 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:20.976 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:20.977 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:20.977 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:20.977 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:20.977 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:20.977 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:19:20.977 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:20.977 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:19:20.977 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:20.977 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:20.977 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:20.977 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:20.977 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:20.977 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:20.977 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:20.977 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:20.977 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:20.977 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:20.977 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:19:20.977 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # 
dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:19:20.977 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:19:20.977 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:20.977 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:19:20.977 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:19:20.977 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:19:20.977 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:19:20.977 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:20.977 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:20.977 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:20.977 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:20.977 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:20.977 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:20.977 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:20.977 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:20.977 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:20.977 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:20.977 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:19:20.977 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.252 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:26.252 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:19:26.252 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:26.252 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:26.252 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:26.252 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:26.252 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:26.252 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:19:26.252 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:26.252 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:19:26.252 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:19:26.252 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:19:26.252 
05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:19:26.252 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:19:26.252 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:19:26.253 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:26.253 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:26.253 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:26.253 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:26.253 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:26.253 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:26.253 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:26.253 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:26.253 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:26.253 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:26.253 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:26.253 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:26.253 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:26.253 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:26.253 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:26.253 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:26.253 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:26.253 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:26.253 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:26.253 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:19:26.253 Found 0000:af:00.0 (0x8086 - 0x159b) 00:19:26.253 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:26.253 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:26.253 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:26.253 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:26.253 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:26.253 05:35:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:26.253 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:19:26.253 Found 0000:af:00.1 (0x8086 - 0x159b) 00:19:26.253 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:26.253 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:26.253 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:26.253 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:26.253 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:26.253 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:26.253 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:26.253 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:26.253 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:26.253 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:26.253 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:26.253 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:26.253 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:26.253 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:26.253 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:26.253 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:19:26.253 Found net devices under 0000:af:00.0: cvl_0_0 00:19:26.253 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:26.253 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:26.253 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:26.253 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:26.253 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:26.253 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:26.253 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:26.253 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:26.253 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:19:26.253 Found net devices under 0000:af:00.1: cvl_0_1 00:19:26.253 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:19:26.253 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:26.253 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:19:26.253 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:26.253 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:26.253 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:26.253 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:26.253 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:26.253 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:26.253 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:26.253 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:26.253 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:26.253 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:26.253 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:26.253 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:26.253 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:26.253 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:26.253 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:26.253 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:26.253 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:26.253 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:26.253 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:26.253 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:26.253 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:26.253 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:26.253 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:26.253 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:26.253 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:26.253 05:35:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:26.253 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:26.253 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.365 ms 00:19:26.253 00:19:26.253 --- 10.0.0.2 ping statistics --- 00:19:26.253 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:26.253 rtt min/avg/max/mdev = 0.365/0.365/0.365/0.000 ms 00:19:26.253 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:26.253 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:26.253 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:19:26.253 00:19:26.253 --- 10.0.0.1 ping statistics --- 00:19:26.253 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:26.253 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:19:26.253 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:26.253 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:19:26.253 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:26.253 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:26.253 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:26.253 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:26.253 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:26.253 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:26.253 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:26.253 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:19:26.253 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:26.253 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:26.253 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.253 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=313939 00:19:26.254 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:19:26.254 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 313939 00:19:26.254 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 313939 ']' 00:19:26.254 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:26.254 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:26.254 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
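
nvmf_tcp_init (common.sh@250-291 above) wires the two ports into a self-contained NVMe/TCP link: cvl_0_0 moves into a fresh network namespace as the target side at 10.0.0.2, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, an iptables rule admits the listener port, and one ping in each direction proves reachability. Condensed from the trace, same names and addresses throughout:

ip netns add cvl_0_0_ns_spdk                        # target namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                  # root ns -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> initiator
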
00:19:26.254 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:26.254 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.254 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:26.254 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:19:26.254 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:26.254 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:26.254 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.254 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:26.254 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=313960 00:19:26.254 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:19:26.254 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:19:26.254 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:19:26.254 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:26.254 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:26.254 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:26.254 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:19:26.254 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:19:26.254 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:26.254 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=7c22a93760f111d047dbad3f4dbf173487e5d945530c84b0 00:19:26.254 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:19:26.254 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.rzN 00:19:26.254 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 7c22a93760f111d047dbad3f4dbf173487e5d945530c84b0 0 00:19:26.254 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 7c22a93760f111d047dbad3f4dbf173487e5d945530c84b0 0 00:19:26.254 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:26.254 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:26.254 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=7c22a93760f111d047dbad3f4dbf173487e5d945530c84b0 00:19:26.254 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:19:26.254 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 
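
Two SPDK processes then come up, both commands taken from the trace (workspace paths shortened here): nvmf_tgt runs inside the target namespace with -L nvmf_auth tracing the target half of the DH-HMAC-CHAP exchange (pid 313939 above), while a second spdk_tgt in the root namespace plays the host/initiator role on its own RPC socket (pid 313960):

ip netns exec cvl_0_0_ns_spdk \
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth &               # target side
./build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth &       # host side
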
00:19:26.254 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.rzN 00:19:26.254 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.rzN 00:19:26.254 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.rzN 00:19:26.254 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:19:26.254 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:26.254 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:26.254 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:26.254 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:19:26.254 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:19:26.254 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:26.254 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=995273e6f9305978d99159fd79faf0cda46c2116a9a8b59a2d561f71e1a1cc9d 00:19:26.254 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:19:26.254 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.iFR 00:19:26.254 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 995273e6f9305978d99159fd79faf0cda46c2116a9a8b59a2d561f71e1a1cc9d 3 00:19:26.254 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 995273e6f9305978d99159fd79faf0cda46c2116a9a8b59a2d561f71e1a1cc9d 3 00:19:26.254 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:26.254 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:26.254 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=995273e6f9305978d99159fd79faf0cda46c2116a9a8b59a2d561f71e1a1cc9d 00:19:26.254 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:19:26.254 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:26.254 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.iFR 00:19:26.254 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.iFR 00:19:26.254 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.iFR 00:19:26.254 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:19:26.254 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:26.254 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:26.254 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:26.254 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 
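
gen_dhchap_key (common.sh@751-760) draws len/2 random bytes as a hex string via xxd and wraps it as a DHHC-1 secret in a 0600 temp file. The body of the "python -" step is not captured in this log; the sketch below assumes the standard NVMe DH-HMAC-CHAP secret layout (base64 over the ASCII hex text plus a trailing little-endian CRC32, digest id 00-03), which is consistent with the secrets printed later in this run — the DHHC-1:00:N2MyMmE5... value used at the nvme connect step decodes back to the 7c22a937... key generated here:

gen_dhchap_key() {    # editor's sketch, not SPDK's verbatim helper
    local digest=$1 len=$2 key file
    declare -A id=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)    # len hex characters
    file=$(mktemp -t "spdk.key-$digest.XXX")
    python3 -c '
import base64, sys, zlib
k = sys.argv[1].encode()                     # the hex text itself is the secret
crc = zlib.crc32(k).to_bytes(4, "little")    # assumed: CRC32 appended per spec
print("DHHC-1:%02d:%s:" % (int(sys.argv[2]), base64.b64encode(k + crc).decode()))
' "$key" "${id[$digest]}" > "$file"
    chmod 0600 "$file"
    echo "$file"
}
gen_dhchap_key null 48    # writes a file like /tmp/spdk.key-null.rzN above
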
00:19:26.254 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:19:26.254 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:26.254 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=974ac7241ad9880dbdef47b68fd3148b 00:19:26.254 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:19:26.254 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.VYA 00:19:26.254 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 974ac7241ad9880dbdef47b68fd3148b 1 00:19:26.254 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 974ac7241ad9880dbdef47b68fd3148b 1 00:19:26.254 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:26.254 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:26.254 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=974ac7241ad9880dbdef47b68fd3148b 00:19:26.254 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:19:26.254 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:26.254 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.VYA 00:19:26.254 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.VYA 00:19:26.254 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.VYA 00:19:26.254 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:19:26.254 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:26.254 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:26.254 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:26.254 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:19:26.254 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:19:26.254 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:26.254 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=794f760774225079e1729d3ee5e08357a2f3211c876a6bd0 00:19:26.254 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:19:26.254 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.Djx 00:19:26.254 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 794f760774225079e1729d3ee5e08357a2f3211c876a6bd0 2 00:19:26.254 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 794f760774225079e1729d3ee5e08357a2f3211c876a6bd0 2 00:19:26.254 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:26.254 05:35:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:26.254 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=794f760774225079e1729d3ee5e08357a2f3211c876a6bd0 00:19:26.254 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:19:26.254 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:26.254 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.Djx 00:19:26.254 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.Djx 00:19:26.254 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.Djx 00:19:26.254 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:19:26.254 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:26.254 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:26.254 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:26.254 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:19:26.254 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:19:26.254 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:26.254 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=3311573d8d7a5fc6c2227b0909c2ca4e6967843410dd05c6 00:19:26.254 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:19:26.255 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.1WW 00:19:26.255 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 3311573d8d7a5fc6c2227b0909c2ca4e6967843410dd05c6 2 00:19:26.255 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 3311573d8d7a5fc6c2227b0909c2ca4e6967843410dd05c6 2 00:19:26.255 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:26.255 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:26.255 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=3311573d8d7a5fc6c2227b0909c2ca4e6967843410dd05c6 00:19:26.255 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:19:26.255 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:26.514 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.1WW 00:19:26.514 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.1WW 00:19:26.514 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.1WW 00:19:26.514 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:19:26.514 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 
00:19:26.514 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:26.514 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:26.514 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:19:26.514 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:19:26.514 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:26.514 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=a36631454a0e6fff198dad72d29197dd 00:19:26.514 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:19:26.514 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.gjU 00:19:26.514 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key a36631454a0e6fff198dad72d29197dd 1 00:19:26.514 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 a36631454a0e6fff198dad72d29197dd 1 00:19:26.514 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:26.514 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:26.514 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=a36631454a0e6fff198dad72d29197dd 00:19:26.514 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:19:26.514 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:26.514 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.gjU 00:19:26.514 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.gjU 00:19:26.514 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.gjU 00:19:26.514 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:19:26.514 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:26.514 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:26.514 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:26.514 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:19:26.514 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:19:26.514 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:26.514 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=c144b742a02c970c6ac1158e9502a28f842571ac79eeb3b334e5dbe2ca65fa84 00:19:26.514 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:19:26.514 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.Z19 00:19:26.514 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # 
format_dhchap_key c144b742a02c970c6ac1158e9502a28f842571ac79eeb3b334e5dbe2ca65fa84 3 00:19:26.514 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 c144b742a02c970c6ac1158e9502a28f842571ac79eeb3b334e5dbe2ca65fa84 3 00:19:26.514 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:26.514 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:26.514 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=c144b742a02c970c6ac1158e9502a28f842571ac79eeb3b334e5dbe2ca65fa84 00:19:26.514 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:19:26.514 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:26.514 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.Z19 00:19:26.514 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.Z19 00:19:26.514 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.Z19 00:19:26.514 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:19:26.514 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 313939 00:19:26.514 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 313939 ']' 00:19:26.514 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:26.514 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:26.514 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:26.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:26.514 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:26.514 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.773 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:26.773 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:19:26.773 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 313960 /var/tmp/host.sock 00:19:26.773 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 313960 ']' 00:19:26.773 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:19:26.773 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:26.773 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:19:26.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
00:19:26.773 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:26.773 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.032 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:27.032 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:19:27.032 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:19:27.032 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.032 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.032 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.032 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:27.032 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.rzN 00:19:27.032 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.032 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.032 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.032 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.rzN 00:19:27.032 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.rzN 00:19:27.291 05:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.iFR ]] 00:19:27.291 05:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.iFR 00:19:27.291 05:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.291 05:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.291 05:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.291 05:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.iFR 00:19:27.291 05:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.iFR 00:19:27.291 05:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:27.291 05:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.VYA 00:19:27.291 05:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.291 05:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.550 05:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.550 05:35:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.VYA 00:19:27.550 05:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.VYA 00:19:27.550 05:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.Djx ]] 00:19:27.550 05:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Djx 00:19:27.550 05:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.550 05:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.550 05:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.550 05:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Djx 00:19:27.550 05:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Djx 00:19:27.809 05:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:27.809 05:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.1WW 00:19:27.809 05:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.809 05:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.809 05:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.809 05:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.1WW 00:19:27.809 05:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.1WW 00:19:28.067 05:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.gjU ]] 00:19:28.067 05:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.gjU 00:19:28.067 05:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.067 05:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.068 05:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.068 05:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.gjU 00:19:28.068 05:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.gjU 00:19:28.326 05:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:28.326 05:35:28 
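
target/auth.sh@108-113 registers every key file twice under matching names — once on the target's default RPC socket and once on the host daemon's /var/tmp/host.sock — so both ends can later refer to the same named keyring entries (key0/ckey0, key1/ckey1, ...). One pair condensed from the trace; rpc.py stands in for both the rpc_cmd helper and the full workspace path:

rpc.py keyring_file_add_key key0 /tmp/spdk.key-null.rzN         # target side
rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.rzN
rpc.py keyring_file_add_key ckey0 /tmp/spdk.key-sha512.iFR      # ctrlr key
rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.iFR
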
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.Z19 00:19:28.326 05:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.326 05:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.326 05:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.326 05:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.Z19 00:19:28.326 05:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.Z19 00:19:28.326 05:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:19:28.326 05:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:19:28.326 05:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:28.326 05:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:28.326 05:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:28.326 05:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:28.585 05:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:19:28.585 05:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:28.585 05:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:28.585 05:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:28.585 05:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:28.585 05:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:28.585 05:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:28.585 05:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.585 05:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.585 05:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.585 05:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:28.585 05:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:28.585 
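
connect_authenticate (target/auth.sh@65-78) is the loop body that the rest of this run repeats for every digest/dhgroup/key combination. Distilled from this first sha256 + null-dhgroup + key0 pass, again with rpc.py standing in for the full invocations; the jq assertions are the ones executed at @73-@77:

host_nqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
# 1) choose digest and DH group on the host side
rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha256 --dhchap-dhgroups null
# 2) allow the host on the subsystem with a key pair (target side)
rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$host_nqn" \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0
# 3) attach through the host daemon, authenticating with the same keys
rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$host_nqn" -n nqn.2024-03.io.spdk:cnode0 \
    -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
# 4) verify what was negotiated on the resulting qpair, then detach
qpairs=$(rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == null ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
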
05:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:28.844 00:19:28.844 05:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:28.844 05:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:28.844 05:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:29.103 05:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:29.103 05:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:29.103 05:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.103 05:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.103 05:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.103 05:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:29.103 { 00:19:29.103 "cntlid": 1, 00:19:29.103 "qid": 0, 00:19:29.103 "state": "enabled", 00:19:29.103 "thread": "nvmf_tgt_poll_group_000", 00:19:29.103 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:29.103 "listen_address": { 00:19:29.103 "trtype": "TCP", 00:19:29.103 "adrfam": "IPv4", 00:19:29.103 "traddr": "10.0.0.2", 00:19:29.103 "trsvcid": "4420" 00:19:29.103 }, 00:19:29.103 "peer_address": { 00:19:29.103 "trtype": "TCP", 00:19:29.103 "adrfam": "IPv4", 00:19:29.103 "traddr": "10.0.0.1", 00:19:29.103 "trsvcid": "46670" 00:19:29.103 }, 00:19:29.103 "auth": { 00:19:29.103 "state": "completed", 00:19:29.103 "digest": "sha256", 00:19:29.103 "dhgroup": "null" 00:19:29.103 } 00:19:29.103 } 00:19:29.103 ]' 00:19:29.103 05:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:29.103 05:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:29.103 05:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:29.103 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:29.103 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:29.103 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:29.103 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:29.103 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:29.362 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:N2MyMmE5Mzc2MGYxMTFkMDQ3ZGJhZDNmNGRiZjE3MzQ4N2U1ZDk0NTUzMGM4NGIwp1HX7Q==: --dhchap-ctrl-secret DHHC-1:03:OTk1MjczZTZmOTMwNTk3OGQ5OTE1OWZkNzlmYWYwY2RhNDZjMjExNmE5YThiNTlhMmQ1NjFmNzFlMWExY2M5ZNuJBNw=: 00:19:29.362 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:N2MyMmE5Mzc2MGYxMTFkMDQ3ZGJhZDNmNGRiZjE3MzQ4N2U1ZDk0NTUzMGM4NGIwp1HX7Q==: --dhchap-ctrl-secret DHHC-1:03:OTk1MjczZTZmOTMwNTk3OGQ5OTE1OWZkNzlmYWYwY2RhNDZjMjExNmE5YThiNTlhMmQ1NjFmNzFlMWExY2M5ZNuJBNw=: 00:19:32.648 05:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:32.648 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:32.907 05:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:32.907 05:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.907 05:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.907 05:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.907 05:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:32.907 05:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:32.907 05:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:32.907 05:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:19:32.907 05:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:32.907 05:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:32.907 05:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:32.907 05:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:32.907 05:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:32.907 05:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:32.907 05:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.907 05:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.907 05:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.907 05:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:32.907 05:35:32 
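
After the SPDK-to-SPDK attach, @80-82 replays the same handshake with the kernel initiator: nvme-cli takes the DHHC-1 strings directly on the command line. The host secret is the same key0 material generated earlier — base64-decoding the N2MyMmE5... payload yields the 7c22a937... hex text. Secrets abbreviated here; all other arguments as in the trace:

nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -l 0 \
    -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 \
    --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 \
    --dhchap-secret 'DHHC-1:00:N2MyMmE5...' \
    --dhchap-ctrl-secret 'DHHC-1:03:OTk1Mjcz...'
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
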
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:32.907 05:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:33.166 00:19:33.166 05:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:33.166 05:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:33.166 05:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:33.425 05:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:33.425 05:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:33.425 05:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.425 05:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.425 05:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.425 05:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:33.425 { 00:19:33.425 "cntlid": 3, 00:19:33.425 "qid": 0, 00:19:33.425 "state": "enabled", 00:19:33.425 "thread": "nvmf_tgt_poll_group_000", 00:19:33.425 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:33.425 "listen_address": { 00:19:33.425 "trtype": "TCP", 00:19:33.425 "adrfam": "IPv4", 00:19:33.425 "traddr": "10.0.0.2", 00:19:33.425 "trsvcid": "4420" 00:19:33.425 }, 00:19:33.425 "peer_address": { 00:19:33.425 "trtype": "TCP", 00:19:33.425 "adrfam": "IPv4", 00:19:33.425 "traddr": "10.0.0.1", 00:19:33.425 "trsvcid": "46698" 00:19:33.425 }, 00:19:33.425 "auth": { 00:19:33.425 "state": "completed", 00:19:33.425 "digest": "sha256", 00:19:33.425 "dhgroup": "null" 00:19:33.425 } 00:19:33.425 } 00:19:33.425 ]' 00:19:33.425 05:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:33.425 05:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:33.425 05:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:33.425 05:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:33.425 05:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:33.425 05:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:33.684 05:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:33.684 05:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:33.684 05:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTc0YWM3MjQxYWQ5ODgwZGJkZWY0N2I2OGZkMzE0OGKT21qS: --dhchap-ctrl-secret DHHC-1:02:Nzk0Zjc2MDc3NDIyNTA3OWUxNzI5ZDNlZTVlMDgzNTdhMmYzMjExYzg3NmE2YmQw3CciCg==: 00:19:33.684 05:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OTc0YWM3MjQxYWQ5ODgwZGJkZWY0N2I2OGZkMzE0OGKT21qS: --dhchap-ctrl-secret DHHC-1:02:Nzk0Zjc2MDc3NDIyNTA3OWUxNzI5ZDNlZTVlMDgzNTdhMmYzMjExYzg3NmE2YmQw3CciCg==: 00:19:34.251 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:34.251 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:34.251 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:34.251 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.251 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.251 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.251 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:34.251 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:34.251 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:34.510 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:19:34.510 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:34.510 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:34.510 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:34.510 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:34.510 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:34.510 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:34.510 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.510 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.510 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.510 05:35:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:34.510 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:34.510 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:34.769 00:19:34.769 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:34.769 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:34.769 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:35.027 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:35.027 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:35.027 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.028 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.028 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.028 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:35.028 { 00:19:35.028 "cntlid": 5, 00:19:35.028 "qid": 0, 00:19:35.028 "state": "enabled", 00:19:35.028 "thread": "nvmf_tgt_poll_group_000", 00:19:35.028 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:35.028 "listen_address": { 00:19:35.028 "trtype": "TCP", 00:19:35.028 "adrfam": "IPv4", 00:19:35.028 "traddr": "10.0.0.2", 00:19:35.028 "trsvcid": "4420" 00:19:35.028 }, 00:19:35.028 "peer_address": { 00:19:35.028 "trtype": "TCP", 00:19:35.028 "adrfam": "IPv4", 00:19:35.028 "traddr": "10.0.0.1", 00:19:35.028 "trsvcid": "46712" 00:19:35.028 }, 00:19:35.028 "auth": { 00:19:35.028 "state": "completed", 00:19:35.028 "digest": "sha256", 00:19:35.028 "dhgroup": "null" 00:19:35.028 } 00:19:35.028 } 00:19:35.028 ]' 00:19:35.028 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:35.028 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:35.028 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:35.028 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:35.028 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:35.028 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:35.028 05:35:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:35.028 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:35.286 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzMxMTU3M2Q4ZDdhNWZjNmMyMjI3YjA5MDljMmNhNGU2OTY3ODQzNDEwZGQwNWM2zlgrgA==: --dhchap-ctrl-secret DHHC-1:01:YTM2NjMxNDU0YTBlNmZmZjE5OGRhZDcyZDI5MTk3ZGQ9linJ: 00:19:35.286 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MzMxMTU3M2Q4ZDdhNWZjNmMyMjI3YjA5MDljMmNhNGU2OTY3ODQzNDEwZGQwNWM2zlgrgA==: --dhchap-ctrl-secret DHHC-1:01:YTM2NjMxNDU0YTBlNmZmZjE5OGRhZDcyZDI5MTk3ZGQ9linJ: 00:19:35.852 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:35.852 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:35.852 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:35.852 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.852 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.852 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.852 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:35.852 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:35.852 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:36.110 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:19:36.110 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:36.110 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:36.110 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:36.110 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:36.110 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:36.110 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:19:36.110 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.110 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:36.110 05:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.110 05:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:36.110 05:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:36.110 05:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:36.369 00:19:36.369 05:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:36.369 05:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:36.369 05:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:36.628 05:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:36.628 05:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:36.628 05:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.628 05:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.628 05:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.628 05:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:36.628 { 00:19:36.628 "cntlid": 7, 00:19:36.628 "qid": 0, 00:19:36.628 "state": "enabled", 00:19:36.628 "thread": "nvmf_tgt_poll_group_000", 00:19:36.628 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:36.628 "listen_address": { 00:19:36.628 "trtype": "TCP", 00:19:36.628 "adrfam": "IPv4", 00:19:36.628 "traddr": "10.0.0.2", 00:19:36.628 "trsvcid": "4420" 00:19:36.628 }, 00:19:36.628 "peer_address": { 00:19:36.628 "trtype": "TCP", 00:19:36.628 "adrfam": "IPv4", 00:19:36.628 "traddr": "10.0.0.1", 00:19:36.628 "trsvcid": "46746" 00:19:36.628 }, 00:19:36.628 "auth": { 00:19:36.628 "state": "completed", 00:19:36.628 "digest": "sha256", 00:19:36.628 "dhgroup": "null" 00:19:36.628 } 00:19:36.628 } 00:19:36.628 ]' 00:19:36.628 05:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:36.628 05:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:36.628 05:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:36.628 05:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:36.628 05:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:36.628 05:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:36.628 05:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:36.628 05:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:36.887 05:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzE0NGI3NDJhMDJjOTcwYzZhYzExNThlOTUwMmEyOGY4NDI1NzFhYzc5ZWViM2IzMzRlNWRiZTJjYTY1ZmE4NBJ0s7I=: 00:19:36.887 05:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YzE0NGI3NDJhMDJjOTcwYzZhYzExNThlOTUwMmEyOGY4NDI1NzFhYzc5ZWViM2IzMzRlNWRiZTJjYTY1ZmE4NBJ0s7I=: 00:19:37.451 05:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:37.451 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:37.451 05:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:37.451 05:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.451 05:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.451 05:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.451 05:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:37.451 05:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:37.451 05:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:37.451 05:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:37.709 05:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:19:37.709 05:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:37.709 05:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:37.709 05:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:37.709 05:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:37.709 05:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:37.709 05:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:37.709 05:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.709 05:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.709 05:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.709 05:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:37.709 05:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:37.710 05:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:37.968 00:19:37.968 05:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:37.968 05:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:37.968 05:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:38.227 05:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:38.227 05:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:38.227 05:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.227 05:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.227 05:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.227 05:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:38.227 { 00:19:38.227 "cntlid": 9, 00:19:38.227 "qid": 0, 00:19:38.227 "state": "enabled", 00:19:38.227 "thread": "nvmf_tgt_poll_group_000", 00:19:38.227 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:38.227 "listen_address": { 00:19:38.227 "trtype": "TCP", 00:19:38.227 "adrfam": "IPv4", 00:19:38.227 "traddr": "10.0.0.2", 00:19:38.227 "trsvcid": "4420" 00:19:38.227 }, 00:19:38.227 "peer_address": { 00:19:38.227 "trtype": "TCP", 00:19:38.227 "adrfam": "IPv4", 00:19:38.227 "traddr": "10.0.0.1", 00:19:38.227 "trsvcid": "41274" 00:19:38.227 }, 00:19:38.227 "auth": { 00:19:38.227 "state": "completed", 00:19:38.227 "digest": "sha256", 00:19:38.227 "dhgroup": "ffdhe2048" 00:19:38.227 } 00:19:38.227 } 00:19:38.227 ]' 00:19:38.227 05:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:38.227 05:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:38.227 05:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:38.227 05:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == 
\f\f\d\h\e\2\0\4\8 ]] 00:19:38.227 05:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:38.227 05:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:38.227 05:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:38.227 05:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:38.486 05:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:N2MyMmE5Mzc2MGYxMTFkMDQ3ZGJhZDNmNGRiZjE3MzQ4N2U1ZDk0NTUzMGM4NGIwp1HX7Q==: --dhchap-ctrl-secret DHHC-1:03:OTk1MjczZTZmOTMwNTk3OGQ5OTE1OWZkNzlmYWYwY2RhNDZjMjExNmE5YThiNTlhMmQ1NjFmNzFlMWExY2M5ZNuJBNw=: 00:19:38.486 05:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:N2MyMmE5Mzc2MGYxMTFkMDQ3ZGJhZDNmNGRiZjE3MzQ4N2U1ZDk0NTUzMGM4NGIwp1HX7Q==: --dhchap-ctrl-secret DHHC-1:03:OTk1MjczZTZmOTMwNTk3OGQ5OTE1OWZkNzlmYWYwY2RhNDZjMjExNmE5YThiNTlhMmQ1NjFmNzFlMWExY2M5ZNuJBNw=: 00:19:39.053 05:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:39.053 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:39.053 05:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:39.053 05:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.053 05:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.053 05:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.053 05:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:39.053 05:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:39.053 05:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:39.312 05:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:19:39.312 05:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:39.312 05:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:39.312 05:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:39.312 05:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:39.312 05:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:39.312 05:35:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:39.312 05:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.312 05:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.312 05:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.312 05:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:39.312 05:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:39.312 05:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:39.571 00:19:39.571 05:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:39.571 05:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:39.571 05:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:39.830 05:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:39.830 05:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:39.830 05:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.830 05:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.830 05:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.830 05:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:39.830 { 00:19:39.830 "cntlid": 11, 00:19:39.830 "qid": 0, 00:19:39.830 "state": "enabled", 00:19:39.830 "thread": "nvmf_tgt_poll_group_000", 00:19:39.830 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:39.830 "listen_address": { 00:19:39.830 "trtype": "TCP", 00:19:39.830 "adrfam": "IPv4", 00:19:39.830 "traddr": "10.0.0.2", 00:19:39.830 "trsvcid": "4420" 00:19:39.830 }, 00:19:39.830 "peer_address": { 00:19:39.830 "trtype": "TCP", 00:19:39.830 "adrfam": "IPv4", 00:19:39.830 "traddr": "10.0.0.1", 00:19:39.830 "trsvcid": "41304" 00:19:39.830 }, 00:19:39.830 "auth": { 00:19:39.830 "state": "completed", 00:19:39.830 "digest": "sha256", 00:19:39.830 "dhgroup": "ffdhe2048" 00:19:39.830 } 00:19:39.830 } 00:19:39.830 ]' 00:19:39.830 05:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:39.830 05:35:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:39.830 05:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:39.830 05:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:39.830 05:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:39.830 05:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:39.830 05:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:39.830 05:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:40.088 05:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTc0YWM3MjQxYWQ5ODgwZGJkZWY0N2I2OGZkMzE0OGKT21qS: --dhchap-ctrl-secret DHHC-1:02:Nzk0Zjc2MDc3NDIyNTA3OWUxNzI5ZDNlZTVlMDgzNTdhMmYzMjExYzg3NmE2YmQw3CciCg==: 00:19:40.088 05:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OTc0YWM3MjQxYWQ5ODgwZGJkZWY0N2I2OGZkMzE0OGKT21qS: --dhchap-ctrl-secret DHHC-1:02:Nzk0Zjc2MDc3NDIyNTA3OWUxNzI5ZDNlZTVlMDgzNTdhMmYzMjExYzg3NmE2YmQw3CciCg==: 00:19:40.655 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:40.655 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:40.655 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:40.655 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.655 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.655 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.655 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:40.655 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:40.655 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:40.914 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:19:40.914 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:40.914 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:40.914 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:40.914 05:35:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:40.914 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:40.914 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:40.914 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.914 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.914 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.914 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:40.914 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:40.914 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:41.173 00:19:41.173 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:41.173 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:41.173 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:41.173 05:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:41.173 05:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:41.173 05:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.173 05:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.173 05:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.173 05:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:41.173 { 00:19:41.173 "cntlid": 13, 00:19:41.173 "qid": 0, 00:19:41.173 "state": "enabled", 00:19:41.173 "thread": "nvmf_tgt_poll_group_000", 00:19:41.173 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:41.173 "listen_address": { 00:19:41.173 "trtype": "TCP", 00:19:41.173 "adrfam": "IPv4", 00:19:41.173 "traddr": "10.0.0.2", 00:19:41.173 "trsvcid": "4420" 00:19:41.173 }, 00:19:41.173 "peer_address": { 00:19:41.173 "trtype": "TCP", 00:19:41.173 "adrfam": "IPv4", 00:19:41.173 "traddr": "10.0.0.1", 00:19:41.173 "trsvcid": "41322" 00:19:41.173 }, 00:19:41.173 "auth": { 00:19:41.173 "state": "completed", 00:19:41.173 "digest": 
"sha256", 00:19:41.173 "dhgroup": "ffdhe2048" 00:19:41.173 } 00:19:41.173 } 00:19:41.173 ]' 00:19:41.173 05:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:41.432 05:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:41.432 05:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:41.432 05:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:41.432 05:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:41.432 05:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:41.432 05:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:41.432 05:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:41.691 05:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzMxMTU3M2Q4ZDdhNWZjNmMyMjI3YjA5MDljMmNhNGU2OTY3ODQzNDEwZGQwNWM2zlgrgA==: --dhchap-ctrl-secret DHHC-1:01:YTM2NjMxNDU0YTBlNmZmZjE5OGRhZDcyZDI5MTk3ZGQ9linJ: 00:19:41.691 05:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MzMxMTU3M2Q4ZDdhNWZjNmMyMjI3YjA5MDljMmNhNGU2OTY3ODQzNDEwZGQwNWM2zlgrgA==: --dhchap-ctrl-secret DHHC-1:01:YTM2NjMxNDU0YTBlNmZmZjE5OGRhZDcyZDI5MTk3ZGQ9linJ: 00:19:42.258 05:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:42.258 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:42.258 05:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:42.258 05:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.258 05:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.258 05:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.258 05:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:42.258 05:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:42.258 05:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:42.258 05:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:19:42.258 05:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:42.258 05:35:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:42.258 05:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:42.258 05:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:42.258 05:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:42.258 05:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:19:42.517 05:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.517 05:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.517 05:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.517 05:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:42.517 05:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:42.517 05:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:42.517 00:19:42.776 05:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:42.776 05:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:42.776 05:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:42.776 05:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:42.776 05:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:42.776 05:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.776 05:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.776 05:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.776 05:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:42.776 { 00:19:42.776 "cntlid": 15, 00:19:42.776 "qid": 0, 00:19:42.776 "state": "enabled", 00:19:42.776 "thread": "nvmf_tgt_poll_group_000", 00:19:42.776 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:42.776 "listen_address": { 00:19:42.776 "trtype": "TCP", 00:19:42.776 "adrfam": "IPv4", 00:19:42.776 "traddr": "10.0.0.2", 00:19:42.776 "trsvcid": "4420" 00:19:42.776 }, 00:19:42.776 "peer_address": { 00:19:42.776 "trtype": "TCP", 00:19:42.776 "adrfam": "IPv4", 00:19:42.776 "traddr": "10.0.0.1", 00:19:42.776 
"trsvcid": "41342" 00:19:42.776 }, 00:19:42.776 "auth": { 00:19:42.776 "state": "completed", 00:19:42.776 "digest": "sha256", 00:19:42.776 "dhgroup": "ffdhe2048" 00:19:42.776 } 00:19:42.776 } 00:19:42.776 ]' 00:19:42.776 05:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:42.776 05:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:42.776 05:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:43.035 05:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:43.035 05:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:43.035 05:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:43.035 05:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:43.035 05:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:43.035 05:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzE0NGI3NDJhMDJjOTcwYzZhYzExNThlOTUwMmEyOGY4NDI1NzFhYzc5ZWViM2IzMzRlNWRiZTJjYTY1ZmE4NBJ0s7I=: 00:19:43.035 05:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YzE0NGI3NDJhMDJjOTcwYzZhYzExNThlOTUwMmEyOGY4NDI1NzFhYzc5ZWViM2IzMzRlNWRiZTJjYTY1ZmE4NBJ0s7I=: 00:19:43.602 05:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:43.602 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:43.602 05:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:43.602 05:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.602 05:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.861 05:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.861 05:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:43.861 05:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:43.861 05:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:43.861 05:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:43.861 05:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:19:43.861 05:35:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:43.861 05:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:43.861 05:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:43.861 05:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:43.861 05:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:43.861 05:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:43.861 05:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.861 05:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.861 05:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.861 05:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:43.861 05:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:43.861 05:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:44.120 00:19:44.120 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:44.120 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:44.120 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:44.379 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:44.379 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:44.379 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.379 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.379 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.379 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:44.379 { 00:19:44.379 "cntlid": 17, 00:19:44.379 "qid": 0, 00:19:44.379 "state": "enabled", 00:19:44.379 "thread": "nvmf_tgt_poll_group_000", 00:19:44.379 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:44.379 "listen_address": { 00:19:44.379 "trtype": "TCP", 00:19:44.379 "adrfam": "IPv4", 
00:19:44.379 "traddr": "10.0.0.2", 00:19:44.379 "trsvcid": "4420" 00:19:44.379 }, 00:19:44.379 "peer_address": { 00:19:44.379 "trtype": "TCP", 00:19:44.379 "adrfam": "IPv4", 00:19:44.379 "traddr": "10.0.0.1", 00:19:44.379 "trsvcid": "41374" 00:19:44.379 }, 00:19:44.379 "auth": { 00:19:44.379 "state": "completed", 00:19:44.379 "digest": "sha256", 00:19:44.379 "dhgroup": "ffdhe3072" 00:19:44.379 } 00:19:44.379 } 00:19:44.379 ]' 00:19:44.379 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:44.379 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:44.379 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:44.638 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:44.638 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:44.638 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:44.638 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:44.638 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:44.638 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:N2MyMmE5Mzc2MGYxMTFkMDQ3ZGJhZDNmNGRiZjE3MzQ4N2U1ZDk0NTUzMGM4NGIwp1HX7Q==: --dhchap-ctrl-secret DHHC-1:03:OTk1MjczZTZmOTMwNTk3OGQ5OTE1OWZkNzlmYWYwY2RhNDZjMjExNmE5YThiNTlhMmQ1NjFmNzFlMWExY2M5ZNuJBNw=: 00:19:44.638 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:N2MyMmE5Mzc2MGYxMTFkMDQ3ZGJhZDNmNGRiZjE3MzQ4N2U1ZDk0NTUzMGM4NGIwp1HX7Q==: --dhchap-ctrl-secret DHHC-1:03:OTk1MjczZTZmOTMwNTk3OGQ5OTE1OWZkNzlmYWYwY2RhNDZjMjExNmE5YThiNTlhMmQ1NjFmNzFlMWExY2M5ZNuJBNw=: 00:19:45.206 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:45.206 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:45.206 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:45.206 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.206 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.206 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.206 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:45.206 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:45.206 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:45.465 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:19:45.465 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:45.465 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:45.465 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:45.465 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:45.465 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:45.465 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:45.465 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.465 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.465 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.465 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:45.465 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:45.465 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:45.724 00:19:45.724 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:45.724 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:45.724 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:45.982 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:45.982 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:45.982 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.982 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.982 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.982 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:45.982 { 
00:19:45.982 "cntlid": 19, 00:19:45.982 "qid": 0, 00:19:45.982 "state": "enabled", 00:19:45.982 "thread": "nvmf_tgt_poll_group_000", 00:19:45.982 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:45.982 "listen_address": { 00:19:45.982 "trtype": "TCP", 00:19:45.982 "adrfam": "IPv4", 00:19:45.982 "traddr": "10.0.0.2", 00:19:45.982 "trsvcid": "4420" 00:19:45.982 }, 00:19:45.982 "peer_address": { 00:19:45.982 "trtype": "TCP", 00:19:45.982 "adrfam": "IPv4", 00:19:45.982 "traddr": "10.0.0.1", 00:19:45.982 "trsvcid": "41402" 00:19:45.982 }, 00:19:45.982 "auth": { 00:19:45.982 "state": "completed", 00:19:45.982 "digest": "sha256", 00:19:45.982 "dhgroup": "ffdhe3072" 00:19:45.982 } 00:19:45.982 } 00:19:45.983 ]' 00:19:45.983 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:45.983 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:45.983 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:45.983 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:45.983 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:46.241 05:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:46.242 05:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:46.242 05:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:46.242 05:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTc0YWM3MjQxYWQ5ODgwZGJkZWY0N2I2OGZkMzE0OGKT21qS: --dhchap-ctrl-secret DHHC-1:02:Nzk0Zjc2MDc3NDIyNTA3OWUxNzI5ZDNlZTVlMDgzNTdhMmYzMjExYzg3NmE2YmQw3CciCg==: 00:19:46.242 05:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OTc0YWM3MjQxYWQ5ODgwZGJkZWY0N2I2OGZkMzE0OGKT21qS: --dhchap-ctrl-secret DHHC-1:02:Nzk0Zjc2MDc3NDIyNTA3OWUxNzI5ZDNlZTVlMDgzNTdhMmYzMjExYzg3NmE2YmQw3CciCg==: 00:19:46.809 05:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:46.809 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:46.809 05:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:46.809 05:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.809 05:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.809 05:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.809 05:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:46.809 05:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:46.809 05:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:47.068 05:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:19:47.068 05:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:47.068 05:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:47.068 05:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:47.068 05:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:47.068 05:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:47.068 05:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:47.068 05:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.068 05:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.068 05:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.068 05:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:47.069 05:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:47.069 05:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:47.328 00:19:47.328 05:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:47.328 05:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:47.328 05:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:47.586 05:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:47.586 05:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:47.586 05:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.586 05:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.586 05:35:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.586 05:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:47.586 { 00:19:47.586 "cntlid": 21, 00:19:47.586 "qid": 0, 00:19:47.586 "state": "enabled", 00:19:47.586 "thread": "nvmf_tgt_poll_group_000", 00:19:47.586 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:47.586 "listen_address": { 00:19:47.586 "trtype": "TCP", 00:19:47.586 "adrfam": "IPv4", 00:19:47.586 "traddr": "10.0.0.2", 00:19:47.586 "trsvcid": "4420" 00:19:47.586 }, 00:19:47.586 "peer_address": { 00:19:47.586 "trtype": "TCP", 00:19:47.586 "adrfam": "IPv4", 00:19:47.586 "traddr": "10.0.0.1", 00:19:47.586 "trsvcid": "36088" 00:19:47.586 }, 00:19:47.586 "auth": { 00:19:47.586 "state": "completed", 00:19:47.586 "digest": "sha256", 00:19:47.586 "dhgroup": "ffdhe3072" 00:19:47.586 } 00:19:47.586 } 00:19:47.586 ]' 00:19:47.586 05:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:47.586 05:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:47.586 05:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:47.586 05:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:47.586 05:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:47.845 05:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:47.845 05:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:47.845 05:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:47.845 05:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzMxMTU3M2Q4ZDdhNWZjNmMyMjI3YjA5MDljMmNhNGU2OTY3ODQzNDEwZGQwNWM2zlgrgA==: --dhchap-ctrl-secret DHHC-1:01:YTM2NjMxNDU0YTBlNmZmZjE5OGRhZDcyZDI5MTk3ZGQ9linJ: 00:19:47.845 05:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MzMxMTU3M2Q4ZDdhNWZjNmMyMjI3YjA5MDljMmNhNGU2OTY3ODQzNDEwZGQwNWM2zlgrgA==: --dhchap-ctrl-secret DHHC-1:01:YTM2NjMxNDU0YTBlNmZmZjE5OGRhZDcyZDI5MTk3ZGQ9linJ: 00:19:48.412 05:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:48.412 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:48.412 05:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:48.412 05:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.412 05:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.412 05:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:19:48.412 05:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:48.412 05:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:48.412 05:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:48.671 05:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:19:48.671 05:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:48.671 05:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:48.671 05:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:48.671 05:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:48.671 05:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:48.671 05:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:19:48.671 05:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.671 05:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.671 05:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.671 05:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:48.671 05:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:48.671 05:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:48.930 00:19:48.930 05:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:48.930 05:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:48.930 05:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:49.189 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:49.189 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:49.189 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.189 05:35:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.189 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.189 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:49.189 { 00:19:49.189 "cntlid": 23, 00:19:49.189 "qid": 0, 00:19:49.189 "state": "enabled", 00:19:49.189 "thread": "nvmf_tgt_poll_group_000", 00:19:49.189 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:49.189 "listen_address": { 00:19:49.189 "trtype": "TCP", 00:19:49.189 "adrfam": "IPv4", 00:19:49.189 "traddr": "10.0.0.2", 00:19:49.189 "trsvcid": "4420" 00:19:49.189 }, 00:19:49.189 "peer_address": { 00:19:49.189 "trtype": "TCP", 00:19:49.189 "adrfam": "IPv4", 00:19:49.189 "traddr": "10.0.0.1", 00:19:49.189 "trsvcid": "36114" 00:19:49.189 }, 00:19:49.189 "auth": { 00:19:49.189 "state": "completed", 00:19:49.189 "digest": "sha256", 00:19:49.189 "dhgroup": "ffdhe3072" 00:19:49.189 } 00:19:49.189 } 00:19:49.189 ]' 00:19:49.189 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:49.189 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:49.189 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:49.189 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:49.189 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:49.189 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:49.189 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:49.189 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:49.448 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzE0NGI3NDJhMDJjOTcwYzZhYzExNThlOTUwMmEyOGY4NDI1NzFhYzc5ZWViM2IzMzRlNWRiZTJjYTY1ZmE4NBJ0s7I=: 00:19:49.448 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YzE0NGI3NDJhMDJjOTcwYzZhYzExNThlOTUwMmEyOGY4NDI1NzFhYzc5ZWViM2IzMzRlNWRiZTJjYTY1ZmE4NBJ0s7I=: 00:19:50.016 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:50.016 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:50.016 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:50.016 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.016 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.016 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:19:50.016 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:50.016 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:50.016 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:50.016 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:50.275 05:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:19:50.275 05:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:50.275 05:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:50.275 05:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:50.275 05:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:50.275 05:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:50.275 05:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:50.275 05:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.275 05:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.275 05:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.275 05:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:50.275 05:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:50.275 05:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:50.534 00:19:50.534 05:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:50.534 05:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:50.534 05:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:50.793 05:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:50.793 05:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:50.793 05:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.793 05:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.793 05:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.793 05:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:50.793 { 00:19:50.793 "cntlid": 25, 00:19:50.793 "qid": 0, 00:19:50.793 "state": "enabled", 00:19:50.793 "thread": "nvmf_tgt_poll_group_000", 00:19:50.793 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:50.793 "listen_address": { 00:19:50.793 "trtype": "TCP", 00:19:50.793 "adrfam": "IPv4", 00:19:50.793 "traddr": "10.0.0.2", 00:19:50.793 "trsvcid": "4420" 00:19:50.793 }, 00:19:50.793 "peer_address": { 00:19:50.793 "trtype": "TCP", 00:19:50.793 "adrfam": "IPv4", 00:19:50.793 "traddr": "10.0.0.1", 00:19:50.793 "trsvcid": "36144" 00:19:50.793 }, 00:19:50.793 "auth": { 00:19:50.793 "state": "completed", 00:19:50.793 "digest": "sha256", 00:19:50.793 "dhgroup": "ffdhe4096" 00:19:50.793 } 00:19:50.793 } 00:19:50.793 ]' 00:19:50.793 05:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:50.793 05:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:50.793 05:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:50.793 05:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:50.793 05:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:50.793 05:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:50.793 05:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:50.793 05:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:51.052 05:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:N2MyMmE5Mzc2MGYxMTFkMDQ3ZGJhZDNmNGRiZjE3MzQ4N2U1ZDk0NTUzMGM4NGIwp1HX7Q==: --dhchap-ctrl-secret DHHC-1:03:OTk1MjczZTZmOTMwNTk3OGQ5OTE1OWZkNzlmYWYwY2RhNDZjMjExNmE5YThiNTlhMmQ1NjFmNzFlMWExY2M5ZNuJBNw=: 00:19:51.052 05:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:N2MyMmE5Mzc2MGYxMTFkMDQ3ZGJhZDNmNGRiZjE3MzQ4N2U1ZDk0NTUzMGM4NGIwp1HX7Q==: --dhchap-ctrl-secret DHHC-1:03:OTk1MjczZTZmOTMwNTk3OGQ5OTE1OWZkNzlmYWYwY2RhNDZjMjExNmE5YThiNTlhMmQ1NjFmNzFlMWExY2M5ZNuJBNw=: 00:19:51.619 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:51.619 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:51.619 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:51.619 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.619 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.619 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.619 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:51.619 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:51.619 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:51.878 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:19:51.878 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:51.878 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:51.878 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:51.878 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:51.878 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:51.878 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:51.878 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.878 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.878 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.878 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:51.878 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:51.878 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:52.137 00:19:52.137 05:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:52.137 05:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:52.137 05:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:52.396 05:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:52.396 05:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:52.396 05:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.396 05:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.396 05:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.396 05:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:52.396 { 00:19:52.396 "cntlid": 27, 00:19:52.396 "qid": 0, 00:19:52.396 "state": "enabled", 00:19:52.396 "thread": "nvmf_tgt_poll_group_000", 00:19:52.396 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:52.396 "listen_address": { 00:19:52.396 "trtype": "TCP", 00:19:52.396 "adrfam": "IPv4", 00:19:52.396 "traddr": "10.0.0.2", 00:19:52.396 "trsvcid": "4420" 00:19:52.396 }, 00:19:52.396 "peer_address": { 00:19:52.396 "trtype": "TCP", 00:19:52.396 "adrfam": "IPv4", 00:19:52.396 "traddr": "10.0.0.1", 00:19:52.396 "trsvcid": "36166" 00:19:52.396 }, 00:19:52.396 "auth": { 00:19:52.396 "state": "completed", 00:19:52.396 "digest": "sha256", 00:19:52.396 "dhgroup": "ffdhe4096" 00:19:52.396 } 00:19:52.396 } 00:19:52.396 ]' 00:19:52.396 05:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:52.396 05:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:52.396 05:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:52.396 05:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:52.396 05:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:52.396 05:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:52.396 05:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:52.396 05:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:52.655 05:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTc0YWM3MjQxYWQ5ODgwZGJkZWY0N2I2OGZkMzE0OGKT21qS: --dhchap-ctrl-secret DHHC-1:02:Nzk0Zjc2MDc3NDIyNTA3OWUxNzI5ZDNlZTVlMDgzNTdhMmYzMjExYzg3NmE2YmQw3CciCg==: 00:19:52.655 05:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OTc0YWM3MjQxYWQ5ODgwZGJkZWY0N2I2OGZkMzE0OGKT21qS: --dhchap-ctrl-secret DHHC-1:02:Nzk0Zjc2MDc3NDIyNTA3OWUxNzI5ZDNlZTVlMDgzNTdhMmYzMjExYzg3NmE2YmQw3CciCg==: 00:19:53.222 05:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:19:53.222 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:53.222 05:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:53.222 05:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.222 05:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.222 05:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.222 05:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:53.222 05:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:53.222 05:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:53.480 05:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:19:53.481 05:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:53.481 05:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:53.481 05:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:53.481 05:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:53.481 05:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:53.481 05:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:53.481 05:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.481 05:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.481 05:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.481 05:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:53.481 05:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:53.481 05:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:53.740 00:19:53.740 05:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
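For readability: the stretch of trace above and below repeats one authentication cycle per (digest, dhgroup, key) combination. A condensed sketch of that cycle, reconstructed from the logged commands only (the loop structure and variable names are illustrative, not the literal target/auth.sh source; rpc.py paths shortened):

  hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
  for dhgroup in ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192; do   # groups exercised in this stretch
    for keyid in "${!keys[@]}"; do                             # keys 0..3
      # host side (-s /var/tmp/host.sock): restrict negotiation to one digest/dhgroup pair
      scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
          --dhchap-digests sha256 --dhchap-dhgroups "$dhgroup"
      # target side (rpc_cmd helper): admit the host NQN with the matching DH-HMAC-CHAP
      # key, plus a controller key when ckeyN is defined (bidirectional auth)
      rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
          --dhchap-key "key$keyid" ${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"}
      # attach a controller over the authenticated qpair ...
      scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
          -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$hostnqn" \
          -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
          --dhchap-key "key$keyid" ${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"}
      # ... then assert digest, dhgroup and auth state on the target's qpair JSON
      rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
          | jq -r '.[0].auth.state'        # expected: completed
      scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
    done
  done
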
00:19:53.740 05:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:53.740 05:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:53.998 05:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:53.998 05:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:53.998 05:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.998 05:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.998 05:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.998 05:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:53.998 { 00:19:53.998 "cntlid": 29, 00:19:53.998 "qid": 0, 00:19:53.998 "state": "enabled", 00:19:53.998 "thread": "nvmf_tgt_poll_group_000", 00:19:53.998 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:53.998 "listen_address": { 00:19:53.998 "trtype": "TCP", 00:19:53.998 "adrfam": "IPv4", 00:19:53.998 "traddr": "10.0.0.2", 00:19:53.998 "trsvcid": "4420" 00:19:53.998 }, 00:19:53.998 "peer_address": { 00:19:53.998 "trtype": "TCP", 00:19:53.998 "adrfam": "IPv4", 00:19:53.998 "traddr": "10.0.0.1", 00:19:53.998 "trsvcid": "36188" 00:19:53.998 }, 00:19:53.998 "auth": { 00:19:53.998 "state": "completed", 00:19:53.998 "digest": "sha256", 00:19:53.998 "dhgroup": "ffdhe4096" 00:19:53.998 } 00:19:53.998 } 00:19:53.998 ]' 00:19:53.998 05:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:53.998 05:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:53.998 05:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:53.998 05:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:53.998 05:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:53.998 05:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:53.998 05:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:53.998 05:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:54.256 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzMxMTU3M2Q4ZDdhNWZjNmMyMjI3YjA5MDljMmNhNGU2OTY3ODQzNDEwZGQwNWM2zlgrgA==: --dhchap-ctrl-secret DHHC-1:01:YTM2NjMxNDU0YTBlNmZmZjE5OGRhZDcyZDI5MTk3ZGQ9linJ: 00:19:54.256 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MzMxMTU3M2Q4ZDdhNWZjNmMyMjI3YjA5MDljMmNhNGU2OTY3ODQzNDEwZGQwNWM2zlgrgA==: 
--dhchap-ctrl-secret DHHC-1:01:YTM2NjMxNDU0YTBlNmZmZjE5OGRhZDcyZDI5MTk3ZGQ9linJ: 00:19:54.822 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:54.822 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:54.822 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:54.822 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.822 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.822 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.822 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:54.822 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:54.822 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:55.081 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:19:55.081 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:55.081 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:55.081 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:55.081 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:55.081 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:55.081 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:19:55.081 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.081 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.081 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.081 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:55.081 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:55.081 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:55.338 00:19:55.338 05:35:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:55.338 05:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:55.338 05:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:55.596 05:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:55.596 05:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:55.596 05:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.596 05:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.596 05:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.596 05:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:55.596 { 00:19:55.596 "cntlid": 31, 00:19:55.596 "qid": 0, 00:19:55.596 "state": "enabled", 00:19:55.596 "thread": "nvmf_tgt_poll_group_000", 00:19:55.596 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:55.596 "listen_address": { 00:19:55.596 "trtype": "TCP", 00:19:55.596 "adrfam": "IPv4", 00:19:55.596 "traddr": "10.0.0.2", 00:19:55.596 "trsvcid": "4420" 00:19:55.596 }, 00:19:55.596 "peer_address": { 00:19:55.596 "trtype": "TCP", 00:19:55.596 "adrfam": "IPv4", 00:19:55.596 "traddr": "10.0.0.1", 00:19:55.596 "trsvcid": "36222" 00:19:55.596 }, 00:19:55.596 "auth": { 00:19:55.596 "state": "completed", 00:19:55.596 "digest": "sha256", 00:19:55.596 "dhgroup": "ffdhe4096" 00:19:55.596 } 00:19:55.596 } 00:19:55.596 ]' 00:19:55.596 05:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:55.596 05:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:55.597 05:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:55.597 05:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:55.597 05:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:55.597 05:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:55.597 05:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:55.597 05:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:55.855 05:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzE0NGI3NDJhMDJjOTcwYzZhYzExNThlOTUwMmEyOGY4NDI1NzFhYzc5ZWViM2IzMzRlNWRiZTJjYTY1ZmE4NBJ0s7I=: 00:19:55.855 05:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret 
DHHC-1:03:YzE0NGI3NDJhMDJjOTcwYzZhYzExNThlOTUwMmEyOGY4NDI1NzFhYzc5ZWViM2IzMzRlNWRiZTJjYTY1ZmE4NBJ0s7I=: 00:19:56.422 05:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:56.422 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:56.422 05:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:56.422 05:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.422 05:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.422 05:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.422 05:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:56.422 05:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:56.422 05:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:56.422 05:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:56.681 05:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:19:56.681 05:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:56.681 05:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:56.681 05:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:56.681 05:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:56.681 05:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:56.681 05:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:56.681 05:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.681 05:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.681 05:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.681 05:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:56.681 05:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:56.681 05:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:56.940 00:19:56.940 05:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:56.940 05:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:56.940 05:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:57.199 05:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:57.199 05:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:57.199 05:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.199 05:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.199 05:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.199 05:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:57.199 { 00:19:57.199 "cntlid": 33, 00:19:57.199 "qid": 0, 00:19:57.199 "state": "enabled", 00:19:57.199 "thread": "nvmf_tgt_poll_group_000", 00:19:57.199 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:57.199 "listen_address": { 00:19:57.199 "trtype": "TCP", 00:19:57.199 "adrfam": "IPv4", 00:19:57.199 "traddr": "10.0.0.2", 00:19:57.199 "trsvcid": "4420" 00:19:57.199 }, 00:19:57.199 "peer_address": { 00:19:57.199 "trtype": "TCP", 00:19:57.199 "adrfam": "IPv4", 00:19:57.199 "traddr": "10.0.0.1", 00:19:57.199 "trsvcid": "59404" 00:19:57.199 }, 00:19:57.199 "auth": { 00:19:57.199 "state": "completed", 00:19:57.199 "digest": "sha256", 00:19:57.199 "dhgroup": "ffdhe6144" 00:19:57.199 } 00:19:57.199 } 00:19:57.199 ]' 00:19:57.199 05:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:57.199 05:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:57.199 05:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:57.199 05:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:57.199 05:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:57.199 05:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:57.199 05:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:57.199 05:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:57.458 05:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:N2MyMmE5Mzc2MGYxMTFkMDQ3ZGJhZDNmNGRiZjE3MzQ4N2U1ZDk0NTUzMGM4NGIwp1HX7Q==: --dhchap-ctrl-secret 
DHHC-1:03:OTk1MjczZTZmOTMwNTk3OGQ5OTE1OWZkNzlmYWYwY2RhNDZjMjExNmE5YThiNTlhMmQ1NjFmNzFlMWExY2M5ZNuJBNw=: 00:19:57.458 05:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:N2MyMmE5Mzc2MGYxMTFkMDQ3ZGJhZDNmNGRiZjE3MzQ4N2U1ZDk0NTUzMGM4NGIwp1HX7Q==: --dhchap-ctrl-secret DHHC-1:03:OTk1MjczZTZmOTMwNTk3OGQ5OTE1OWZkNzlmYWYwY2RhNDZjMjExNmE5YThiNTlhMmQ1NjFmNzFlMWExY2M5ZNuJBNw=: 00:19:58.025 05:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:58.025 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:58.025 05:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:58.025 05:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.025 05:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.025 05:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.025 05:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:58.025 05:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:58.025 05:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:58.284 05:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:19:58.284 05:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:58.284 05:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:58.284 05:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:58.284 05:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:58.284 05:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:58.284 05:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:58.284 05:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.284 05:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.284 05:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.284 05:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:58.284 05:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:58.284 05:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:58.543 00:19:58.543 05:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:58.543 05:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:58.543 05:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:58.802 05:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:58.802 05:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:58.802 05:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.802 05:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.802 05:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.802 05:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:58.802 { 00:19:58.802 "cntlid": 35, 00:19:58.802 "qid": 0, 00:19:58.802 "state": "enabled", 00:19:58.802 "thread": "nvmf_tgt_poll_group_000", 00:19:58.802 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:58.802 "listen_address": { 00:19:58.802 "trtype": "TCP", 00:19:58.802 "adrfam": "IPv4", 00:19:58.802 "traddr": "10.0.0.2", 00:19:58.802 "trsvcid": "4420" 00:19:58.802 }, 00:19:58.802 "peer_address": { 00:19:58.802 "trtype": "TCP", 00:19:58.802 "adrfam": "IPv4", 00:19:58.802 "traddr": "10.0.0.1", 00:19:58.802 "trsvcid": "59430" 00:19:58.802 }, 00:19:58.802 "auth": { 00:19:58.802 "state": "completed", 00:19:58.802 "digest": "sha256", 00:19:58.802 "dhgroup": "ffdhe6144" 00:19:58.802 } 00:19:58.802 } 00:19:58.802 ]' 00:19:58.802 05:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:58.802 05:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:58.802 05:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:59.061 05:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:59.061 05:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:59.061 05:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:59.061 05:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:59.061 05:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:59.061 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTc0YWM3MjQxYWQ5ODgwZGJkZWY0N2I2OGZkMzE0OGKT21qS: --dhchap-ctrl-secret DHHC-1:02:Nzk0Zjc2MDc3NDIyNTA3OWUxNzI5ZDNlZTVlMDgzNTdhMmYzMjExYzg3NmE2YmQw3CciCg==: 00:19:59.061 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OTc0YWM3MjQxYWQ5ODgwZGJkZWY0N2I2OGZkMzE0OGKT21qS: --dhchap-ctrl-secret DHHC-1:02:Nzk0Zjc2MDc3NDIyNTA3OWUxNzI5ZDNlZTVlMDgzNTdhMmYzMjExYzg3NmE2YmQw3CciCg==: 00:19:59.629 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:59.629 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:59.629 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:59.629 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.629 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.629 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.629 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:59.629 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:59.629 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:59.887 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:19:59.887 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:59.887 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:59.887 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:59.887 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:59.887 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:59.887 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:59.887 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.887 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.887 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.887 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:59.887 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:59.887 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:00.146 00:20:00.404 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:00.404 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:00.404 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:00.404 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:00.404 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:00.404 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.404 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.404 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.404 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:00.404 { 00:20:00.404 "cntlid": 37, 00:20:00.404 "qid": 0, 00:20:00.404 "state": "enabled", 00:20:00.404 "thread": "nvmf_tgt_poll_group_000", 00:20:00.404 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:00.404 "listen_address": { 00:20:00.404 "trtype": "TCP", 00:20:00.404 "adrfam": "IPv4", 00:20:00.404 "traddr": "10.0.0.2", 00:20:00.404 "trsvcid": "4420" 00:20:00.404 }, 00:20:00.404 "peer_address": { 00:20:00.404 "trtype": "TCP", 00:20:00.404 "adrfam": "IPv4", 00:20:00.404 "traddr": "10.0.0.1", 00:20:00.404 "trsvcid": "59456" 00:20:00.404 }, 00:20:00.404 "auth": { 00:20:00.404 "state": "completed", 00:20:00.404 "digest": "sha256", 00:20:00.404 "dhgroup": "ffdhe6144" 00:20:00.404 } 00:20:00.404 } 00:20:00.404 ]' 00:20:00.404 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:00.663 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:00.663 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:00.663 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:00.663 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:00.663 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:00.663 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:20:00.663 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:00.922 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzMxMTU3M2Q4ZDdhNWZjNmMyMjI3YjA5MDljMmNhNGU2OTY3ODQzNDEwZGQwNWM2zlgrgA==: --dhchap-ctrl-secret DHHC-1:01:YTM2NjMxNDU0YTBlNmZmZjE5OGRhZDcyZDI5MTk3ZGQ9linJ: 00:20:00.922 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MzMxMTU3M2Q4ZDdhNWZjNmMyMjI3YjA5MDljMmNhNGU2OTY3ODQzNDEwZGQwNWM2zlgrgA==: --dhchap-ctrl-secret DHHC-1:01:YTM2NjMxNDU0YTBlNmZmZjE5OGRhZDcyZDI5MTk3ZGQ9linJ: 00:20:01.488 05:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:01.488 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:01.488 05:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:01.488 05:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.488 05:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.488 05:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.488 05:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:01.488 05:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:01.488 05:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:01.747 05:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:20:01.747 05:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:01.747 05:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:01.747 05:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:01.747 05:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:01.747 05:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:01.747 05:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:01.747 05:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.747 05:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.747 05:36:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.747 05:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:01.747 05:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:01.747 05:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:02.006 00:20:02.006 05:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:02.006 05:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:02.006 05:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:02.264 05:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:02.264 05:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:02.264 05:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.264 05:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.264 05:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.264 05:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:02.264 { 00:20:02.264 "cntlid": 39, 00:20:02.264 "qid": 0, 00:20:02.264 "state": "enabled", 00:20:02.264 "thread": "nvmf_tgt_poll_group_000", 00:20:02.264 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:02.264 "listen_address": { 00:20:02.264 "trtype": "TCP", 00:20:02.264 "adrfam": "IPv4", 00:20:02.264 "traddr": "10.0.0.2", 00:20:02.264 "trsvcid": "4420" 00:20:02.264 }, 00:20:02.264 "peer_address": { 00:20:02.264 "trtype": "TCP", 00:20:02.264 "adrfam": "IPv4", 00:20:02.264 "traddr": "10.0.0.1", 00:20:02.264 "trsvcid": "59470" 00:20:02.264 }, 00:20:02.264 "auth": { 00:20:02.264 "state": "completed", 00:20:02.264 "digest": "sha256", 00:20:02.264 "dhgroup": "ffdhe6144" 00:20:02.264 } 00:20:02.264 } 00:20:02.264 ]' 00:20:02.264 05:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:02.264 05:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:02.264 05:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:02.264 05:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:02.264 05:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:02.264 05:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:20:02.264 05:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:02.264 05:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:02.523 05:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzE0NGI3NDJhMDJjOTcwYzZhYzExNThlOTUwMmEyOGY4NDI1NzFhYzc5ZWViM2IzMzRlNWRiZTJjYTY1ZmE4NBJ0s7I=: 00:20:02.523 05:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YzE0NGI3NDJhMDJjOTcwYzZhYzExNThlOTUwMmEyOGY4NDI1NzFhYzc5ZWViM2IzMzRlNWRiZTJjYTY1ZmE4NBJ0s7I=: 00:20:03.090 05:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:03.090 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:03.090 05:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:03.090 05:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.090 05:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.090 05:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.090 05:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:03.090 05:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:03.090 05:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:03.090 05:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:03.349 05:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:20:03.349 05:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:03.349 05:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:03.349 05:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:03.349 05:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:03.349 05:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:03.349 05:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:03.349 05:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
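Each cycle then re-checks the same key pair end-to-end with the kernel initiator before the host entry is removed; condensed from the logged commands (the DHHC-1 secrets are elided here but appear in full in the trace, and --dhchap-ctrl-secret is passed only for the bidirectional key pairs; the hostid matches the uuid part of the host NQN):

  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q "$hostnqn" --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 \
      --dhchap-secret "DHHC-1:xx:..." --dhchap-ctrl-secret "DHHC-1:xx:..."
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0   # log shows: disconnected 1 controller(s)
  rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"
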
00:20:03.349 05:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.349 05:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.349 05:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:03.349 05:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:03.349 05:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:03.917 00:20:03.917 05:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:03.917 05:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:03.917 05:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:03.917 05:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:03.917 05:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:03.917 05:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.917 05:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.917 05:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.917 05:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:03.917 { 00:20:03.917 "cntlid": 41, 00:20:03.917 "qid": 0, 00:20:03.917 "state": "enabled", 00:20:03.917 "thread": "nvmf_tgt_poll_group_000", 00:20:03.917 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:03.917 "listen_address": { 00:20:03.917 "trtype": "TCP", 00:20:03.917 "adrfam": "IPv4", 00:20:03.917 "traddr": "10.0.0.2", 00:20:03.917 "trsvcid": "4420" 00:20:03.917 }, 00:20:03.917 "peer_address": { 00:20:03.917 "trtype": "TCP", 00:20:03.917 "adrfam": "IPv4", 00:20:03.917 "traddr": "10.0.0.1", 00:20:03.917 "trsvcid": "59496" 00:20:03.917 }, 00:20:03.917 "auth": { 00:20:03.917 "state": "completed", 00:20:03.917 "digest": "sha256", 00:20:03.917 "dhgroup": "ffdhe8192" 00:20:03.917 } 00:20:03.917 } 00:20:03.917 ]' 00:20:03.917 05:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:04.176 05:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:04.176 05:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:04.176 05:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:04.176 05:36:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:04.176 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:04.176 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:04.176 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:04.434 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:N2MyMmE5Mzc2MGYxMTFkMDQ3ZGJhZDNmNGRiZjE3MzQ4N2U1ZDk0NTUzMGM4NGIwp1HX7Q==: --dhchap-ctrl-secret DHHC-1:03:OTk1MjczZTZmOTMwNTk3OGQ5OTE1OWZkNzlmYWYwY2RhNDZjMjExNmE5YThiNTlhMmQ1NjFmNzFlMWExY2M5ZNuJBNw=: 00:20:04.434 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:N2MyMmE5Mzc2MGYxMTFkMDQ3ZGJhZDNmNGRiZjE3MzQ4N2U1ZDk0NTUzMGM4NGIwp1HX7Q==: --dhchap-ctrl-secret DHHC-1:03:OTk1MjczZTZmOTMwNTk3OGQ5OTE1OWZkNzlmYWYwY2RhNDZjMjExNmE5YThiNTlhMmQ1NjFmNzFlMWExY2M5ZNuJBNw=: 00:20:05.001 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:05.001 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:05.001 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:05.001 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.001 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.001 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.001 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:05.001 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:05.001 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:05.001 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:20:05.001 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:05.001 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:05.001 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:05.001 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:05.001 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:05.001 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:05.002 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.002 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.002 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.002 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:05.002 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:05.002 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:05.569 00:20:05.569 05:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:05.569 05:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:05.569 05:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:05.828 05:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:05.828 05:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:05.828 05:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.828 05:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.828 05:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.828 05:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:05.828 { 00:20:05.828 "cntlid": 43, 00:20:05.828 "qid": 0, 00:20:05.828 "state": "enabled", 00:20:05.828 "thread": "nvmf_tgt_poll_group_000", 00:20:05.828 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:05.828 "listen_address": { 00:20:05.828 "trtype": "TCP", 00:20:05.828 "adrfam": "IPv4", 00:20:05.828 "traddr": "10.0.0.2", 00:20:05.828 "trsvcid": "4420" 00:20:05.828 }, 00:20:05.828 "peer_address": { 00:20:05.828 "trtype": "TCP", 00:20:05.828 "adrfam": "IPv4", 00:20:05.828 "traddr": "10.0.0.1", 00:20:05.828 "trsvcid": "59532" 00:20:05.828 }, 00:20:05.828 "auth": { 00:20:05.828 "state": "completed", 00:20:05.828 "digest": "sha256", 00:20:05.828 "dhgroup": "ffdhe8192" 00:20:05.828 } 00:20:05.828 } 00:20:05.828 ]' 00:20:05.828 05:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:05.828 05:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:20:05.828 05:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:05.828 05:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:05.828 05:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:05.828 05:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:05.828 05:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:05.828 05:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:06.087 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTc0YWM3MjQxYWQ5ODgwZGJkZWY0N2I2OGZkMzE0OGKT21qS: --dhchap-ctrl-secret DHHC-1:02:Nzk0Zjc2MDc3NDIyNTA3OWUxNzI5ZDNlZTVlMDgzNTdhMmYzMjExYzg3NmE2YmQw3CciCg==: 00:20:06.087 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OTc0YWM3MjQxYWQ5ODgwZGJkZWY0N2I2OGZkMzE0OGKT21qS: --dhchap-ctrl-secret DHHC-1:02:Nzk0Zjc2MDc3NDIyNTA3OWUxNzI5ZDNlZTVlMDgzNTdhMmYzMjExYzg3NmE2YmQw3CciCg==: 00:20:06.654 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:06.654 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:06.654 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:06.654 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.654 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.654 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.654 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:06.654 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:06.654 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:06.913 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:20:06.913 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:06.913 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:06.913 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:06.913 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:06.913 05:36:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:06.913 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:06.913 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.913 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.913 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.913 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:06.913 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:06.913 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:07.480 00:20:07.481 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:07.481 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:07.481 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:07.481 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:07.481 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:07.481 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.481 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.481 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.481 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:07.481 { 00:20:07.481 "cntlid": 45, 00:20:07.481 "qid": 0, 00:20:07.481 "state": "enabled", 00:20:07.481 "thread": "nvmf_tgt_poll_group_000", 00:20:07.481 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:07.481 "listen_address": { 00:20:07.481 "trtype": "TCP", 00:20:07.481 "adrfam": "IPv4", 00:20:07.481 "traddr": "10.0.0.2", 00:20:07.481 "trsvcid": "4420" 00:20:07.481 }, 00:20:07.481 "peer_address": { 00:20:07.481 "trtype": "TCP", 00:20:07.481 "adrfam": "IPv4", 00:20:07.481 "traddr": "10.0.0.1", 00:20:07.481 "trsvcid": "36938" 00:20:07.481 }, 00:20:07.481 "auth": { 00:20:07.481 "state": "completed", 00:20:07.481 "digest": "sha256", 00:20:07.481 "dhgroup": "ffdhe8192" 00:20:07.481 } 00:20:07.481 } 00:20:07.481 ]' 00:20:07.481 
05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:07.739 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:07.739 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:07.739 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:07.739 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:07.739 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:07.739 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:07.739 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:07.998 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzMxMTU3M2Q4ZDdhNWZjNmMyMjI3YjA5MDljMmNhNGU2OTY3ODQzNDEwZGQwNWM2zlgrgA==: --dhchap-ctrl-secret DHHC-1:01:YTM2NjMxNDU0YTBlNmZmZjE5OGRhZDcyZDI5MTk3ZGQ9linJ: 00:20:07.998 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MzMxMTU3M2Q4ZDdhNWZjNmMyMjI3YjA5MDljMmNhNGU2OTY3ODQzNDEwZGQwNWM2zlgrgA==: --dhchap-ctrl-secret DHHC-1:01:YTM2NjMxNDU0YTBlNmZmZjE5OGRhZDcyZDI5MTk3ZGQ9linJ: 00:20:08.564 05:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:08.564 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:08.564 05:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:08.564 05:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.564 05:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.564 05:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.564 05:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:08.564 05:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:08.564 05:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:08.564 05:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:20:08.564 05:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:08.564 05:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:08.564 05:36:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:08.564 05:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:08.564 05:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:08.564 05:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:08.564 05:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.564 05:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.823 05:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.823 05:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:08.823 05:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:08.823 05:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:09.082 00:20:09.082 05:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:09.082 05:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:09.082 05:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:09.341 05:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:09.341 05:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:09.341 05:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.341 05:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.341 05:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.341 05:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:09.341 { 00:20:09.341 "cntlid": 47, 00:20:09.341 "qid": 0, 00:20:09.341 "state": "enabled", 00:20:09.341 "thread": "nvmf_tgt_poll_group_000", 00:20:09.341 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:09.341 "listen_address": { 00:20:09.341 "trtype": "TCP", 00:20:09.341 "adrfam": "IPv4", 00:20:09.341 "traddr": "10.0.0.2", 00:20:09.341 "trsvcid": "4420" 00:20:09.341 }, 00:20:09.341 "peer_address": { 00:20:09.341 "trtype": "TCP", 00:20:09.341 "adrfam": "IPv4", 00:20:09.341 "traddr": "10.0.0.1", 00:20:09.341 "trsvcid": "36970" 00:20:09.341 }, 00:20:09.341 "auth": { 00:20:09.341 "state": "completed", 00:20:09.341 
"digest": "sha256", 00:20:09.341 "dhgroup": "ffdhe8192" 00:20:09.341 } 00:20:09.341 } 00:20:09.341 ]' 00:20:09.341 05:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:09.341 05:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:09.341 05:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:09.600 05:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:09.600 05:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:09.600 05:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:09.600 05:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:09.600 05:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:09.600 05:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzE0NGI3NDJhMDJjOTcwYzZhYzExNThlOTUwMmEyOGY4NDI1NzFhYzc5ZWViM2IzMzRlNWRiZTJjYTY1ZmE4NBJ0s7I=: 00:20:09.600 05:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YzE0NGI3NDJhMDJjOTcwYzZhYzExNThlOTUwMmEyOGY4NDI1NzFhYzc5ZWViM2IzMzRlNWRiZTJjYTY1ZmE4NBJ0s7I=: 00:20:10.168 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:10.168 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:10.168 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:10.168 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.168 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.168 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.168 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:20:10.168 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:10.168 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:10.168 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:10.168 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:10.427 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:20:10.427 05:36:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:10.427 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:10.427 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:10.427 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:10.427 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:10.427 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:10.427 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.427 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.427 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.427 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:10.427 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:10.427 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:10.686 00:20:10.686 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:10.686 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:10.686 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:10.943 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:10.944 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:10.944 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.944 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.944 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.944 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:10.944 { 00:20:10.944 "cntlid": 49, 00:20:10.944 "qid": 0, 00:20:10.944 "state": "enabled", 00:20:10.944 "thread": "nvmf_tgt_poll_group_000", 00:20:10.944 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:10.944 "listen_address": { 00:20:10.944 "trtype": "TCP", 00:20:10.944 "adrfam": "IPv4", 
00:20:10.944 "traddr": "10.0.0.2", 00:20:10.944 "trsvcid": "4420" 00:20:10.944 }, 00:20:10.944 "peer_address": { 00:20:10.944 "trtype": "TCP", 00:20:10.944 "adrfam": "IPv4", 00:20:10.944 "traddr": "10.0.0.1", 00:20:10.944 "trsvcid": "36998" 00:20:10.944 }, 00:20:10.944 "auth": { 00:20:10.944 "state": "completed", 00:20:10.944 "digest": "sha384", 00:20:10.944 "dhgroup": "null" 00:20:10.944 } 00:20:10.944 } 00:20:10.944 ]' 00:20:10.944 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:10.944 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:10.944 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:10.944 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:10.944 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:11.202 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:11.202 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:11.202 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:11.202 05:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:N2MyMmE5Mzc2MGYxMTFkMDQ3ZGJhZDNmNGRiZjE3MzQ4N2U1ZDk0NTUzMGM4NGIwp1HX7Q==: --dhchap-ctrl-secret DHHC-1:03:OTk1MjczZTZmOTMwNTk3OGQ5OTE1OWZkNzlmYWYwY2RhNDZjMjExNmE5YThiNTlhMmQ1NjFmNzFlMWExY2M5ZNuJBNw=: 00:20:11.202 05:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:N2MyMmE5Mzc2MGYxMTFkMDQ3ZGJhZDNmNGRiZjE3MzQ4N2U1ZDk0NTUzMGM4NGIwp1HX7Q==: --dhchap-ctrl-secret DHHC-1:03:OTk1MjczZTZmOTMwNTk3OGQ5OTE1OWZkNzlmYWYwY2RhNDZjMjExNmE5YThiNTlhMmQ1NjFmNzFlMWExY2M5ZNuJBNw=: 00:20:11.770 05:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:11.770 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:11.770 05:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:11.770 05:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.770 05:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.770 05:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.770 05:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:11.770 05:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:11.770 05:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:12.029 05:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:20:12.029 05:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:12.029 05:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:12.029 05:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:12.029 05:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:12.029 05:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:12.029 05:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:12.029 05:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.029 05:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.029 05:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.029 05:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:12.029 05:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:12.029 05:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:12.287 00:20:12.287 05:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:12.287 05:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:12.287 05:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:12.546 05:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:12.546 05:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:12.546 05:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.546 05:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.546 05:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.546 05:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:12.546 { 00:20:12.546 "cntlid": 51, 00:20:12.546 "qid": 0, 00:20:12.546 "state": "enabled", 
00:20:12.546 "thread": "nvmf_tgt_poll_group_000", 00:20:12.546 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:12.546 "listen_address": { 00:20:12.546 "trtype": "TCP", 00:20:12.546 "adrfam": "IPv4", 00:20:12.546 "traddr": "10.0.0.2", 00:20:12.546 "trsvcid": "4420" 00:20:12.546 }, 00:20:12.546 "peer_address": { 00:20:12.546 "trtype": "TCP", 00:20:12.546 "adrfam": "IPv4", 00:20:12.546 "traddr": "10.0.0.1", 00:20:12.546 "trsvcid": "37018" 00:20:12.546 }, 00:20:12.546 "auth": { 00:20:12.546 "state": "completed", 00:20:12.546 "digest": "sha384", 00:20:12.546 "dhgroup": "null" 00:20:12.546 } 00:20:12.546 } 00:20:12.546 ]' 00:20:12.546 05:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:12.546 05:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:12.546 05:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:12.546 05:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:12.546 05:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:12.546 05:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:12.546 05:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:12.546 05:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:12.805 05:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTc0YWM3MjQxYWQ5ODgwZGJkZWY0N2I2OGZkMzE0OGKT21qS: --dhchap-ctrl-secret DHHC-1:02:Nzk0Zjc2MDc3NDIyNTA3OWUxNzI5ZDNlZTVlMDgzNTdhMmYzMjExYzg3NmE2YmQw3CciCg==: 00:20:12.805 05:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OTc0YWM3MjQxYWQ5ODgwZGJkZWY0N2I2OGZkMzE0OGKT21qS: --dhchap-ctrl-secret DHHC-1:02:Nzk0Zjc2MDc3NDIyNTA3OWUxNzI5ZDNlZTVlMDgzNTdhMmYzMjExYzg3NmE2YmQw3CciCg==: 00:20:13.372 05:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:13.372 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:13.372 05:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:13.372 05:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.372 05:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.372 05:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.372 05:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:13.372 05:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 
00:20:13.372 05:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:13.631 05:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:20:13.631 05:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:13.631 05:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:13.631 05:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:13.631 05:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:13.631 05:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:13.631 05:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:13.631 05:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.631 05:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.631 05:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.631 05:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:13.631 05:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:13.631 05:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:13.890 00:20:13.890 05:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:13.890 05:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:13.890 05:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:14.149 05:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:14.149 05:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:14.149 05:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.149 05:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.149 05:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.149 05:36:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:14.149 { 00:20:14.149 "cntlid": 53, 00:20:14.149 "qid": 0, 00:20:14.149 "state": "enabled", 00:20:14.149 "thread": "nvmf_tgt_poll_group_000", 00:20:14.149 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:14.149 "listen_address": { 00:20:14.149 "trtype": "TCP", 00:20:14.149 "adrfam": "IPv4", 00:20:14.149 "traddr": "10.0.0.2", 00:20:14.149 "trsvcid": "4420" 00:20:14.149 }, 00:20:14.149 "peer_address": { 00:20:14.149 "trtype": "TCP", 00:20:14.149 "adrfam": "IPv4", 00:20:14.149 "traddr": "10.0.0.1", 00:20:14.149 "trsvcid": "37034" 00:20:14.149 }, 00:20:14.149 "auth": { 00:20:14.149 "state": "completed", 00:20:14.149 "digest": "sha384", 00:20:14.149 "dhgroup": "null" 00:20:14.149 } 00:20:14.149 } 00:20:14.149 ]' 00:20:14.149 05:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:14.149 05:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:14.149 05:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:14.149 05:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:14.149 05:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:14.149 05:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:14.149 05:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:14.149 05:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:14.407 05:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzMxMTU3M2Q4ZDdhNWZjNmMyMjI3YjA5MDljMmNhNGU2OTY3ODQzNDEwZGQwNWM2zlgrgA==: --dhchap-ctrl-secret DHHC-1:01:YTM2NjMxNDU0YTBlNmZmZjE5OGRhZDcyZDI5MTk3ZGQ9linJ: 00:20:14.407 05:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MzMxMTU3M2Q4ZDdhNWZjNmMyMjI3YjA5MDljMmNhNGU2OTY3ODQzNDEwZGQwNWM2zlgrgA==: --dhchap-ctrl-secret DHHC-1:01:YTM2NjMxNDU0YTBlNmZmZjE5OGRhZDcyZDI5MTk3ZGQ9linJ: 00:20:14.974 05:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:14.974 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:14.974 05:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:14.974 05:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.974 05:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.974 05:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.974 05:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in 
"${!keys[@]}" 00:20:14.974 05:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:14.975 05:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:15.237 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:20:15.237 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:15.237 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:15.237 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:15.237 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:15.238 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:15.238 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:15.238 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.238 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.238 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.238 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:15.238 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:15.238 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:15.499 00:20:15.499 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:15.499 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:15.499 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:15.499 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:15.499 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:15.499 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.499 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.499 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.499 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:15.499 { 00:20:15.499 "cntlid": 55, 00:20:15.499 "qid": 0, 00:20:15.499 "state": "enabled", 00:20:15.499 "thread": "nvmf_tgt_poll_group_000", 00:20:15.499 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:15.499 "listen_address": { 00:20:15.499 "trtype": "TCP", 00:20:15.499 "adrfam": "IPv4", 00:20:15.499 "traddr": "10.0.0.2", 00:20:15.499 "trsvcid": "4420" 00:20:15.499 }, 00:20:15.499 "peer_address": { 00:20:15.499 "trtype": "TCP", 00:20:15.500 "adrfam": "IPv4", 00:20:15.500 "traddr": "10.0.0.1", 00:20:15.500 "trsvcid": "37066" 00:20:15.500 }, 00:20:15.500 "auth": { 00:20:15.500 "state": "completed", 00:20:15.500 "digest": "sha384", 00:20:15.500 "dhgroup": "null" 00:20:15.500 } 00:20:15.500 } 00:20:15.500 ]' 00:20:15.500 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:15.759 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:15.759 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:15.759 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:15.759 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:15.759 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:15.759 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:15.759 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:16.017 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzE0NGI3NDJhMDJjOTcwYzZhYzExNThlOTUwMmEyOGY4NDI1NzFhYzc5ZWViM2IzMzRlNWRiZTJjYTY1ZmE4NBJ0s7I=: 00:20:16.017 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YzE0NGI3NDJhMDJjOTcwYzZhYzExNThlOTUwMmEyOGY4NDI1NzFhYzc5ZWViM2IzMzRlNWRiZTJjYTY1ZmE4NBJ0s7I=: 00:20:16.584 05:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:16.584 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:16.584 05:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:16.584 05:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.584 05:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.584 05:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.584 05:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:16.584 05:36:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:16.584 05:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:16.584 05:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:16.584 05:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:20:16.584 05:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:16.584 05:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:16.584 05:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:16.584 05:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:16.584 05:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:16.584 05:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:16.584 05:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.584 05:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.843 05:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.843 05:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:16.843 05:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:16.843 05:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:16.843 00:20:16.843 05:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:16.843 05:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:16.843 05:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:17.102 05:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:17.102 05:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:17.102 05:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:20:17.102 05:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.102 05:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.102 05:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:17.102 { 00:20:17.102 "cntlid": 57, 00:20:17.102 "qid": 0, 00:20:17.102 "state": "enabled", 00:20:17.102 "thread": "nvmf_tgt_poll_group_000", 00:20:17.102 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:17.102 "listen_address": { 00:20:17.102 "trtype": "TCP", 00:20:17.102 "adrfam": "IPv4", 00:20:17.102 "traddr": "10.0.0.2", 00:20:17.102 "trsvcid": "4420" 00:20:17.102 }, 00:20:17.102 "peer_address": { 00:20:17.102 "trtype": "TCP", 00:20:17.102 "adrfam": "IPv4", 00:20:17.102 "traddr": "10.0.0.1", 00:20:17.102 "trsvcid": "49574" 00:20:17.102 }, 00:20:17.102 "auth": { 00:20:17.102 "state": "completed", 00:20:17.102 "digest": "sha384", 00:20:17.102 "dhgroup": "ffdhe2048" 00:20:17.102 } 00:20:17.102 } 00:20:17.102 ]' 00:20:17.102 05:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:17.102 05:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:17.102 05:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:17.361 05:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:17.361 05:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:17.361 05:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:17.361 05:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:17.361 05:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:17.619 05:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:N2MyMmE5Mzc2MGYxMTFkMDQ3ZGJhZDNmNGRiZjE3MzQ4N2U1ZDk0NTUzMGM4NGIwp1HX7Q==: --dhchap-ctrl-secret DHHC-1:03:OTk1MjczZTZmOTMwNTk3OGQ5OTE1OWZkNzlmYWYwY2RhNDZjMjExNmE5YThiNTlhMmQ1NjFmNzFlMWExY2M5ZNuJBNw=: 00:20:17.619 05:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:N2MyMmE5Mzc2MGYxMTFkMDQ3ZGJhZDNmNGRiZjE3MzQ4N2U1ZDk0NTUzMGM4NGIwp1HX7Q==: --dhchap-ctrl-secret DHHC-1:03:OTk1MjczZTZmOTMwNTk3OGQ5OTE1OWZkNzlmYWYwY2RhNDZjMjExNmE5YThiNTlhMmQ1NjFmNzFlMWExY2M5ZNuJBNw=: 00:20:18.186 05:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:18.186 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:18.186 05:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:18.186 05:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.186 05:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.186 05:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.186 05:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:18.186 05:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:18.186 05:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:18.186 05:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:20:18.186 05:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:18.186 05:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:18.186 05:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:18.186 05:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:18.186 05:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:18.186 05:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:18.186 05:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.186 05:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.186 05:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.186 05:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:18.186 05:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:18.186 05:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:18.445 00:20:18.445 05:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:18.445 05:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:18.445 05:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:18.704 05:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:18.704 05:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:18.704 05:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.704 05:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.704 05:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.704 05:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:18.704 { 00:20:18.704 "cntlid": 59, 00:20:18.704 "qid": 0, 00:20:18.704 "state": "enabled", 00:20:18.704 "thread": "nvmf_tgt_poll_group_000", 00:20:18.704 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:18.704 "listen_address": { 00:20:18.704 "trtype": "TCP", 00:20:18.704 "adrfam": "IPv4", 00:20:18.704 "traddr": "10.0.0.2", 00:20:18.704 "trsvcid": "4420" 00:20:18.704 }, 00:20:18.704 "peer_address": { 00:20:18.704 "trtype": "TCP", 00:20:18.704 "adrfam": "IPv4", 00:20:18.704 "traddr": "10.0.0.1", 00:20:18.704 "trsvcid": "49600" 00:20:18.704 }, 00:20:18.704 "auth": { 00:20:18.704 "state": "completed", 00:20:18.704 "digest": "sha384", 00:20:18.704 "dhgroup": "ffdhe2048" 00:20:18.704 } 00:20:18.704 } 00:20:18.704 ]' 00:20:18.704 05:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:18.704 05:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:18.704 05:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:18.963 05:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:18.963 05:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:18.963 05:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:18.963 05:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:18.963 05:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:18.963 05:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTc0YWM3MjQxYWQ5ODgwZGJkZWY0N2I2OGZkMzE0OGKT21qS: --dhchap-ctrl-secret DHHC-1:02:Nzk0Zjc2MDc3NDIyNTA3OWUxNzI5ZDNlZTVlMDgzNTdhMmYzMjExYzg3NmE2YmQw3CciCg==: 00:20:18.963 05:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OTc0YWM3MjQxYWQ5ODgwZGJkZWY0N2I2OGZkMzE0OGKT21qS: --dhchap-ctrl-secret DHHC-1:02:Nzk0Zjc2MDc3NDIyNTA3OWUxNzI5ZDNlZTVlMDgzNTdhMmYzMjExYzg3NmE2YmQw3CciCg==: 00:20:19.530 05:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:19.789 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:19.789 05:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:19.789 05:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.789 05:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.789 05:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.789 05:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:19.789 05:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:19.789 05:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:19.789 05:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:20:19.789 05:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:19.789 05:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:19.789 05:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:19.789 05:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:19.789 05:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:19.789 05:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:19.789 05:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.789 05:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.789 05:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.789 05:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:19.789 05:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:19.789 05:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:20.048 00:20:20.048 05:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:20.048 05:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:20.048 05:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:20.307 05:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:20.307 05:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:20.307 05:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.307 05:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.307 05:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.307 05:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:20.307 { 00:20:20.307 "cntlid": 61, 00:20:20.307 "qid": 0, 00:20:20.307 "state": "enabled", 00:20:20.307 "thread": "nvmf_tgt_poll_group_000", 00:20:20.307 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:20.307 "listen_address": { 00:20:20.307 "trtype": "TCP", 00:20:20.307 "adrfam": "IPv4", 00:20:20.307 "traddr": "10.0.0.2", 00:20:20.307 "trsvcid": "4420" 00:20:20.307 }, 00:20:20.307 "peer_address": { 00:20:20.307 "trtype": "TCP", 00:20:20.307 "adrfam": "IPv4", 00:20:20.307 "traddr": "10.0.0.1", 00:20:20.307 "trsvcid": "49624" 00:20:20.307 }, 00:20:20.307 "auth": { 00:20:20.307 "state": "completed", 00:20:20.307 "digest": "sha384", 00:20:20.307 "dhgroup": "ffdhe2048" 00:20:20.307 } 00:20:20.307 } 00:20:20.307 ]' 00:20:20.307 05:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:20.307 05:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:20.307 05:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:20.307 05:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:20.307 05:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:20.565 05:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:20.565 05:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:20.565 05:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:20.565 05:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzMxMTU3M2Q4ZDdhNWZjNmMyMjI3YjA5MDljMmNhNGU2OTY3ODQzNDEwZGQwNWM2zlgrgA==: --dhchap-ctrl-secret DHHC-1:01:YTM2NjMxNDU0YTBlNmZmZjE5OGRhZDcyZDI5MTk3ZGQ9linJ: 00:20:20.566 05:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MzMxMTU3M2Q4ZDdhNWZjNmMyMjI3YjA5MDljMmNhNGU2OTY3ODQzNDEwZGQwNWM2zlgrgA==: --dhchap-ctrl-secret DHHC-1:01:YTM2NjMxNDU0YTBlNmZmZjE5OGRhZDcyZDI5MTk3ZGQ9linJ: 00:20:21.133 05:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:21.133 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:21.133 05:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:21.133 05:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.133 05:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.133 05:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.392 05:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:21.392 05:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:21.392 05:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:21.392 05:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:20:21.392 05:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:21.392 05:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:21.392 05:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:21.392 05:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:21.392 05:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:21.392 05:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:21.392 05:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.392 05:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.392 05:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.392 05:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:21.392 05:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:21.392 05:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:21.651 00:20:21.651 05:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:21.651 05:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:20:21.651 05:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:21.910 05:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:21.910 05:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:21.910 05:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.910 05:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.910 05:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.910 05:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:21.910 { 00:20:21.910 "cntlid": 63, 00:20:21.910 "qid": 0, 00:20:21.910 "state": "enabled", 00:20:21.910 "thread": "nvmf_tgt_poll_group_000", 00:20:21.910 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:21.910 "listen_address": { 00:20:21.910 "trtype": "TCP", 00:20:21.910 "adrfam": "IPv4", 00:20:21.910 "traddr": "10.0.0.2", 00:20:21.910 "trsvcid": "4420" 00:20:21.910 }, 00:20:21.910 "peer_address": { 00:20:21.910 "trtype": "TCP", 00:20:21.910 "adrfam": "IPv4", 00:20:21.910 "traddr": "10.0.0.1", 00:20:21.910 "trsvcid": "49658" 00:20:21.910 }, 00:20:21.910 "auth": { 00:20:21.910 "state": "completed", 00:20:21.910 "digest": "sha384", 00:20:21.910 "dhgroup": "ffdhe2048" 00:20:21.910 } 00:20:21.910 } 00:20:21.910 ]' 00:20:21.910 05:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:21.910 05:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:21.910 05:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:21.910 05:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:21.910 05:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:22.169 05:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:22.169 05:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:22.169 05:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:22.169 05:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzE0NGI3NDJhMDJjOTcwYzZhYzExNThlOTUwMmEyOGY4NDI1NzFhYzc5ZWViM2IzMzRlNWRiZTJjYTY1ZmE4NBJ0s7I=: 00:20:22.169 05:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YzE0NGI3NDJhMDJjOTcwYzZhYzExNThlOTUwMmEyOGY4NDI1NzFhYzc5ZWViM2IzMzRlNWRiZTJjYTY1ZmE4NBJ0s7I=: 00:20:22.735 05:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:20:22.735 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:22.735 05:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:22.735 05:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.735 05:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.735 05:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.735 05:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:22.735 05:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:22.735 05:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:22.735 05:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:22.994 05:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:20:22.994 05:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:22.994 05:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:22.994 05:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:22.994 05:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:22.994 05:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:22.994 05:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:22.994 05:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.994 05:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.994 05:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.994 05:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:22.994 05:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:22.994 05:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:23.253 
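
For readability, here is a condensed sketch of the single iteration the trace keeps repeating (target/auth.sh@120-@123 plus @60-@71): restrict the host-side DH-HMAC-CHAP allow-lists, register the host's keys on the target subsystem, then authenticate by attaching a controller through the host RPC server. The rpc.py path, socket, NQNs and host UUID are copied from the trace; the key0/ckey0 names are key handles the test registered earlier in the run, and routing the target-side call through the default RPC socket (no -s) is an assumption about what the test's rpc_cmd wrapper does.

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562

    # host side: only offer the digest/dhgroup pair under test
    "$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072

    # target side: allow this host and bind its DH-HMAC-CHAP key pair
    "$rpc" nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # host side: attach a controller; authentication runs during connect
    "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 \
        -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
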
00:20:23.253 05:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:23.253 05:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:23.253 05:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:23.511 05:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:23.511 05:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:23.511 05:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.511 05:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.511 05:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.511 05:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:23.511 { 00:20:23.511 "cntlid": 65, 00:20:23.511 "qid": 0, 00:20:23.511 "state": "enabled", 00:20:23.511 "thread": "nvmf_tgt_poll_group_000", 00:20:23.511 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:23.511 "listen_address": { 00:20:23.511 "trtype": "TCP", 00:20:23.511 "adrfam": "IPv4", 00:20:23.511 "traddr": "10.0.0.2", 00:20:23.511 "trsvcid": "4420" 00:20:23.511 }, 00:20:23.511 "peer_address": { 00:20:23.511 "trtype": "TCP", 00:20:23.511 "adrfam": "IPv4", 00:20:23.511 "traddr": "10.0.0.1", 00:20:23.511 "trsvcid": "49686" 00:20:23.511 }, 00:20:23.511 "auth": { 00:20:23.511 "state": "completed", 00:20:23.511 "digest": "sha384", 00:20:23.511 "dhgroup": "ffdhe3072" 00:20:23.511 } 00:20:23.511 } 00:20:23.511 ]' 00:20:23.511 05:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:23.511 05:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:23.511 05:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:23.511 05:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:23.511 05:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:23.769 05:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:23.769 05:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:23.769 05:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:23.769 05:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:N2MyMmE5Mzc2MGYxMTFkMDQ3ZGJhZDNmNGRiZjE3MzQ4N2U1ZDk0NTUzMGM4NGIwp1HX7Q==: --dhchap-ctrl-secret DHHC-1:03:OTk1MjczZTZmOTMwNTk3OGQ5OTE1OWZkNzlmYWYwY2RhNDZjMjExNmE5YThiNTlhMmQ1NjFmNzFlMWExY2M5ZNuJBNw=: 00:20:23.769 05:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:N2MyMmE5Mzc2MGYxMTFkMDQ3ZGJhZDNmNGRiZjE3MzQ4N2U1ZDk0NTUzMGM4NGIwp1HX7Q==: --dhchap-ctrl-secret DHHC-1:03:OTk1MjczZTZmOTMwNTk3OGQ5OTE1OWZkNzlmYWYwY2RhNDZjMjExNmE5YThiNTlhMmQ1NjFmNzFlMWExY2M5ZNuJBNw=: 00:20:24.335 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:24.335 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:24.335 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:24.335 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.335 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.335 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.335 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:24.335 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:24.335 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:24.594 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:20:24.594 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:24.594 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:24.594 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:24.594 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:24.594 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:24.594 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:24.594 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.594 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.594 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.594 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:24.594 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:24.594 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:24.852 00:20:24.852 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:24.852 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:24.852 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:25.111 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:25.111 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:25.111 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.111 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.111 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.111 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:25.111 { 00:20:25.111 "cntlid": 67, 00:20:25.111 "qid": 0, 00:20:25.111 "state": "enabled", 00:20:25.111 "thread": "nvmf_tgt_poll_group_000", 00:20:25.111 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:25.111 "listen_address": { 00:20:25.111 "trtype": "TCP", 00:20:25.111 "adrfam": "IPv4", 00:20:25.111 "traddr": "10.0.0.2", 00:20:25.111 "trsvcid": "4420" 00:20:25.111 }, 00:20:25.111 "peer_address": { 00:20:25.111 "trtype": "TCP", 00:20:25.111 "adrfam": "IPv4", 00:20:25.111 "traddr": "10.0.0.1", 00:20:25.111 "trsvcid": "49710" 00:20:25.111 }, 00:20:25.111 "auth": { 00:20:25.111 "state": "completed", 00:20:25.111 "digest": "sha384", 00:20:25.111 "dhgroup": "ffdhe3072" 00:20:25.111 } 00:20:25.111 } 00:20:25.111 ]' 00:20:25.111 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:25.111 05:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:25.111 05:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:25.111 05:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:25.111 05:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:25.111 05:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:25.111 05:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:25.111 05:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:25.370 05:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTc0YWM3MjQxYWQ5ODgwZGJkZWY0N2I2OGZkMzE0OGKT21qS: --dhchap-ctrl-secret 
DHHC-1:02:Nzk0Zjc2MDc3NDIyNTA3OWUxNzI5ZDNlZTVlMDgzNTdhMmYzMjExYzg3NmE2YmQw3CciCg==: 00:20:25.370 05:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OTc0YWM3MjQxYWQ5ODgwZGJkZWY0N2I2OGZkMzE0OGKT21qS: --dhchap-ctrl-secret DHHC-1:02:Nzk0Zjc2MDc3NDIyNTA3OWUxNzI5ZDNlZTVlMDgzNTdhMmYzMjExYzg3NmE2YmQw3CciCg==: 00:20:25.937 05:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:25.937 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:25.938 05:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:25.938 05:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.938 05:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.938 05:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.938 05:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:25.938 05:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:25.938 05:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:26.197 05:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:20:26.197 05:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:26.197 05:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:26.197 05:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:26.197 05:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:26.197 05:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:26.197 05:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:26.197 05:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.197 05:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.197 05:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.197 05:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:26.197 05:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:26.197 05:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:26.455 00:20:26.455 05:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:26.455 05:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:26.455 05:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:26.714 05:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:26.714 05:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:26.714 05:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.714 05:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.714 05:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.714 05:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:26.714 { 00:20:26.714 "cntlid": 69, 00:20:26.714 "qid": 0, 00:20:26.714 "state": "enabled", 00:20:26.714 "thread": "nvmf_tgt_poll_group_000", 00:20:26.714 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:26.714 "listen_address": { 00:20:26.714 "trtype": "TCP", 00:20:26.714 "adrfam": "IPv4", 00:20:26.714 "traddr": "10.0.0.2", 00:20:26.714 "trsvcid": "4420" 00:20:26.714 }, 00:20:26.714 "peer_address": { 00:20:26.714 "trtype": "TCP", 00:20:26.714 "adrfam": "IPv4", 00:20:26.714 "traddr": "10.0.0.1", 00:20:26.714 "trsvcid": "49730" 00:20:26.714 }, 00:20:26.714 "auth": { 00:20:26.714 "state": "completed", 00:20:26.714 "digest": "sha384", 00:20:26.714 "dhgroup": "ffdhe3072" 00:20:26.714 } 00:20:26.714 } 00:20:26.714 ]' 00:20:26.714 05:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:26.714 05:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:26.714 05:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:26.714 05:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:26.714 05:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:26.714 05:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:26.714 05:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:26.714 05:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:20:26.973 05:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzMxMTU3M2Q4ZDdhNWZjNmMyMjI3YjA5MDljMmNhNGU2OTY3ODQzNDEwZGQwNWM2zlgrgA==: --dhchap-ctrl-secret DHHC-1:01:YTM2NjMxNDU0YTBlNmZmZjE5OGRhZDcyZDI5MTk3ZGQ9linJ: 00:20:26.973 05:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MzMxMTU3M2Q4ZDdhNWZjNmMyMjI3YjA5MDljMmNhNGU2OTY3ODQzNDEwZGQwNWM2zlgrgA==: --dhchap-ctrl-secret DHHC-1:01:YTM2NjMxNDU0YTBlNmZmZjE5OGRhZDcyZDI5MTk3ZGQ9linJ: 00:20:27.541 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:27.541 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:27.541 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:27.541 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.541 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.541 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.541 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:27.541 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:27.541 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:27.800 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:20:27.800 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:27.800 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:27.800 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:27.800 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:27.800 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:27.800 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:27.800 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.800 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.800 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.800 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 
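
The key3 iterations above differ from the others in one detail: nvmf_subsystem_add_host and the attach are issued with --dhchap-key key3 only, no --dhchap-ctrlr-key. That follows from the expansion visible at target/auth.sh@68, which emits the controller-key argument only when a controller key exists for that index, so key3 exercises unidirectional authentication (the host proves itself but does not challenge the controller back). A minimal sketch of that pattern, with keyid standing in for the function's $3 positional parameter and subnqn/hostnqn as placeholder names:

    # ckeys[3] is empty, so the :+ expansion yields nothing and the
    # ctrlr-key argument is simply omitted for this iteration
    ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
    rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key "key$keyid" "${ckey[@]}"
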
00:20:27.800 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:27.800 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:28.059 00:20:28.059 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:28.059 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:28.059 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:28.317 05:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:28.317 05:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:28.317 05:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.317 05:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.317 05:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.317 05:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:28.317 { 00:20:28.317 "cntlid": 71, 00:20:28.317 "qid": 0, 00:20:28.317 "state": "enabled", 00:20:28.317 "thread": "nvmf_tgt_poll_group_000", 00:20:28.317 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:28.317 "listen_address": { 00:20:28.317 "trtype": "TCP", 00:20:28.317 "adrfam": "IPv4", 00:20:28.317 "traddr": "10.0.0.2", 00:20:28.317 "trsvcid": "4420" 00:20:28.317 }, 00:20:28.317 "peer_address": { 00:20:28.317 "trtype": "TCP", 00:20:28.317 "adrfam": "IPv4", 00:20:28.317 "traddr": "10.0.0.1", 00:20:28.317 "trsvcid": "46138" 00:20:28.317 }, 00:20:28.317 "auth": { 00:20:28.317 "state": "completed", 00:20:28.317 "digest": "sha384", 00:20:28.317 "dhgroup": "ffdhe3072" 00:20:28.317 } 00:20:28.317 } 00:20:28.317 ]' 00:20:28.317 05:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:28.317 05:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:28.317 05:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:28.317 05:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:28.317 05:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:28.317 05:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:28.317 05:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:28.317 05:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:28.576 05:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzE0NGI3NDJhMDJjOTcwYzZhYzExNThlOTUwMmEyOGY4NDI1NzFhYzc5ZWViM2IzMzRlNWRiZTJjYTY1ZmE4NBJ0s7I=: 00:20:28.576 05:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YzE0NGI3NDJhMDJjOTcwYzZhYzExNThlOTUwMmEyOGY4NDI1NzFhYzc5ZWViM2IzMzRlNWRiZTJjYTY1ZmE4NBJ0s7I=: 00:20:29.143 05:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:29.143 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:29.143 05:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:29.143 05:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.143 05:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.143 05:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.143 05:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:29.143 05:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:29.143 05:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:29.143 05:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:29.402 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:20:29.402 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:29.402 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:29.402 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:29.402 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:29.402 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:29.402 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:29.402 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.402 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.402 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
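
With ffdhe3072 exhausted, the outer loop at @119 advances to ffdhe4096 and the whole key sweep restarts. The overall shape of the sweep, reconstructed from the @119-@123 markers in the trace (the dhgroup and key lists below are only those observed in the log so far, and hostrpc/connect_authenticate are the test script's own functions):

    for dhgroup in null ffdhe2048 ffdhe3072 ffdhe4096; do        # @119
        for keyid in 0 1 2 3; do                                 # @120
            # the host is reconfigured before every attach, since the
            # negotiation only offers what these allow-lists contain  @121
            hostrpc bdev_nvme_set_options \
                --dhchap-digests sha384 --dhchap-dhgroups "$dhgroup"
            connect_authenticate sha384 "$dhgroup" "$keyid"      # @123
        done
    done
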
00:20:29.402 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:29.402 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:29.402 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:29.661 00:20:29.661 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:29.661 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:29.661 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:29.661 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:29.661 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:29.661 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.661 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.920 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.920 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:29.920 { 00:20:29.920 "cntlid": 73, 00:20:29.920 "qid": 0, 00:20:29.920 "state": "enabled", 00:20:29.920 "thread": "nvmf_tgt_poll_group_000", 00:20:29.920 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:29.920 "listen_address": { 00:20:29.920 "trtype": "TCP", 00:20:29.920 "adrfam": "IPv4", 00:20:29.920 "traddr": "10.0.0.2", 00:20:29.920 "trsvcid": "4420" 00:20:29.920 }, 00:20:29.920 "peer_address": { 00:20:29.920 "trtype": "TCP", 00:20:29.920 "adrfam": "IPv4", 00:20:29.920 "traddr": "10.0.0.1", 00:20:29.920 "trsvcid": "46170" 00:20:29.920 }, 00:20:29.920 "auth": { 00:20:29.920 "state": "completed", 00:20:29.920 "digest": "sha384", 00:20:29.920 "dhgroup": "ffdhe4096" 00:20:29.920 } 00:20:29.920 } 00:20:29.920 ]' 00:20:29.920 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:29.920 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:29.920 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:29.920 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:29.920 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:29.920 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:29.920 
05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:29.920 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:30.179 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:N2MyMmE5Mzc2MGYxMTFkMDQ3ZGJhZDNmNGRiZjE3MzQ4N2U1ZDk0NTUzMGM4NGIwp1HX7Q==: --dhchap-ctrl-secret DHHC-1:03:OTk1MjczZTZmOTMwNTk3OGQ5OTE1OWZkNzlmYWYwY2RhNDZjMjExNmE5YThiNTlhMmQ1NjFmNzFlMWExY2M5ZNuJBNw=: 00:20:30.179 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:N2MyMmE5Mzc2MGYxMTFkMDQ3ZGJhZDNmNGRiZjE3MzQ4N2U1ZDk0NTUzMGM4NGIwp1HX7Q==: --dhchap-ctrl-secret DHHC-1:03:OTk1MjczZTZmOTMwNTk3OGQ5OTE1OWZkNzlmYWYwY2RhNDZjMjExNmE5YThiNTlhMmQ1NjFmNzFlMWExY2M5ZNuJBNw=: 00:20:30.746 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:30.746 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:30.746 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:30.746 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.746 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.746 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.746 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:30.747 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:30.747 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:31.005 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:20:31.005 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:31.005 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:31.005 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:31.005 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:31.005 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:31.005 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:31.005 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.005 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.005 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.005 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:31.005 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:31.005 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:31.264 00:20:31.264 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:31.264 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:31.264 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:31.523 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:31.523 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:31.523 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.523 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.523 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.523 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:31.523 { 00:20:31.523 "cntlid": 75, 00:20:31.523 "qid": 0, 00:20:31.523 "state": "enabled", 00:20:31.523 "thread": "nvmf_tgt_poll_group_000", 00:20:31.523 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:31.523 "listen_address": { 00:20:31.523 "trtype": "TCP", 00:20:31.523 "adrfam": "IPv4", 00:20:31.523 "traddr": "10.0.0.2", 00:20:31.523 "trsvcid": "4420" 00:20:31.523 }, 00:20:31.523 "peer_address": { 00:20:31.523 "trtype": "TCP", 00:20:31.523 "adrfam": "IPv4", 00:20:31.523 "traddr": "10.0.0.1", 00:20:31.523 "trsvcid": "46194" 00:20:31.523 }, 00:20:31.523 "auth": { 00:20:31.523 "state": "completed", 00:20:31.523 "digest": "sha384", 00:20:31.523 "dhgroup": "ffdhe4096" 00:20:31.523 } 00:20:31.523 } 00:20:31.523 ]' 00:20:31.523 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:31.523 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:31.523 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:31.523 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == 
\f\f\d\h\e\4\0\9\6 ]] 00:20:31.523 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:31.523 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:31.523 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:31.523 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:31.782 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTc0YWM3MjQxYWQ5ODgwZGJkZWY0N2I2OGZkMzE0OGKT21qS: --dhchap-ctrl-secret DHHC-1:02:Nzk0Zjc2MDc3NDIyNTA3OWUxNzI5ZDNlZTVlMDgzNTdhMmYzMjExYzg3NmE2YmQw3CciCg==: 00:20:31.782 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OTc0YWM3MjQxYWQ5ODgwZGJkZWY0N2I2OGZkMzE0OGKT21qS: --dhchap-ctrl-secret DHHC-1:02:Nzk0Zjc2MDc3NDIyNTA3OWUxNzI5ZDNlZTVlMDgzNTdhMmYzMjExYzg3NmE2YmQw3CciCg==: 00:20:32.349 05:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:32.349 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:32.349 05:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:32.349 05:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.349 05:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.349 05:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.349 05:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:32.349 05:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:32.349 05:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:32.608 05:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:20:32.608 05:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:32.608 05:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:32.608 05:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:32.608 05:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:32.608 05:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:32.608 05:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:32.608 05:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.608 05:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.608 05:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.608 05:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:32.608 05:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:32.608 05:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:32.867 00:20:32.867 05:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:32.867 05:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:32.867 05:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:33.125 05:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:33.125 05:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:33.125 05:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.125 05:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.125 05:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.125 05:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:33.125 { 00:20:33.125 "cntlid": 77, 00:20:33.125 "qid": 0, 00:20:33.125 "state": "enabled", 00:20:33.125 "thread": "nvmf_tgt_poll_group_000", 00:20:33.125 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:33.126 "listen_address": { 00:20:33.126 "trtype": "TCP", 00:20:33.126 "adrfam": "IPv4", 00:20:33.126 "traddr": "10.0.0.2", 00:20:33.126 "trsvcid": "4420" 00:20:33.126 }, 00:20:33.126 "peer_address": { 00:20:33.126 "trtype": "TCP", 00:20:33.126 "adrfam": "IPv4", 00:20:33.126 "traddr": "10.0.0.1", 00:20:33.126 "trsvcid": "46230" 00:20:33.126 }, 00:20:33.126 "auth": { 00:20:33.126 "state": "completed", 00:20:33.126 "digest": "sha384", 00:20:33.126 "dhgroup": "ffdhe4096" 00:20:33.126 } 00:20:33.126 } 00:20:33.126 ]' 00:20:33.126 05:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:33.126 05:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:33.126 05:36:32 
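Besides the SPDK initiator, every iteration exercises the Linux kernel initiator through the nvme_connect helper (target/auth.sh@36). The secrets are passed verbatim to nvme-cli; in the DHHC-1:NN:...: representation the middle field records how the secret was transformed (00 unhashed, 01 SHA-256, 02 SHA-384, 03 SHA-512). A sketch with placeholder secrets, since the real DHHC-1 strings are generated earlier in the run:

    hostid=80b56b8f-cbc7-e911-906e-0017a4403562
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:$hostid

    # <host-key>/<ctrl-key> are placeholders for the generated secrets
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 \
        -i 1 -l 0 -q "$hostnqn" --hostid "$hostid" \
        --dhchap-secret "DHHC-1:01:<host-key>" \
        --dhchap-ctrl-secret "DHHC-1:02:<ctrl-key>"
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0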
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:33.126 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:33.126 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:33.126 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:33.126 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:33.126 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:33.384 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzMxMTU3M2Q4ZDdhNWZjNmMyMjI3YjA5MDljMmNhNGU2OTY3ODQzNDEwZGQwNWM2zlgrgA==: --dhchap-ctrl-secret DHHC-1:01:YTM2NjMxNDU0YTBlNmZmZjE5OGRhZDcyZDI5MTk3ZGQ9linJ: 00:20:33.384 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MzMxMTU3M2Q4ZDdhNWZjNmMyMjI3YjA5MDljMmNhNGU2OTY3ODQzNDEwZGQwNWM2zlgrgA==: --dhchap-ctrl-secret DHHC-1:01:YTM2NjMxNDU0YTBlNmZmZjE5OGRhZDcyZDI5MTk3ZGQ9linJ: 00:20:33.952 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:33.952 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:33.952 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:33.952 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.952 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.952 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.952 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:33.952 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:33.952 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:34.210 05:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:20:34.210 05:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:34.210 05:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:34.210 05:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:34.210 05:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:34.210 05:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:34.210 05:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:34.210 05:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.210 05:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.210 05:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.210 05:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:34.210 05:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:34.210 05:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:34.469 00:20:34.469 05:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:34.469 05:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:34.469 05:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:34.728 05:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:34.728 05:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:34.728 05:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.728 05:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.728 05:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.728 05:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:34.728 { 00:20:34.728 "cntlid": 79, 00:20:34.728 "qid": 0, 00:20:34.728 "state": "enabled", 00:20:34.728 "thread": "nvmf_tgt_poll_group_000", 00:20:34.728 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:34.728 "listen_address": { 00:20:34.728 "trtype": "TCP", 00:20:34.728 "adrfam": "IPv4", 00:20:34.728 "traddr": "10.0.0.2", 00:20:34.728 "trsvcid": "4420" 00:20:34.728 }, 00:20:34.728 "peer_address": { 00:20:34.728 "trtype": "TCP", 00:20:34.728 "adrfam": "IPv4", 00:20:34.728 "traddr": "10.0.0.1", 00:20:34.728 "trsvcid": "46272" 00:20:34.728 }, 00:20:34.728 "auth": { 00:20:34.728 "state": "completed", 00:20:34.728 "digest": "sha384", 00:20:34.728 "dhgroup": "ffdhe4096" 00:20:34.728 } 00:20:34.728 } 00:20:34.728 ]' 00:20:34.728 05:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:34.728 05:36:34 
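Note that key3 is registered with --dhchap-key key3 alone: target/auth.sh@68 builds the ckey array with bash's :+ alternate-value expansion, so the --dhchap-ctrlr-key flag only materializes for key indices that have a matching controller key, and there is no ckey3. The idiom, sketched with an illustrative keyid variable standing in for the function's positional $3:

    keyid=3
    # empty when ckeys[keyid] is unset; otherwise the flag/value pair
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"})

    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
        --dhchap-key "key$keyid" "${ckey[@]}"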
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:34.728 05:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:34.728 05:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:34.728 05:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:34.728 05:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:34.728 05:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:34.728 05:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:34.986 05:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzE0NGI3NDJhMDJjOTcwYzZhYzExNThlOTUwMmEyOGY4NDI1NzFhYzc5ZWViM2IzMzRlNWRiZTJjYTY1ZmE4NBJ0s7I=: 00:20:34.986 05:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YzE0NGI3NDJhMDJjOTcwYzZhYzExNThlOTUwMmEyOGY4NDI1NzFhYzc5ZWViM2IzMzRlNWRiZTJjYTY1ZmE4NBJ0s7I=: 00:20:35.556 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:35.556 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:35.556 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:35.556 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.556 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.556 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.556 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:35.556 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:35.556 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:35.556 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:35.814 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:20:35.814 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:35.814 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:35.814 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:35.814 05:36:35 
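Each pass ends with a symmetric teardown before the next key or DH group is tried: the SPDK-side controller is detached right after the qpair checks (target/auth.sh@78), the kernel initiator connects and disconnects (@80-@82), and the host is finally de-authorized from the subsystem (@83), so every iteration starts from a clean slate:

    hostrpc bdev_nvme_detach_controller nvme0     # @78
    # ... kernel-initiator connect/disconnect happens here (@80-@82) ...
    rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"   # @83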
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:35.814 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:35.814 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:35.814 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.814 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.814 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.814 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:35.814 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:35.814 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:36.073 00:20:36.073 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:36.073 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:36.073 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:36.332 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:36.332 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:36.332 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.332 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.332 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.332 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:36.332 { 00:20:36.332 "cntlid": 81, 00:20:36.332 "qid": 0, 00:20:36.332 "state": "enabled", 00:20:36.332 "thread": "nvmf_tgt_poll_group_000", 00:20:36.332 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:36.332 "listen_address": { 00:20:36.332 "trtype": "TCP", 00:20:36.332 "adrfam": "IPv4", 00:20:36.332 "traddr": "10.0.0.2", 00:20:36.332 "trsvcid": "4420" 00:20:36.332 }, 00:20:36.332 "peer_address": { 00:20:36.332 "trtype": "TCP", 00:20:36.332 "adrfam": "IPv4", 00:20:36.332 "traddr": "10.0.0.1", 00:20:36.332 "trsvcid": "46306" 00:20:36.332 }, 00:20:36.332 "auth": { 00:20:36.332 "state": "completed", 00:20:36.332 "digest": 
"sha384", 00:20:36.332 "dhgroup": "ffdhe6144" 00:20:36.332 } 00:20:36.332 } 00:20:36.332 ]' 00:20:36.332 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:36.332 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:36.332 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:36.332 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:36.332 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:36.332 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:36.332 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:36.332 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:36.590 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:N2MyMmE5Mzc2MGYxMTFkMDQ3ZGJhZDNmNGRiZjE3MzQ4N2U1ZDk0NTUzMGM4NGIwp1HX7Q==: --dhchap-ctrl-secret DHHC-1:03:OTk1MjczZTZmOTMwNTk3OGQ5OTE1OWZkNzlmYWYwY2RhNDZjMjExNmE5YThiNTlhMmQ1NjFmNzFlMWExY2M5ZNuJBNw=: 00:20:36.590 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:N2MyMmE5Mzc2MGYxMTFkMDQ3ZGJhZDNmNGRiZjE3MzQ4N2U1ZDk0NTUzMGM4NGIwp1HX7Q==: --dhchap-ctrl-secret DHHC-1:03:OTk1MjczZTZmOTMwNTk3OGQ5OTE1OWZkNzlmYWYwY2RhNDZjMjExNmE5YThiNTlhMmQ1NjFmNzFlMWExY2M5ZNuJBNw=: 00:20:37.158 05:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:37.158 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:37.158 05:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:37.158 05:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.158 05:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.158 05:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.158 05:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:37.158 05:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:37.158 05:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:37.416 05:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:20:37.416 05:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:37.416 05:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:37.417 05:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:37.417 05:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:37.417 05:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:37.417 05:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:37.417 05:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.417 05:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.417 05:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.417 05:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:37.417 05:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:37.417 05:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:37.675 00:20:37.675 05:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:37.675 05:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:37.675 05:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:37.934 05:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:37.934 05:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:37.934 05:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.934 05:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.934 05:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.934 05:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:37.934 { 00:20:37.934 "cntlid": 83, 00:20:37.934 "qid": 0, 00:20:37.934 "state": "enabled", 00:20:37.934 "thread": "nvmf_tgt_poll_group_000", 00:20:37.934 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:37.934 "listen_address": { 00:20:37.934 "trtype": "TCP", 00:20:37.934 "adrfam": "IPv4", 00:20:37.934 "traddr": "10.0.0.2", 00:20:37.934 
"trsvcid": "4420" 00:20:37.934 }, 00:20:37.934 "peer_address": { 00:20:37.934 "trtype": "TCP", 00:20:37.934 "adrfam": "IPv4", 00:20:37.934 "traddr": "10.0.0.1", 00:20:37.934 "trsvcid": "59686" 00:20:37.934 }, 00:20:37.934 "auth": { 00:20:37.934 "state": "completed", 00:20:37.934 "digest": "sha384", 00:20:37.934 "dhgroup": "ffdhe6144" 00:20:37.934 } 00:20:37.934 } 00:20:37.934 ]' 00:20:37.934 05:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:37.934 05:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:37.934 05:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:38.193 05:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:38.193 05:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:38.193 05:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:38.193 05:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:38.193 05:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:38.193 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTc0YWM3MjQxYWQ5ODgwZGJkZWY0N2I2OGZkMzE0OGKT21qS: --dhchap-ctrl-secret DHHC-1:02:Nzk0Zjc2MDc3NDIyNTA3OWUxNzI5ZDNlZTVlMDgzNTdhMmYzMjExYzg3NmE2YmQw3CciCg==: 00:20:38.193 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OTc0YWM3MjQxYWQ5ODgwZGJkZWY0N2I2OGZkMzE0OGKT21qS: --dhchap-ctrl-secret DHHC-1:02:Nzk0Zjc2MDc3NDIyNTA3OWUxNzI5ZDNlZTVlMDgzNTdhMmYzMjExYzg3NmE2YmQw3CciCg==: 00:20:38.761 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:38.761 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:38.761 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:38.761 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.761 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.761 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.761 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:38.761 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:38.761 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:39.020 
05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:20:39.020 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:39.020 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:39.020 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:39.020 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:39.020 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:39.020 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:39.020 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.020 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.020 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.020 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:39.020 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:39.020 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:39.279 00:20:39.538 05:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:39.538 05:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:39.538 05:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:39.538 05:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:39.538 05:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:39.538 05:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.538 05:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.538 05:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.538 05:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:39.538 { 00:20:39.538 "cntlid": 85, 00:20:39.538 "qid": 0, 00:20:39.538 "state": "enabled", 00:20:39.538 "thread": "nvmf_tgt_poll_group_000", 00:20:39.538 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:39.538 "listen_address": { 00:20:39.538 "trtype": "TCP", 00:20:39.538 "adrfam": "IPv4", 00:20:39.538 "traddr": "10.0.0.2", 00:20:39.538 "trsvcid": "4420" 00:20:39.538 }, 00:20:39.538 "peer_address": { 00:20:39.538 "trtype": "TCP", 00:20:39.538 "adrfam": "IPv4", 00:20:39.538 "traddr": "10.0.0.1", 00:20:39.538 "trsvcid": "59712" 00:20:39.538 }, 00:20:39.538 "auth": { 00:20:39.538 "state": "completed", 00:20:39.538 "digest": "sha384", 00:20:39.538 "dhgroup": "ffdhe6144" 00:20:39.538 } 00:20:39.538 } 00:20:39.538 ]' 00:20:39.538 05:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:39.797 05:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:39.797 05:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:39.797 05:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:39.797 05:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:39.797 05:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:39.797 05:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:39.797 05:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:40.056 05:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzMxMTU3M2Q4ZDdhNWZjNmMyMjI3YjA5MDljMmNhNGU2OTY3ODQzNDEwZGQwNWM2zlgrgA==: --dhchap-ctrl-secret DHHC-1:01:YTM2NjMxNDU0YTBlNmZmZjE5OGRhZDcyZDI5MTk3ZGQ9linJ: 00:20:40.056 05:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MzMxMTU3M2Q4ZDdhNWZjNmMyMjI3YjA5MDljMmNhNGU2OTY3ODQzNDEwZGQwNWM2zlgrgA==: --dhchap-ctrl-secret DHHC-1:01:YTM2NjMxNDU0YTBlNmZmZjE5OGRhZDcyZDI5MTk3ZGQ9linJ: 00:20:40.623 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:40.623 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:40.623 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:40.623 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.623 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.623 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.623 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:40.623 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:40.623 05:36:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:40.623 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:20:40.623 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:40.623 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:40.623 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:40.623 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:40.623 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:40.623 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:40.623 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.623 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.882 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.882 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:40.882 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:40.882 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:41.141 00:20:41.141 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:41.141 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:41.141 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:41.400 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:41.400 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:41.400 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.400 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.400 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.400 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:41.400 { 00:20:41.400 "cntlid": 87, 
00:20:41.400 "qid": 0, 00:20:41.400 "state": "enabled", 00:20:41.400 "thread": "nvmf_tgt_poll_group_000", 00:20:41.400 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:41.400 "listen_address": { 00:20:41.400 "trtype": "TCP", 00:20:41.400 "adrfam": "IPv4", 00:20:41.400 "traddr": "10.0.0.2", 00:20:41.400 "trsvcid": "4420" 00:20:41.400 }, 00:20:41.400 "peer_address": { 00:20:41.400 "trtype": "TCP", 00:20:41.400 "adrfam": "IPv4", 00:20:41.400 "traddr": "10.0.0.1", 00:20:41.400 "trsvcid": "59734" 00:20:41.400 }, 00:20:41.400 "auth": { 00:20:41.400 "state": "completed", 00:20:41.400 "digest": "sha384", 00:20:41.400 "dhgroup": "ffdhe6144" 00:20:41.400 } 00:20:41.400 } 00:20:41.400 ]' 00:20:41.400 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:41.400 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:41.400 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:41.400 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:41.400 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:41.400 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:41.400 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:41.400 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:41.659 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzE0NGI3NDJhMDJjOTcwYzZhYzExNThlOTUwMmEyOGY4NDI1NzFhYzc5ZWViM2IzMzRlNWRiZTJjYTY1ZmE4NBJ0s7I=: 00:20:41.659 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YzE0NGI3NDJhMDJjOTcwYzZhYzExNThlOTUwMmEyOGY4NDI1NzFhYzc5ZWViM2IzMzRlNWRiZTJjYTY1ZmE4NBJ0s7I=: 00:20:42.226 05:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:42.226 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:42.226 05:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:42.226 05:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.226 05:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.226 05:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.226 05:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:42.226 05:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:42.226 05:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:42.226 05:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:42.484 05:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:20:42.484 05:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:42.484 05:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:42.484 05:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:42.484 05:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:42.484 05:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:42.484 05:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:42.484 05:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.484 05:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.484 05:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.484 05:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:42.484 05:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:42.484 05:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:43.051 00:20:43.051 05:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:43.051 05:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:43.051 05:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:43.051 05:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:43.051 05:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:43.051 05:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.051 05:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.051 05:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
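bdev_connect (target/auth.sh@71, expanding at @60) is the SPDK-native counterpart of the nvme-cli connect above: bdev_nvme_attach_controller performs the fabrics connect and the DH-HMAC-CHAP handshake in a single RPC, referring to keys by the names (key0, ckey0) presumably registered in the host app's keyring earlier in the run rather than by raw secret. Stripped of the trace prefixes:

    hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0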
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.051 05:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:43.051 { 00:20:43.051 "cntlid": 89, 00:20:43.051 "qid": 0, 00:20:43.051 "state": "enabled", 00:20:43.051 "thread": "nvmf_tgt_poll_group_000", 00:20:43.051 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:43.051 "listen_address": { 00:20:43.051 "trtype": "TCP", 00:20:43.051 "adrfam": "IPv4", 00:20:43.051 "traddr": "10.0.0.2", 00:20:43.051 "trsvcid": "4420" 00:20:43.051 }, 00:20:43.051 "peer_address": { 00:20:43.051 "trtype": "TCP", 00:20:43.051 "adrfam": "IPv4", 00:20:43.051 "traddr": "10.0.0.1", 00:20:43.051 "trsvcid": "59752" 00:20:43.051 }, 00:20:43.051 "auth": { 00:20:43.051 "state": "completed", 00:20:43.051 "digest": "sha384", 00:20:43.051 "dhgroup": "ffdhe8192" 00:20:43.051 } 00:20:43.051 } 00:20:43.051 ]' 00:20:43.051 05:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:43.310 05:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:43.310 05:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:43.310 05:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:43.310 05:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:43.310 05:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:43.310 05:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:43.310 05:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:43.569 05:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:N2MyMmE5Mzc2MGYxMTFkMDQ3ZGJhZDNmNGRiZjE3MzQ4N2U1ZDk0NTUzMGM4NGIwp1HX7Q==: --dhchap-ctrl-secret DHHC-1:03:OTk1MjczZTZmOTMwNTk3OGQ5OTE1OWZkNzlmYWYwY2RhNDZjMjExNmE5YThiNTlhMmQ1NjFmNzFlMWExY2M5ZNuJBNw=: 00:20:43.569 05:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:N2MyMmE5Mzc2MGYxMTFkMDQ3ZGJhZDNmNGRiZjE3MzQ4N2U1ZDk0NTUzMGM4NGIwp1HX7Q==: --dhchap-ctrl-secret DHHC-1:03:OTk1MjczZTZmOTMwNTk3OGQ5OTE1OWZkNzlmYWYwY2RhNDZjMjExNmE5YThiNTlhMmQ1NjFmNzFlMWExY2M5ZNuJBNw=: 00:20:44.136 05:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:44.136 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:44.136 05:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:44.136 05:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.136 05:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.136 05:36:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.136 05:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:44.136 05:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:44.136 05:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:44.136 05:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:20:44.136 05:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:44.136 05:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:44.136 05:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:44.136 05:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:44.136 05:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:44.136 05:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:44.136 05:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.136 05:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.136 05:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.136 05:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:44.136 05:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:44.136 05:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:44.704 00:20:44.704 05:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:44.704 05:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:44.704 05:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:44.963 05:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:44.963 05:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:20:44.963 05:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.963 05:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.963 05:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.963 05:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:44.963 { 00:20:44.963 "cntlid": 91, 00:20:44.963 "qid": 0, 00:20:44.963 "state": "enabled", 00:20:44.963 "thread": "nvmf_tgt_poll_group_000", 00:20:44.963 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:44.963 "listen_address": { 00:20:44.963 "trtype": "TCP", 00:20:44.963 "adrfam": "IPv4", 00:20:44.963 "traddr": "10.0.0.2", 00:20:44.963 "trsvcid": "4420" 00:20:44.963 }, 00:20:44.963 "peer_address": { 00:20:44.963 "trtype": "TCP", 00:20:44.963 "adrfam": "IPv4", 00:20:44.963 "traddr": "10.0.0.1", 00:20:44.963 "trsvcid": "59760" 00:20:44.963 }, 00:20:44.963 "auth": { 00:20:44.963 "state": "completed", 00:20:44.963 "digest": "sha384", 00:20:44.963 "dhgroup": "ffdhe8192" 00:20:44.963 } 00:20:44.963 } 00:20:44.963 ]' 00:20:44.963 05:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:44.963 05:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:44.963 05:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:44.963 05:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:44.963 05:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:44.963 05:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:44.963 05:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:44.963 05:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:45.223 05:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTc0YWM3MjQxYWQ5ODgwZGJkZWY0N2I2OGZkMzE0OGKT21qS: --dhchap-ctrl-secret DHHC-1:02:Nzk0Zjc2MDc3NDIyNTA3OWUxNzI5ZDNlZTVlMDgzNTdhMmYzMjExYzg3NmE2YmQw3CciCg==: 00:20:45.223 05:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OTc0YWM3MjQxYWQ5ODgwZGJkZWY0N2I2OGZkMzE0OGKT21qS: --dhchap-ctrl-secret DHHC-1:02:Nzk0Zjc2MDc3NDIyNTA3OWUxNzI5ZDNlZTVlMDgzNTdhMmYzMjExYzg3NmE2YmQw3CciCg==: 00:20:45.788 05:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:45.788 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:45.788 05:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:45.788 05:36:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.788 05:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.788 05:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.788 05:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:45.789 05:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:45.789 05:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:46.047 05:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:20:46.047 05:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:46.047 05:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:46.047 05:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:46.047 05:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:46.047 05:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:46.047 05:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:46.047 05:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.047 05:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.048 05:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.048 05:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:46.048 05:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:46.048 05:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:46.627 00:20:46.627 05:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:46.627 05:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:46.627 05:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:46.627 05:36:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:46.627 05:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:46.627 05:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.627 05:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.886 05:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.886 05:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:46.886 { 00:20:46.886 "cntlid": 93, 00:20:46.886 "qid": 0, 00:20:46.886 "state": "enabled", 00:20:46.886 "thread": "nvmf_tgt_poll_group_000", 00:20:46.886 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:46.886 "listen_address": { 00:20:46.886 "trtype": "TCP", 00:20:46.886 "adrfam": "IPv4", 00:20:46.886 "traddr": "10.0.0.2", 00:20:46.886 "trsvcid": "4420" 00:20:46.886 }, 00:20:46.886 "peer_address": { 00:20:46.886 "trtype": "TCP", 00:20:46.886 "adrfam": "IPv4", 00:20:46.886 "traddr": "10.0.0.1", 00:20:46.886 "trsvcid": "59772" 00:20:46.886 }, 00:20:46.886 "auth": { 00:20:46.886 "state": "completed", 00:20:46.887 "digest": "sha384", 00:20:46.887 "dhgroup": "ffdhe8192" 00:20:46.887 } 00:20:46.887 } 00:20:46.887 ]' 00:20:46.887 05:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:46.887 05:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:46.887 05:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:46.887 05:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:46.887 05:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:46.887 05:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:46.887 05:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:46.887 05:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:47.146 05:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzMxMTU3M2Q4ZDdhNWZjNmMyMjI3YjA5MDljMmNhNGU2OTY3ODQzNDEwZGQwNWM2zlgrgA==: --dhchap-ctrl-secret DHHC-1:01:YTM2NjMxNDU0YTBlNmZmZjE5OGRhZDcyZDI5MTk3ZGQ9linJ: 00:20:47.146 05:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MzMxMTU3M2Q4ZDdhNWZjNmMyMjI3YjA5MDljMmNhNGU2OTY3ODQzNDEwZGQwNWM2zlgrgA==: --dhchap-ctrl-secret DHHC-1:01:YTM2NjMxNDU0YTBlNmZmZjE5OGRhZDcyZDI5MTk3ZGQ9linJ: 00:20:47.714 05:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:47.714 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:47.714 05:36:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:47.714 05:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.714 05:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.714 05:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.714 05:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:47.714 05:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:47.714 05:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:47.974 05:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:20:47.974 05:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:47.974 05:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:47.974 05:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:47.974 05:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:47.974 05:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:47.974 05:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:47.974 05:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.974 05:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.974 05:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.974 05:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:47.974 05:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:47.974 05:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:48.233 00:20:48.492 05:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:48.492 05:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:48.492 05:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:48.492 05:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:48.492 05:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:48.492 05:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.492 05:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.492 05:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.492 05:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:48.492 { 00:20:48.492 "cntlid": 95, 00:20:48.492 "qid": 0, 00:20:48.492 "state": "enabled", 00:20:48.492 "thread": "nvmf_tgt_poll_group_000", 00:20:48.492 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:48.492 "listen_address": { 00:20:48.492 "trtype": "TCP", 00:20:48.492 "adrfam": "IPv4", 00:20:48.492 "traddr": "10.0.0.2", 00:20:48.492 "trsvcid": "4420" 00:20:48.492 }, 00:20:48.492 "peer_address": { 00:20:48.492 "trtype": "TCP", 00:20:48.492 "adrfam": "IPv4", 00:20:48.492 "traddr": "10.0.0.1", 00:20:48.492 "trsvcid": "56840" 00:20:48.492 }, 00:20:48.492 "auth": { 00:20:48.492 "state": "completed", 00:20:48.492 "digest": "sha384", 00:20:48.492 "dhgroup": "ffdhe8192" 00:20:48.492 } 00:20:48.492 } 00:20:48.492 ]' 00:20:48.492 05:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:48.492 05:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:48.492 05:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:48.751 05:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:48.752 05:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:48.752 05:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:48.752 05:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:48.752 05:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:49.011 05:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzE0NGI3NDJhMDJjOTcwYzZhYzExNThlOTUwMmEyOGY4NDI1NzFhYzc5ZWViM2IzMzRlNWRiZTJjYTY1ZmE4NBJ0s7I=: 00:20:49.011 05:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YzE0NGI3NDJhMDJjOTcwYzZhYzExNThlOTUwMmEyOGY4NDI1NzFhYzc5ZWViM2IzMzRlNWRiZTJjYTY1ZmE4NBJ0s7I=: 00:20:49.579 05:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:49.579 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:49.579 05:36:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:49.579 05:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.579 05:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.579 05:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.579 05:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:20:49.579 05:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:49.579 05:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:49.579 05:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:49.579 05:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:49.579 05:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:20:49.579 05:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:49.579 05:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:49.579 05:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:49.579 05:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:49.579 05:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:49.579 05:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:49.579 05:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.579 05:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.579 05:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.579 05:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:49.579 05:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:49.579 05:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:49.838 00:20:49.838 
05:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:49.838 05:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:49.838 05:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:50.098 05:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:50.098 05:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:50.098 05:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.098 05:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.098 05:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.098 05:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:50.098 { 00:20:50.098 "cntlid": 97, 00:20:50.098 "qid": 0, 00:20:50.098 "state": "enabled", 00:20:50.098 "thread": "nvmf_tgt_poll_group_000", 00:20:50.098 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:50.098 "listen_address": { 00:20:50.098 "trtype": "TCP", 00:20:50.098 "adrfam": "IPv4", 00:20:50.098 "traddr": "10.0.0.2", 00:20:50.098 "trsvcid": "4420" 00:20:50.098 }, 00:20:50.098 "peer_address": { 00:20:50.098 "trtype": "TCP", 00:20:50.098 "adrfam": "IPv4", 00:20:50.098 "traddr": "10.0.0.1", 00:20:50.098 "trsvcid": "56880" 00:20:50.098 }, 00:20:50.098 "auth": { 00:20:50.098 "state": "completed", 00:20:50.098 "digest": "sha512", 00:20:50.098 "dhgroup": "null" 00:20:50.098 } 00:20:50.098 } 00:20:50.098 ]' 00:20:50.098 05:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:50.098 05:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:50.098 05:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:50.357 05:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:50.357 05:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:50.357 05:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:50.357 05:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:50.357 05:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:50.616 05:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:N2MyMmE5Mzc2MGYxMTFkMDQ3ZGJhZDNmNGRiZjE3MzQ4N2U1ZDk0NTUzMGM4NGIwp1HX7Q==: --dhchap-ctrl-secret DHHC-1:03:OTk1MjczZTZmOTMwNTk3OGQ5OTE1OWZkNzlmYWYwY2RhNDZjMjExNmE5YThiNTlhMmQ1NjFmNzFlMWExY2M5ZNuJBNw=: 00:20:50.616 05:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 
80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:N2MyMmE5Mzc2MGYxMTFkMDQ3ZGJhZDNmNGRiZjE3MzQ4N2U1ZDk0NTUzMGM4NGIwp1HX7Q==: --dhchap-ctrl-secret DHHC-1:03:OTk1MjczZTZmOTMwNTk3OGQ5OTE1OWZkNzlmYWYwY2RhNDZjMjExNmE5YThiNTlhMmQ1NjFmNzFlMWExY2M5ZNuJBNw=: 00:20:51.184 05:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:51.184 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:51.184 05:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:51.184 05:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.184 05:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.184 05:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.184 05:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:51.184 05:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:51.184 05:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:51.184 05:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:20:51.184 05:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:51.184 05:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:51.184 05:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:51.184 05:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:51.184 05:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:51.184 05:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:51.184 05:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.184 05:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.184 05:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.184 05:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:51.185 05:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:51.185 05:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:51.444 00:20:51.444 05:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:51.444 05:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:51.444 05:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:51.703 05:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:51.703 05:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:51.703 05:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.703 05:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.703 05:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.703 05:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:51.703 { 00:20:51.703 "cntlid": 99, 00:20:51.703 "qid": 0, 00:20:51.703 "state": "enabled", 00:20:51.703 "thread": "nvmf_tgt_poll_group_000", 00:20:51.703 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:51.703 "listen_address": { 00:20:51.703 "trtype": "TCP", 00:20:51.703 "adrfam": "IPv4", 00:20:51.703 "traddr": "10.0.0.2", 00:20:51.703 "trsvcid": "4420" 00:20:51.703 }, 00:20:51.703 "peer_address": { 00:20:51.703 "trtype": "TCP", 00:20:51.703 "adrfam": "IPv4", 00:20:51.703 "traddr": "10.0.0.1", 00:20:51.703 "trsvcid": "56912" 00:20:51.703 }, 00:20:51.703 "auth": { 00:20:51.703 "state": "completed", 00:20:51.703 "digest": "sha512", 00:20:51.703 "dhgroup": "null" 00:20:51.703 } 00:20:51.703 } 00:20:51.703 ]' 00:20:51.703 05:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:51.703 05:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:51.703 05:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:51.962 05:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:51.962 05:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:51.962 05:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:51.963 05:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:51.963 05:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:51.963 05:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTc0YWM3MjQxYWQ5ODgwZGJkZWY0N2I2OGZkMzE0OGKT21qS: --dhchap-ctrl-secret DHHC-1:02:Nzk0Zjc2MDc3NDIyNTA3OWUxNzI5ZDNlZTVlMDgzNTdhMmYzMjExYzg3NmE2YmQw3CciCg==: 00:20:51.963 05:36:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OTc0YWM3MjQxYWQ5ODgwZGJkZWY0N2I2OGZkMzE0OGKT21qS: --dhchap-ctrl-secret DHHC-1:02:Nzk0Zjc2MDc3NDIyNTA3OWUxNzI5ZDNlZTVlMDgzNTdhMmYzMjExYzg3NmE2YmQw3CciCg==: 00:20:52.531 05:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:52.531 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:52.531 05:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:52.531 05:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.531 05:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.531 05:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.531 05:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:52.531 05:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:52.531 05:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:52.790 05:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:20:52.790 05:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:52.790 05:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:52.790 05:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:52.790 05:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:52.790 05:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:52.790 05:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:52.790 05:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.790 05:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.790 05:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.790 05:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:52.790 05:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
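Every (digest, dhgroup, key) combination in this sweep exercises the same host-side cycle traced above by target/auth.sh: restrict the host's DH-HMAC-CHAP options, register the host NQN on the subsystem with that iteration's key pair, then attach a controller through the host rpc.py instance. A minimal bash paraphrase of that cycle, reconstructed from the trace ($rpc, $hostnqn, and $keyid are illustrative stand-ins; the subsystem NQN, addresses, and flags are as logged):

  # Host side: advertise only the digest/dhgroup pair under test.
  $rpc -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha512 --dhchap-dhgroups null
  # Target side: authorize this host NQN with the iteration's key pair
  # (key3 is registered without a controller key, matching the trace).
  $rpc nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
      --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
  # Host side: authenticate during connect and expose the controller as nvme0.
  $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 \
      -b nvme0 --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"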
00:20:52.790 05:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:53.050 00:20:53.050 05:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:53.050 05:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:53.050 05:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:53.309 05:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:53.309 05:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:53.309 05:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.309 05:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.309 05:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.309 05:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:53.309 { 00:20:53.309 "cntlid": 101, 00:20:53.309 "qid": 0, 00:20:53.309 "state": "enabled", 00:20:53.309 "thread": "nvmf_tgt_poll_group_000", 00:20:53.309 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:53.309 "listen_address": { 00:20:53.309 "trtype": "TCP", 00:20:53.309 "adrfam": "IPv4", 00:20:53.309 "traddr": "10.0.0.2", 00:20:53.309 "trsvcid": "4420" 00:20:53.309 }, 00:20:53.309 "peer_address": { 00:20:53.309 "trtype": "TCP", 00:20:53.309 "adrfam": "IPv4", 00:20:53.309 "traddr": "10.0.0.1", 00:20:53.309 "trsvcid": "56948" 00:20:53.309 }, 00:20:53.309 "auth": { 00:20:53.309 "state": "completed", 00:20:53.309 "digest": "sha512", 00:20:53.309 "dhgroup": "null" 00:20:53.309 } 00:20:53.309 } 00:20:53.309 ]' 00:20:53.309 05:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:53.309 05:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:53.309 05:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:53.309 05:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:53.309 05:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:53.568 05:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:53.568 05:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:53.568 05:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:53.568 05:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:MzMxMTU3M2Q4ZDdhNWZjNmMyMjI3YjA5MDljMmNhNGU2OTY3ODQzNDEwZGQwNWM2zlgrgA==: --dhchap-ctrl-secret DHHC-1:01:YTM2NjMxNDU0YTBlNmZmZjE5OGRhZDcyZDI5MTk3ZGQ9linJ: 00:20:53.568 05:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MzMxMTU3M2Q4ZDdhNWZjNmMyMjI3YjA5MDljMmNhNGU2OTY3ODQzNDEwZGQwNWM2zlgrgA==: --dhchap-ctrl-secret DHHC-1:01:YTM2NjMxNDU0YTBlNmZmZjE5OGRhZDcyZDI5MTk3ZGQ9linJ: 00:20:54.137 05:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:54.137 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:54.137 05:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:54.137 05:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.137 05:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.137 05:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.137 05:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:54.137 05:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:54.137 05:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:54.396 05:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:20:54.396 05:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:54.396 05:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:54.396 05:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:54.396 05:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:54.396 05:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:54.396 05:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:54.396 05:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.396 05:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.396 05:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.396 05:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:54.396 05:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:54.396 05:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:54.655 00:20:54.655 05:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:54.655 05:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:54.655 05:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:54.914 05:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:54.914 05:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:54.914 05:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.914 05:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.914 05:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.914 05:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:54.914 { 00:20:54.914 "cntlid": 103, 00:20:54.914 "qid": 0, 00:20:54.914 "state": "enabled", 00:20:54.914 "thread": "nvmf_tgt_poll_group_000", 00:20:54.914 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:54.914 "listen_address": { 00:20:54.914 "trtype": "TCP", 00:20:54.914 "adrfam": "IPv4", 00:20:54.914 "traddr": "10.0.0.2", 00:20:54.914 "trsvcid": "4420" 00:20:54.914 }, 00:20:54.914 "peer_address": { 00:20:54.914 "trtype": "TCP", 00:20:54.914 "adrfam": "IPv4", 00:20:54.914 "traddr": "10.0.0.1", 00:20:54.914 "trsvcid": "56974" 00:20:54.914 }, 00:20:54.914 "auth": { 00:20:54.914 "state": "completed", 00:20:54.914 "digest": "sha512", 00:20:54.914 "dhgroup": "null" 00:20:54.914 } 00:20:54.914 } 00:20:54.914 ]' 00:20:54.914 05:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:54.914 05:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:54.914 05:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:54.914 05:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:54.914 05:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:54.914 05:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:54.914 05:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:54.914 05:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:55.173 05:36:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzE0NGI3NDJhMDJjOTcwYzZhYzExNThlOTUwMmEyOGY4NDI1NzFhYzc5ZWViM2IzMzRlNWRiZTJjYTY1ZmE4NBJ0s7I=: 00:20:55.173 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YzE0NGI3NDJhMDJjOTcwYzZhYzExNThlOTUwMmEyOGY4NDI1NzFhYzc5ZWViM2IzMzRlNWRiZTJjYTY1ZmE4NBJ0s7I=: 00:20:55.740 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:55.740 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:55.740 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:55.740 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.740 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.740 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.740 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:55.740 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:55.740 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:55.740 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:56.000 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:20:56.000 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:56.000 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:56.000 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:56.000 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:56.000 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:56.000 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:56.000 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.000 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.000 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.000 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
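Success is asserted against the target's view of the connection, not just the attach return code: the test queries nvmf_subsystem_get_qpairs for cnode0 and checks the negotiated parameters with jq, as in the [[ sha512 == ... ]] comparisons traced above. A sketch of that verification step, using the field names visible in the qpair JSON printed in this log:

  qpairs=$($rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  # The first qpair must report the digest/dhgroup under test and a
  # completed authentication state.
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]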
00:20:56.000 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:56.000 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:56.259 00:20:56.259 05:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:56.259 05:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:56.259 05:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:56.518 05:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:56.518 05:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:56.518 05:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.518 05:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.518 05:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.518 05:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:56.518 { 00:20:56.518 "cntlid": 105, 00:20:56.518 "qid": 0, 00:20:56.518 "state": "enabled", 00:20:56.518 "thread": "nvmf_tgt_poll_group_000", 00:20:56.518 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:56.518 "listen_address": { 00:20:56.518 "trtype": "TCP", 00:20:56.518 "adrfam": "IPv4", 00:20:56.518 "traddr": "10.0.0.2", 00:20:56.518 "trsvcid": "4420" 00:20:56.518 }, 00:20:56.518 "peer_address": { 00:20:56.518 "trtype": "TCP", 00:20:56.518 "adrfam": "IPv4", 00:20:56.518 "traddr": "10.0.0.1", 00:20:56.518 "trsvcid": "57014" 00:20:56.518 }, 00:20:56.518 "auth": { 00:20:56.518 "state": "completed", 00:20:56.518 "digest": "sha512", 00:20:56.518 "dhgroup": "ffdhe2048" 00:20:56.518 } 00:20:56.518 } 00:20:56.518 ]' 00:20:56.518 05:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:56.518 05:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:56.518 05:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:56.518 05:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:56.518 05:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:56.518 05:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:56.518 05:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:56.518 05:36:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:56.778 05:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:N2MyMmE5Mzc2MGYxMTFkMDQ3ZGJhZDNmNGRiZjE3MzQ4N2U1ZDk0NTUzMGM4NGIwp1HX7Q==: --dhchap-ctrl-secret DHHC-1:03:OTk1MjczZTZmOTMwNTk3OGQ5OTE1OWZkNzlmYWYwY2RhNDZjMjExNmE5YThiNTlhMmQ1NjFmNzFlMWExY2M5ZNuJBNw=: 00:20:56.778 05:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:N2MyMmE5Mzc2MGYxMTFkMDQ3ZGJhZDNmNGRiZjE3MzQ4N2U1ZDk0NTUzMGM4NGIwp1HX7Q==: --dhchap-ctrl-secret DHHC-1:03:OTk1MjczZTZmOTMwNTk3OGQ5OTE1OWZkNzlmYWYwY2RhNDZjMjExNmE5YThiNTlhMmQ1NjFmNzFlMWExY2M5ZNuJBNw=: 00:20:57.346 05:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:57.346 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:57.346 05:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:57.346 05:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.346 05:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.346 05:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.346 05:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:57.346 05:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:57.346 05:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:57.605 05:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:20:57.605 05:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:57.605 05:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:57.605 05:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:57.605 05:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:57.605 05:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:57.605 05:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:57.605 05:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.605 05:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:57.605 05:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.605 05:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:57.605 05:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:57.605 05:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:57.865 00:20:57.865 05:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:57.865 05:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:57.865 05:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:57.865 05:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:57.865 05:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:57.865 05:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.865 05:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.124 05:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.124 05:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:58.124 { 00:20:58.124 "cntlid": 107, 00:20:58.124 "qid": 0, 00:20:58.124 "state": "enabled", 00:20:58.124 "thread": "nvmf_tgt_poll_group_000", 00:20:58.124 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:58.124 "listen_address": { 00:20:58.124 "trtype": "TCP", 00:20:58.124 "adrfam": "IPv4", 00:20:58.124 "traddr": "10.0.0.2", 00:20:58.124 "trsvcid": "4420" 00:20:58.124 }, 00:20:58.124 "peer_address": { 00:20:58.124 "trtype": "TCP", 00:20:58.124 "adrfam": "IPv4", 00:20:58.124 "traddr": "10.0.0.1", 00:20:58.124 "trsvcid": "46416" 00:20:58.124 }, 00:20:58.124 "auth": { 00:20:58.124 "state": "completed", 00:20:58.124 "digest": "sha512", 00:20:58.124 "dhgroup": "ffdhe2048" 00:20:58.124 } 00:20:58.124 } 00:20:58.124 ]' 00:20:58.124 05:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:58.124 05:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:58.124 05:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:58.124 05:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:58.124 05:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r 
'.[0].auth.state' 00:20:58.124 05:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:58.124 05:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:58.125 05:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:58.384 05:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTc0YWM3MjQxYWQ5ODgwZGJkZWY0N2I2OGZkMzE0OGKT21qS: --dhchap-ctrl-secret DHHC-1:02:Nzk0Zjc2MDc3NDIyNTA3OWUxNzI5ZDNlZTVlMDgzNTdhMmYzMjExYzg3NmE2YmQw3CciCg==: 00:20:58.384 05:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OTc0YWM3MjQxYWQ5ODgwZGJkZWY0N2I2OGZkMzE0OGKT21qS: --dhchap-ctrl-secret DHHC-1:02:Nzk0Zjc2MDc3NDIyNTA3OWUxNzI5ZDNlZTVlMDgzNTdhMmYzMjExYzg3NmE2YmQw3CciCg==: 00:20:58.952 05:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:58.952 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:58.952 05:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:58.952 05:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.952 05:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.952 05:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.952 05:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:58.952 05:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:58.952 05:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:59.211 05:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:20:59.211 05:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:59.211 05:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:59.211 05:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:59.211 05:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:59.211 05:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:59.211 05:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
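
The trace repeats one DH-HMAC-CHAP iteration per key index: the host-side bdev_nvme options are pinned to a single digest/dhgroup pair, the host NQN is re-added to the subsystem with the key under test, and a controller is attached so the handshake actually runs. A minimal sketch of one iteration, using the RPC script path, sockets, addresses, and key names (here key2/ckey2 with sha512/ffdhe2048, as in the entries just above); the $rpc and $hostnqn shorthands are ours, the keys themselves were registered earlier in the run, and the target-side calls assume the default SPDK RPC socket:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562

    # Pin the host to one digest/dhgroup combination for this iteration.
    $rpc -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048

    # Register the host on the subsystem with the key pair under test.
    $rpc nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2

    # Attach a controller so the DH-HMAC-CHAP handshake executes.
    $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 \
        -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
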
00:20:59.211 05:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.211 05:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.211 05:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.211 05:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:59.211 05:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:59.211 05:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:59.470 00:20:59.470 05:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:59.470 05:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:59.470 05:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:59.470 05:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:59.470 05:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:59.471 05:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.471 05:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.471 05:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.471 05:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:59.471 { 00:20:59.471 "cntlid": 109, 00:20:59.471 "qid": 0, 00:20:59.471 "state": "enabled", 00:20:59.471 "thread": "nvmf_tgt_poll_group_000", 00:20:59.471 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:59.471 "listen_address": { 00:20:59.471 "trtype": "TCP", 00:20:59.471 "adrfam": "IPv4", 00:20:59.471 "traddr": "10.0.0.2", 00:20:59.471 "trsvcid": "4420" 00:20:59.471 }, 00:20:59.471 "peer_address": { 00:20:59.471 "trtype": "TCP", 00:20:59.471 "adrfam": "IPv4", 00:20:59.471 "traddr": "10.0.0.1", 00:20:59.471 "trsvcid": "46456" 00:20:59.471 }, 00:20:59.471 "auth": { 00:20:59.471 "state": "completed", 00:20:59.471 "digest": "sha512", 00:20:59.471 "dhgroup": "ffdhe2048" 00:20:59.471 } 00:20:59.471 } 00:20:59.471 ]' 00:20:59.471 05:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:59.729 05:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:59.729 05:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:59.729 05:36:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:59.729 05:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:59.730 05:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:59.730 05:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:59.730 05:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:59.989 05:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzMxMTU3M2Q4ZDdhNWZjNmMyMjI3YjA5MDljMmNhNGU2OTY3ODQzNDEwZGQwNWM2zlgrgA==: --dhchap-ctrl-secret DHHC-1:01:YTM2NjMxNDU0YTBlNmZmZjE5OGRhZDcyZDI5MTk3ZGQ9linJ: 00:20:59.989 05:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MzMxMTU3M2Q4ZDdhNWZjNmMyMjI3YjA5MDljMmNhNGU2OTY3ODQzNDEwZGQwNWM2zlgrgA==: --dhchap-ctrl-secret DHHC-1:01:YTM2NjMxNDU0YTBlNmZmZjE5OGRhZDcyZDI5MTk3ZGQ9linJ: 00:21:00.556 05:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:00.556 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:00.556 05:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:00.556 05:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.556 05:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.556 05:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.556 05:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:00.556 05:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:00.556 05:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:00.556 05:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:21:00.556 05:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:00.556 05:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:00.556 05:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:00.556 05:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:00.556 05:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:00.556 05:37:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:21:00.556 05:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.556 05:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.556 05:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.556 05:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:00.556 05:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:00.556 05:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:00.815 00:21:00.815 05:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:00.815 05:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:00.815 05:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:01.074 05:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:01.074 05:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:01.074 05:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.074 05:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.074 05:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.074 05:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:01.074 { 00:21:01.074 "cntlid": 111, 00:21:01.074 "qid": 0, 00:21:01.074 "state": "enabled", 00:21:01.074 "thread": "nvmf_tgt_poll_group_000", 00:21:01.074 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:01.074 "listen_address": { 00:21:01.074 "trtype": "TCP", 00:21:01.074 "adrfam": "IPv4", 00:21:01.074 "traddr": "10.0.0.2", 00:21:01.074 "trsvcid": "4420" 00:21:01.074 }, 00:21:01.074 "peer_address": { 00:21:01.074 "trtype": "TCP", 00:21:01.074 "adrfam": "IPv4", 00:21:01.074 "traddr": "10.0.0.1", 00:21:01.074 "trsvcid": "46488" 00:21:01.074 }, 00:21:01.074 "auth": { 00:21:01.074 "state": "completed", 00:21:01.074 "digest": "sha512", 00:21:01.074 "dhgroup": "ffdhe2048" 00:21:01.074 } 00:21:01.074 } 00:21:01.074 ]' 00:21:01.074 05:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:01.074 05:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:01.074 
05:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:01.334 05:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:01.334 05:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:01.334 05:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:01.334 05:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:01.334 05:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:01.334 05:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzE0NGI3NDJhMDJjOTcwYzZhYzExNThlOTUwMmEyOGY4NDI1NzFhYzc5ZWViM2IzMzRlNWRiZTJjYTY1ZmE4NBJ0s7I=: 00:21:01.334 05:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YzE0NGI3NDJhMDJjOTcwYzZhYzExNThlOTUwMmEyOGY4NDI1NzFhYzc5ZWViM2IzMzRlNWRiZTJjYTY1ZmE4NBJ0s7I=: 00:21:01.901 05:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:01.901 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:01.901 05:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:01.901 05:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.901 05:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.161 05:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.161 05:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:02.161 05:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:02.161 05:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:02.161 05:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:02.161 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:21:02.161 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:02.161 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:02.161 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:02.161 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:02.161 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:02.161 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:02.161 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.161 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.161 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.161 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:02.161 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:02.161 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:02.419 00:21:02.419 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:02.419 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:02.419 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:02.677 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:02.677 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:02.677 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.677 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.678 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.678 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:02.678 { 00:21:02.678 "cntlid": 113, 00:21:02.678 "qid": 0, 00:21:02.678 "state": "enabled", 00:21:02.678 "thread": "nvmf_tgt_poll_group_000", 00:21:02.678 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:02.678 "listen_address": { 00:21:02.678 "trtype": "TCP", 00:21:02.678 "adrfam": "IPv4", 00:21:02.678 "traddr": "10.0.0.2", 00:21:02.678 "trsvcid": "4420" 00:21:02.678 }, 00:21:02.678 "peer_address": { 00:21:02.678 "trtype": "TCP", 00:21:02.678 "adrfam": "IPv4", 00:21:02.678 "traddr": "10.0.0.1", 00:21:02.678 "trsvcid": "46506" 00:21:02.678 }, 00:21:02.678 "auth": { 00:21:02.678 "state": "completed", 00:21:02.678 "digest": "sha512", 00:21:02.678 "dhgroup": "ffdhe3072" 00:21:02.678 } 00:21:02.678 } 00:21:02.678 ]' 00:21:02.678 05:37:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:02.678 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:02.678 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:02.678 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:02.678 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:02.936 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:02.936 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:02.936 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:02.936 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:N2MyMmE5Mzc2MGYxMTFkMDQ3ZGJhZDNmNGRiZjE3MzQ4N2U1ZDk0NTUzMGM4NGIwp1HX7Q==: --dhchap-ctrl-secret DHHC-1:03:OTk1MjczZTZmOTMwNTk3OGQ5OTE1OWZkNzlmYWYwY2RhNDZjMjExNmE5YThiNTlhMmQ1NjFmNzFlMWExY2M5ZNuJBNw=: 00:21:02.936 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:N2MyMmE5Mzc2MGYxMTFkMDQ3ZGJhZDNmNGRiZjE3MzQ4N2U1ZDk0NTUzMGM4NGIwp1HX7Q==: --dhchap-ctrl-secret DHHC-1:03:OTk1MjczZTZmOTMwNTk3OGQ5OTE1OWZkNzlmYWYwY2RhNDZjMjExNmE5YThiNTlhMmQ1NjFmNzFlMWExY2M5ZNuJBNw=: 00:21:03.503 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:03.503 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:03.503 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:03.503 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.503 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.503 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.503 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:03.503 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:03.503 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:03.761 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:21:03.761 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:03.761 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:21:03.761 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:03.761 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:03.761 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:03.761 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:03.761 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.761 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.761 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.762 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:03.762 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:03.762 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:04.030 00:21:04.030 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:04.030 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:04.030 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:04.288 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:04.288 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:04.288 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.288 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.288 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.288 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:04.288 { 00:21:04.288 "cntlid": 115, 00:21:04.288 "qid": 0, 00:21:04.288 "state": "enabled", 00:21:04.288 "thread": "nvmf_tgt_poll_group_000", 00:21:04.288 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:04.288 "listen_address": { 00:21:04.288 "trtype": "TCP", 00:21:04.288 "adrfam": "IPv4", 00:21:04.288 "traddr": "10.0.0.2", 00:21:04.288 "trsvcid": "4420" 00:21:04.288 }, 00:21:04.288 "peer_address": { 00:21:04.288 "trtype": "TCP", 00:21:04.288 "adrfam": "IPv4", 
00:21:04.288 "traddr": "10.0.0.1", 00:21:04.288 "trsvcid": "46536" 00:21:04.288 }, 00:21:04.288 "auth": { 00:21:04.288 "state": "completed", 00:21:04.288 "digest": "sha512", 00:21:04.288 "dhgroup": "ffdhe3072" 00:21:04.288 } 00:21:04.288 } 00:21:04.288 ]' 00:21:04.288 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:04.289 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:04.289 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:04.289 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:04.289 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:04.289 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:04.289 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:04.289 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:04.547 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTc0YWM3MjQxYWQ5ODgwZGJkZWY0N2I2OGZkMzE0OGKT21qS: --dhchap-ctrl-secret DHHC-1:02:Nzk0Zjc2MDc3NDIyNTA3OWUxNzI5ZDNlZTVlMDgzNTdhMmYzMjExYzg3NmE2YmQw3CciCg==: 00:21:04.547 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OTc0YWM3MjQxYWQ5ODgwZGJkZWY0N2I2OGZkMzE0OGKT21qS: --dhchap-ctrl-secret DHHC-1:02:Nzk0Zjc2MDc3NDIyNTA3OWUxNzI5ZDNlZTVlMDgzNTdhMmYzMjExYzg3NmE2YmQw3CciCg==: 00:21:05.114 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:05.114 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:05.114 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:05.114 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.114 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.114 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.114 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:05.114 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:05.114 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:05.372 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 
00:21:05.372 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:05.372 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:05.372 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:05.372 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:05.372 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:05.372 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:05.372 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.372 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.372 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.372 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:05.372 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:05.372 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:05.630 00:21:05.630 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:05.630 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:05.630 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:05.889 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:05.889 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:05.889 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.889 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.889 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.889 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:05.889 { 00:21:05.889 "cntlid": 117, 00:21:05.889 "qid": 0, 00:21:05.889 "state": "enabled", 00:21:05.889 "thread": "nvmf_tgt_poll_group_000", 00:21:05.889 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:05.889 "listen_address": { 00:21:05.889 "trtype": "TCP", 
00:21:05.889 "adrfam": "IPv4", 00:21:05.889 "traddr": "10.0.0.2", 00:21:05.889 "trsvcid": "4420" 00:21:05.889 }, 00:21:05.889 "peer_address": { 00:21:05.889 "trtype": "TCP", 00:21:05.889 "adrfam": "IPv4", 00:21:05.889 "traddr": "10.0.0.1", 00:21:05.889 "trsvcid": "46562" 00:21:05.889 }, 00:21:05.889 "auth": { 00:21:05.889 "state": "completed", 00:21:05.889 "digest": "sha512", 00:21:05.889 "dhgroup": "ffdhe3072" 00:21:05.889 } 00:21:05.889 } 00:21:05.889 ]' 00:21:05.889 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:05.889 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:05.889 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:05.889 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:05.889 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:06.148 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:06.148 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:06.148 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:06.148 05:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzMxMTU3M2Q4ZDdhNWZjNmMyMjI3YjA5MDljMmNhNGU2OTY3ODQzNDEwZGQwNWM2zlgrgA==: --dhchap-ctrl-secret DHHC-1:01:YTM2NjMxNDU0YTBlNmZmZjE5OGRhZDcyZDI5MTk3ZGQ9linJ: 00:21:06.148 05:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MzMxMTU3M2Q4ZDdhNWZjNmMyMjI3YjA5MDljMmNhNGU2OTY3ODQzNDEwZGQwNWM2zlgrgA==: --dhchap-ctrl-secret DHHC-1:01:YTM2NjMxNDU0YTBlNmZmZjE5OGRhZDcyZDI5MTk3ZGQ9linJ: 00:21:06.714 05:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:06.714 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:06.714 05:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:06.714 05:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.714 05:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.714 05:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.714 05:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:06.714 05:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:06.714 05:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:06.973 05:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:21:06.973 05:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:06.973 05:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:06.973 05:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:06.973 05:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:06.973 05:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:06.973 05:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:21:06.973 05:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.973 05:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.973 05:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.973 05:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:06.973 05:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:06.973 05:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:07.231 00:21:07.231 05:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:07.231 05:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:07.231 05:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:07.490 05:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:07.490 05:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:07.490 05:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.490 05:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.490 05:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.490 05:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:07.490 { 00:21:07.490 "cntlid": 119, 00:21:07.490 "qid": 0, 00:21:07.490 "state": "enabled", 00:21:07.490 "thread": "nvmf_tgt_poll_group_000", 00:21:07.490 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:07.490 "listen_address": { 00:21:07.490 "trtype": "TCP", 00:21:07.490 "adrfam": "IPv4", 00:21:07.490 "traddr": "10.0.0.2", 00:21:07.490 "trsvcid": "4420" 00:21:07.490 }, 00:21:07.490 "peer_address": { 00:21:07.490 "trtype": "TCP", 00:21:07.490 "adrfam": "IPv4", 00:21:07.490 "traddr": "10.0.0.1", 00:21:07.490 "trsvcid": "42454" 00:21:07.490 }, 00:21:07.490 "auth": { 00:21:07.490 "state": "completed", 00:21:07.490 "digest": "sha512", 00:21:07.490 "dhgroup": "ffdhe3072" 00:21:07.490 } 00:21:07.490 } 00:21:07.490 ]' 00:21:07.490 05:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:07.490 05:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:07.490 05:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:07.490 05:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:07.490 05:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:07.490 05:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:07.490 05:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:07.490 05:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:07.748 05:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzE0NGI3NDJhMDJjOTcwYzZhYzExNThlOTUwMmEyOGY4NDI1NzFhYzc5ZWViM2IzMzRlNWRiZTJjYTY1ZmE4NBJ0s7I=: 00:21:07.748 05:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YzE0NGI3NDJhMDJjOTcwYzZhYzExNThlOTUwMmEyOGY4NDI1NzFhYzc5ZWViM2IzMzRlNWRiZTJjYTY1ZmE4NBJ0s7I=: 00:21:08.315 05:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:08.315 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:08.315 05:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:08.315 05:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.315 05:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.315 05:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.315 05:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:08.315 05:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:08.315 05:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:08.315 05:37:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:08.573 05:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:21:08.573 05:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:08.573 05:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:08.573 05:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:08.573 05:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:08.573 05:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:08.573 05:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:08.573 05:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.573 05:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.573 05:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.573 05:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:08.573 05:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:08.573 05:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:08.832 00:21:08.832 05:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:08.832 05:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:08.832 05:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:09.090 05:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:09.090 05:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:09.090 05:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.090 05:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.090 05:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.090 05:37:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:09.090 { 00:21:09.090 "cntlid": 121, 00:21:09.090 "qid": 0, 00:21:09.090 "state": "enabled", 00:21:09.090 "thread": "nvmf_tgt_poll_group_000", 00:21:09.090 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:09.090 "listen_address": { 00:21:09.090 "trtype": "TCP", 00:21:09.090 "adrfam": "IPv4", 00:21:09.090 "traddr": "10.0.0.2", 00:21:09.090 "trsvcid": "4420" 00:21:09.090 }, 00:21:09.090 "peer_address": { 00:21:09.090 "trtype": "TCP", 00:21:09.090 "adrfam": "IPv4", 00:21:09.090 "traddr": "10.0.0.1", 00:21:09.090 "trsvcid": "42470" 00:21:09.090 }, 00:21:09.090 "auth": { 00:21:09.090 "state": "completed", 00:21:09.090 "digest": "sha512", 00:21:09.090 "dhgroup": "ffdhe4096" 00:21:09.090 } 00:21:09.090 } 00:21:09.090 ]' 00:21:09.090 05:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:09.090 05:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:09.090 05:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:09.090 05:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:09.090 05:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:09.090 05:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:09.090 05:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:09.090 05:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:09.348 05:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:N2MyMmE5Mzc2MGYxMTFkMDQ3ZGJhZDNmNGRiZjE3MzQ4N2U1ZDk0NTUzMGM4NGIwp1HX7Q==: --dhchap-ctrl-secret DHHC-1:03:OTk1MjczZTZmOTMwNTk3OGQ5OTE1OWZkNzlmYWYwY2RhNDZjMjExNmE5YThiNTlhMmQ1NjFmNzFlMWExY2M5ZNuJBNw=: 00:21:09.348 05:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:N2MyMmE5Mzc2MGYxMTFkMDQ3ZGJhZDNmNGRiZjE3MzQ4N2U1ZDk0NTUzMGM4NGIwp1HX7Q==: --dhchap-ctrl-secret DHHC-1:03:OTk1MjczZTZmOTMwNTk3OGQ5OTE1OWZkNzlmYWYwY2RhNDZjMjExNmE5YThiNTlhMmQ1NjFmNzFlMWExY2M5ZNuJBNw=: 00:21:09.915 05:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:09.915 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:09.915 05:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:09.915 05:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.915 05:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.915 05:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
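
Besides the SPDK-host path, every key is also exercised through the kernel initiator, as in the nvme connect/disconnect pair above: nvme-cli is handed the raw DHHC-1 secrets directly, and the host entry is removed from the subsystem once the connection is torn down. A sketch of that leg with the secret strings elided (the full DHHC-1:00/DHHC-1:03 values appear verbatim in the trace; $rpc and $hostnqn as defined in the first sketch):

    # Connect through the kernel host using the configured secrets.
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$hostnqn" --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 \
        --dhchap-secret 'DHHC-1:00:...' --dhchap-ctrl-secret 'DHHC-1:03:...'
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0

    # Drop the host from the subsystem before the next digest/dhgroup round.
    $rpc nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"
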
00:21:09.915 05:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:09.915 05:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:09.915 05:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:10.173 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:21:10.173 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:10.173 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:10.173 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:10.173 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:10.173 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:10.173 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:10.173 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.173 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.173 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.173 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:10.173 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:10.173 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:10.431 00:21:10.431 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:10.431 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:10.431 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:10.690 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:10.690 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:10.690 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.690 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.690 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.690 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:10.690 { 00:21:10.690 "cntlid": 123, 00:21:10.690 "qid": 0, 00:21:10.690 "state": "enabled", 00:21:10.690 "thread": "nvmf_tgt_poll_group_000", 00:21:10.690 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:10.690 "listen_address": { 00:21:10.690 "trtype": "TCP", 00:21:10.690 "adrfam": "IPv4", 00:21:10.690 "traddr": "10.0.0.2", 00:21:10.690 "trsvcid": "4420" 00:21:10.690 }, 00:21:10.690 "peer_address": { 00:21:10.690 "trtype": "TCP", 00:21:10.690 "adrfam": "IPv4", 00:21:10.690 "traddr": "10.0.0.1", 00:21:10.690 "trsvcid": "42492" 00:21:10.690 }, 00:21:10.690 "auth": { 00:21:10.690 "state": "completed", 00:21:10.690 "digest": "sha512", 00:21:10.690 "dhgroup": "ffdhe4096" 00:21:10.690 } 00:21:10.690 } 00:21:10.690 ]' 00:21:10.690 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:10.690 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:10.690 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:10.690 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:10.690 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:10.690 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:10.690 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:10.690 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:10.948 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTc0YWM3MjQxYWQ5ODgwZGJkZWY0N2I2OGZkMzE0OGKT21qS: --dhchap-ctrl-secret DHHC-1:02:Nzk0Zjc2MDc3NDIyNTA3OWUxNzI5ZDNlZTVlMDgzNTdhMmYzMjExYzg3NmE2YmQw3CciCg==: 00:21:10.948 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OTc0YWM3MjQxYWQ5ODgwZGJkZWY0N2I2OGZkMzE0OGKT21qS: --dhchap-ctrl-secret DHHC-1:02:Nzk0Zjc2MDc3NDIyNTA3OWUxNzI5ZDNlZTVlMDgzNTdhMmYzMjExYzg3NmE2YmQw3CciCg==: 00:21:11.514 05:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:11.514 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:11.514 05:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:11.514 05:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.514 05:37:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.514 05:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.514 05:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:11.514 05:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:11.514 05:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:11.773 05:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:21:11.773 05:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:11.773 05:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:11.773 05:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:11.773 05:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:11.773 05:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:11.773 05:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:11.773 05:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.773 05:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.773 05:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.773 05:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:11.773 05:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:11.773 05:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:12.032 00:21:12.032 05:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:12.032 05:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:12.032 05:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:12.290 05:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:12.290 05:37:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:12.290 05:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.290 05:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.290 05:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.290 05:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:12.290 { 00:21:12.290 "cntlid": 125, 00:21:12.290 "qid": 0, 00:21:12.290 "state": "enabled", 00:21:12.290 "thread": "nvmf_tgt_poll_group_000", 00:21:12.290 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:12.290 "listen_address": { 00:21:12.290 "trtype": "TCP", 00:21:12.290 "adrfam": "IPv4", 00:21:12.290 "traddr": "10.0.0.2", 00:21:12.290 "trsvcid": "4420" 00:21:12.290 }, 00:21:12.290 "peer_address": { 00:21:12.290 "trtype": "TCP", 00:21:12.290 "adrfam": "IPv4", 00:21:12.290 "traddr": "10.0.0.1", 00:21:12.290 "trsvcid": "42520" 00:21:12.290 }, 00:21:12.290 "auth": { 00:21:12.290 "state": "completed", 00:21:12.290 "digest": "sha512", 00:21:12.290 "dhgroup": "ffdhe4096" 00:21:12.290 } 00:21:12.290 } 00:21:12.290 ]' 00:21:12.290 05:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:12.290 05:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:12.290 05:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:12.290 05:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:12.290 05:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:12.291 05:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:12.291 05:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:12.291 05:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:12.549 05:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzMxMTU3M2Q4ZDdhNWZjNmMyMjI3YjA5MDljMmNhNGU2OTY3ODQzNDEwZGQwNWM2zlgrgA==: --dhchap-ctrl-secret DHHC-1:01:YTM2NjMxNDU0YTBlNmZmZjE5OGRhZDcyZDI5MTk3ZGQ9linJ: 00:21:12.549 05:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MzMxMTU3M2Q4ZDdhNWZjNmMyMjI3YjA5MDljMmNhNGU2OTY3ODQzNDEwZGQwNWM2zlgrgA==: --dhchap-ctrl-secret DHHC-1:01:YTM2NjMxNDU0YTBlNmZmZjE5OGRhZDcyZDI5MTk3ZGQ9linJ: 00:21:13.115 05:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:13.115 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:13.115 05:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:13.115 05:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.115 05:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.115 05:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.115 05:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:13.115 05:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:13.115 05:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:13.374 05:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:21:13.374 05:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:13.374 05:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:13.374 05:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:13.374 05:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:13.374 05:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:13.374 05:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:21:13.374 05:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.374 05:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.374 05:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.374 05:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:13.374 05:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:13.374 05:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:13.632 00:21:13.632 05:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:13.632 05:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:13.632 05:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:13.891 05:37:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:13.891 05:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:13.891 05:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.891 05:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.891 05:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.891 05:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:13.891 { 00:21:13.891 "cntlid": 127, 00:21:13.891 "qid": 0, 00:21:13.891 "state": "enabled", 00:21:13.891 "thread": "nvmf_tgt_poll_group_000", 00:21:13.891 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:13.891 "listen_address": { 00:21:13.891 "trtype": "TCP", 00:21:13.891 "adrfam": "IPv4", 00:21:13.891 "traddr": "10.0.0.2", 00:21:13.891 "trsvcid": "4420" 00:21:13.891 }, 00:21:13.891 "peer_address": { 00:21:13.891 "trtype": "TCP", 00:21:13.891 "adrfam": "IPv4", 00:21:13.891 "traddr": "10.0.0.1", 00:21:13.891 "trsvcid": "42552" 00:21:13.891 }, 00:21:13.891 "auth": { 00:21:13.891 "state": "completed", 00:21:13.891 "digest": "sha512", 00:21:13.891 "dhgroup": "ffdhe4096" 00:21:13.891 } 00:21:13.891 } 00:21:13.891 ]' 00:21:13.891 05:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:13.891 05:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:13.891 05:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:13.891 05:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:13.891 05:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:13.891 05:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:13.891 05:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:13.891 05:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:14.149 05:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzE0NGI3NDJhMDJjOTcwYzZhYzExNThlOTUwMmEyOGY4NDI1NzFhYzc5ZWViM2IzMzRlNWRiZTJjYTY1ZmE4NBJ0s7I=: 00:21:14.149 05:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YzE0NGI3NDJhMDJjOTcwYzZhYzExNThlOTUwMmEyOGY4NDI1NzFhYzc5ZWViM2IzMzRlNWRiZTJjYTY1ZmE4NBJ0s7I=: 00:21:14.715 05:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:14.715 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:14.715 05:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:14.715 05:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.715 05:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.715 05:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.715 05:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:14.715 05:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:14.715 05:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:14.715 05:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:14.973 05:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:21:14.973 05:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:14.973 05:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:14.973 05:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:14.973 05:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:14.973 05:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:14.973 05:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:14.973 05:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.973 05:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.973 05:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.973 05:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:14.973 05:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:14.973 05:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:15.231 00:21:15.231 05:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:15.231 05:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:15.231 
05:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:15.489 05:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:15.489 05:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:15.489 05:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.489 05:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.489 05:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.489 05:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:15.489 { 00:21:15.489 "cntlid": 129, 00:21:15.489 "qid": 0, 00:21:15.489 "state": "enabled", 00:21:15.489 "thread": "nvmf_tgt_poll_group_000", 00:21:15.489 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:15.489 "listen_address": { 00:21:15.489 "trtype": "TCP", 00:21:15.489 "adrfam": "IPv4", 00:21:15.489 "traddr": "10.0.0.2", 00:21:15.489 "trsvcid": "4420" 00:21:15.489 }, 00:21:15.489 "peer_address": { 00:21:15.489 "trtype": "TCP", 00:21:15.489 "adrfam": "IPv4", 00:21:15.489 "traddr": "10.0.0.1", 00:21:15.489 "trsvcid": "42594" 00:21:15.489 }, 00:21:15.489 "auth": { 00:21:15.489 "state": "completed", 00:21:15.489 "digest": "sha512", 00:21:15.489 "dhgroup": "ffdhe6144" 00:21:15.489 } 00:21:15.489 } 00:21:15.489 ]' 00:21:15.489 05:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:15.489 05:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:15.489 05:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:15.746 05:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:15.746 05:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:15.746 05:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:15.747 05:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:15.747 05:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:15.747 05:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:N2MyMmE5Mzc2MGYxMTFkMDQ3ZGJhZDNmNGRiZjE3MzQ4N2U1ZDk0NTUzMGM4NGIwp1HX7Q==: --dhchap-ctrl-secret DHHC-1:03:OTk1MjczZTZmOTMwNTk3OGQ5OTE1OWZkNzlmYWYwY2RhNDZjMjExNmE5YThiNTlhMmQ1NjFmNzFlMWExY2M5ZNuJBNw=: 00:21:15.747 05:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:N2MyMmE5Mzc2MGYxMTFkMDQ3ZGJhZDNmNGRiZjE3MzQ4N2U1ZDk0NTUzMGM4NGIwp1HX7Q==: --dhchap-ctrl-secret 
DHHC-1:03:OTk1MjczZTZmOTMwNTk3OGQ5OTE1OWZkNzlmYWYwY2RhNDZjMjExNmE5YThiNTlhMmQ1NjFmNzFlMWExY2M5ZNuJBNw=: 00:21:16.312 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:16.312 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:16.312 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:16.312 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.312 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.312 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.312 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:16.312 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:16.312 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:16.570 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:21:16.570 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:16.570 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:16.570 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:16.570 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:16.570 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:16.570 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:16.570 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.570 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.570 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.570 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:16.570 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:16.570 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:17.136 00:21:17.136 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:17.136 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:17.136 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:17.136 05:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:17.136 05:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:17.136 05:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.136 05:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.136 05:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.136 05:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:17.136 { 00:21:17.136 "cntlid": 131, 00:21:17.136 "qid": 0, 00:21:17.136 "state": "enabled", 00:21:17.136 "thread": "nvmf_tgt_poll_group_000", 00:21:17.136 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:17.136 "listen_address": { 00:21:17.136 "trtype": "TCP", 00:21:17.136 "adrfam": "IPv4", 00:21:17.136 "traddr": "10.0.0.2", 00:21:17.136 "trsvcid": "4420" 00:21:17.136 }, 00:21:17.136 "peer_address": { 00:21:17.136 "trtype": "TCP", 00:21:17.136 "adrfam": "IPv4", 00:21:17.136 "traddr": "10.0.0.1", 00:21:17.136 "trsvcid": "44394" 00:21:17.136 }, 00:21:17.136 "auth": { 00:21:17.136 "state": "completed", 00:21:17.136 "digest": "sha512", 00:21:17.136 "dhgroup": "ffdhe6144" 00:21:17.136 } 00:21:17.136 } 00:21:17.136 ]' 00:21:17.136 05:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:17.136 05:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:17.394 05:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:17.395 05:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:17.395 05:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:17.395 05:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:17.395 05:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:17.395 05:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:17.652 05:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTc0YWM3MjQxYWQ5ODgwZGJkZWY0N2I2OGZkMzE0OGKT21qS: --dhchap-ctrl-secret DHHC-1:02:Nzk0Zjc2MDc3NDIyNTA3OWUxNzI5ZDNlZTVlMDgzNTdhMmYzMjExYzg3NmE2YmQw3CciCg==: 00:21:17.652 05:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OTc0YWM3MjQxYWQ5ODgwZGJkZWY0N2I2OGZkMzE0OGKT21qS: --dhchap-ctrl-secret DHHC-1:02:Nzk0Zjc2MDc3NDIyNTA3OWUxNzI5ZDNlZTVlMDgzNTdhMmYzMjExYzg3NmE2YmQw3CciCg==: 00:21:18.219 05:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:18.219 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:18.219 05:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:18.219 05:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.219 05:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.219 05:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.219 05:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:18.219 05:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:18.219 05:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:18.219 05:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:21:18.219 05:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:18.219 05:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:18.219 05:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:18.219 05:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:18.219 05:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:18.219 05:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:18.219 05:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.219 05:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.219 05:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.219 05:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:18.219 05:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:18.219 05:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:18.785 00:21:18.785 05:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:18.785 05:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:18.785 05:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:18.785 05:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:18.785 05:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:18.785 05:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.785 05:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.043 05:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.043 05:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:19.043 { 00:21:19.043 "cntlid": 133, 00:21:19.043 "qid": 0, 00:21:19.043 "state": "enabled", 00:21:19.043 "thread": "nvmf_tgt_poll_group_000", 00:21:19.043 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:19.043 "listen_address": { 00:21:19.043 "trtype": "TCP", 00:21:19.043 "adrfam": "IPv4", 00:21:19.043 "traddr": "10.0.0.2", 00:21:19.043 "trsvcid": "4420" 00:21:19.043 }, 00:21:19.043 "peer_address": { 00:21:19.043 "trtype": "TCP", 00:21:19.043 "adrfam": "IPv4", 00:21:19.043 "traddr": "10.0.0.1", 00:21:19.043 "trsvcid": "44428" 00:21:19.043 }, 00:21:19.043 "auth": { 00:21:19.043 "state": "completed", 00:21:19.043 "digest": "sha512", 00:21:19.043 "dhgroup": "ffdhe6144" 00:21:19.043 } 00:21:19.043 } 00:21:19.043 ]' 00:21:19.043 05:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:19.043 05:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:19.043 05:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:19.043 05:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:19.043 05:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:19.043 05:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:19.043 05:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:19.043 05:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:19.302 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzMxMTU3M2Q4ZDdhNWZjNmMyMjI3YjA5MDljMmNhNGU2OTY3ODQzNDEwZGQwNWM2zlgrgA==: --dhchap-ctrl-secret 
DHHC-1:01:YTM2NjMxNDU0YTBlNmZmZjE5OGRhZDcyZDI5MTk3ZGQ9linJ: 00:21:19.302 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MzMxMTU3M2Q4ZDdhNWZjNmMyMjI3YjA5MDljMmNhNGU2OTY3ODQzNDEwZGQwNWM2zlgrgA==: --dhchap-ctrl-secret DHHC-1:01:YTM2NjMxNDU0YTBlNmZmZjE5OGRhZDcyZDI5MTk3ZGQ9linJ: 00:21:19.869 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:19.869 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:19.869 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:19.869 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.869 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.869 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.869 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:19.869 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:19.869 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:20.128 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:21:20.128 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:20.128 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:20.128 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:20.128 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:20.128 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:20.128 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:21:20.128 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.128 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.128 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.128 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:20.128 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key3 00:21:20.128 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:20.387 00:21:20.387 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:20.387 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:20.387 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:20.646 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:20.646 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:20.646 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.646 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.646 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.646 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:20.646 { 00:21:20.646 "cntlid": 135, 00:21:20.646 "qid": 0, 00:21:20.646 "state": "enabled", 00:21:20.646 "thread": "nvmf_tgt_poll_group_000", 00:21:20.646 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:20.646 "listen_address": { 00:21:20.646 "trtype": "TCP", 00:21:20.646 "adrfam": "IPv4", 00:21:20.646 "traddr": "10.0.0.2", 00:21:20.646 "trsvcid": "4420" 00:21:20.646 }, 00:21:20.646 "peer_address": { 00:21:20.646 "trtype": "TCP", 00:21:20.646 "adrfam": "IPv4", 00:21:20.646 "traddr": "10.0.0.1", 00:21:20.646 "trsvcid": "44462" 00:21:20.646 }, 00:21:20.646 "auth": { 00:21:20.646 "state": "completed", 00:21:20.646 "digest": "sha512", 00:21:20.646 "dhgroup": "ffdhe6144" 00:21:20.646 } 00:21:20.646 } 00:21:20.646 ]' 00:21:20.646 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:20.646 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:20.646 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:20.646 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:20.646 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:20.646 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:20.646 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:20.646 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:20.904 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:YzE0NGI3NDJhMDJjOTcwYzZhYzExNThlOTUwMmEyOGY4NDI1NzFhYzc5ZWViM2IzMzRlNWRiZTJjYTY1ZmE4NBJ0s7I=: 00:21:20.904 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YzE0NGI3NDJhMDJjOTcwYzZhYzExNThlOTUwMmEyOGY4NDI1NzFhYzc5ZWViM2IzMzRlNWRiZTJjYTY1ZmE4NBJ0s7I=: 00:21:21.471 05:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:21.471 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:21.471 05:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:21.471 05:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.471 05:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.472 05:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.472 05:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:21.472 05:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:21.472 05:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:21.472 05:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:21.731 05:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:21:21.731 05:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:21.731 05:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:21.731 05:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:21.731 05:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:21.731 05:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:21.731 05:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:21.731 05:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.731 05:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.731 05:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.731 05:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:21.731 05:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:21.731 05:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:22.299 00:21:22.299 05:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:22.299 05:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:22.299 05:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:22.299 05:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:22.299 05:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:22.299 05:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.299 05:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.299 05:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.299 05:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:22.299 { 00:21:22.299 "cntlid": 137, 00:21:22.299 "qid": 0, 00:21:22.299 "state": "enabled", 00:21:22.299 "thread": "nvmf_tgt_poll_group_000", 00:21:22.299 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:22.299 "listen_address": { 00:21:22.299 "trtype": "TCP", 00:21:22.299 "adrfam": "IPv4", 00:21:22.299 "traddr": "10.0.0.2", 00:21:22.299 "trsvcid": "4420" 00:21:22.299 }, 00:21:22.299 "peer_address": { 00:21:22.299 "trtype": "TCP", 00:21:22.299 "adrfam": "IPv4", 00:21:22.299 "traddr": "10.0.0.1", 00:21:22.299 "trsvcid": "44488" 00:21:22.299 }, 00:21:22.299 "auth": { 00:21:22.299 "state": "completed", 00:21:22.299 "digest": "sha512", 00:21:22.299 "dhgroup": "ffdhe8192" 00:21:22.299 } 00:21:22.299 } 00:21:22.299 ]' 00:21:22.299 05:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:22.558 05:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:22.558 05:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:22.558 05:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:22.558 05:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:22.558 05:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:22.558 05:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:22.558 05:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:22.817 05:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:N2MyMmE5Mzc2MGYxMTFkMDQ3ZGJhZDNmNGRiZjE3MzQ4N2U1ZDk0NTUzMGM4NGIwp1HX7Q==: --dhchap-ctrl-secret DHHC-1:03:OTk1MjczZTZmOTMwNTk3OGQ5OTE1OWZkNzlmYWYwY2RhNDZjMjExNmE5YThiNTlhMmQ1NjFmNzFlMWExY2M5ZNuJBNw=: 00:21:22.817 05:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:N2MyMmE5Mzc2MGYxMTFkMDQ3ZGJhZDNmNGRiZjE3MzQ4N2U1ZDk0NTUzMGM4NGIwp1HX7Q==: --dhchap-ctrl-secret DHHC-1:03:OTk1MjczZTZmOTMwNTk3OGQ5OTE1OWZkNzlmYWYwY2RhNDZjMjExNmE5YThiNTlhMmQ1NjFmNzFlMWExY2M5ZNuJBNw=: 00:21:23.385 05:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:23.385 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:23.385 05:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:23.385 05:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.385 05:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.385 05:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.385 05:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:23.385 05:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:23.385 05:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:23.385 05:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:21:23.385 05:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:23.385 05:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:23.385 05:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:23.385 05:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:23.385 05:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:23.385 05:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:23.385 05:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.385 05:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.385 05:37:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.385 05:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:23.385 05:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:23.385 05:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:23.952 00:21:23.952 05:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:23.952 05:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:23.952 05:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:24.211 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:24.211 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:24.211 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.211 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.211 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.211 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:24.211 { 00:21:24.211 "cntlid": 139, 00:21:24.211 "qid": 0, 00:21:24.211 "state": "enabled", 00:21:24.211 "thread": "nvmf_tgt_poll_group_000", 00:21:24.211 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:24.211 "listen_address": { 00:21:24.211 "trtype": "TCP", 00:21:24.211 "adrfam": "IPv4", 00:21:24.211 "traddr": "10.0.0.2", 00:21:24.211 "trsvcid": "4420" 00:21:24.211 }, 00:21:24.211 "peer_address": { 00:21:24.211 "trtype": "TCP", 00:21:24.211 "adrfam": "IPv4", 00:21:24.211 "traddr": "10.0.0.1", 00:21:24.211 "trsvcid": "44516" 00:21:24.211 }, 00:21:24.211 "auth": { 00:21:24.211 "state": "completed", 00:21:24.211 "digest": "sha512", 00:21:24.211 "dhgroup": "ffdhe8192" 00:21:24.211 } 00:21:24.211 } 00:21:24.211 ]' 00:21:24.211 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:24.211 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:24.211 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:24.211 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:24.211 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:24.211 05:37:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:24.211 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:24.211 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:24.470 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTc0YWM3MjQxYWQ5ODgwZGJkZWY0N2I2OGZkMzE0OGKT21qS: --dhchap-ctrl-secret DHHC-1:02:Nzk0Zjc2MDc3NDIyNTA3OWUxNzI5ZDNlZTVlMDgzNTdhMmYzMjExYzg3NmE2YmQw3CciCg==: 00:21:24.470 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OTc0YWM3MjQxYWQ5ODgwZGJkZWY0N2I2OGZkMzE0OGKT21qS: --dhchap-ctrl-secret DHHC-1:02:Nzk0Zjc2MDc3NDIyNTA3OWUxNzI5ZDNlZTVlMDgzNTdhMmYzMjExYzg3NmE2YmQw3CciCg==: 00:21:25.038 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:25.039 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:25.039 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:25.039 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.039 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.039 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.039 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:25.039 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:25.039 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:25.297 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:21:25.297 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:25.297 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:25.297 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:25.297 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:25.297 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:25.297 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:25.297 05:37:25 
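The secrets handed to nvme connect use the standard NVMe DH-HMAC-CHAP representation, DHHC-1:<t>:<base64 key material>:, where the two-digit <t> names the hash transformation applied to the configured secret (00 = none, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512); the four keys in this run cover all four variants. Assuming an nvme-cli build that ships the gen-dhchap-key helper, a secret of the same shape can be produced with something like:

# assumption: gen-dhchap-key with these flags is available in this nvme-cli build
nvme gen-dhchap-key --hmac=1 --key-length=32 --nqn nqn.2014-08.org.nvmexpress:uuid:<hostid>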
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.297 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.297 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.297 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:25.297 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:25.297 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:25.864 00:21:25.864 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:25.864 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:25.864 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:26.123 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:26.123 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:26.123 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.123 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.123 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.123 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:26.123 { 00:21:26.123 "cntlid": 141, 00:21:26.123 "qid": 0, 00:21:26.123 "state": "enabled", 00:21:26.123 "thread": "nvmf_tgt_poll_group_000", 00:21:26.123 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:26.123 "listen_address": { 00:21:26.123 "trtype": "TCP", 00:21:26.123 "adrfam": "IPv4", 00:21:26.123 "traddr": "10.0.0.2", 00:21:26.123 "trsvcid": "4420" 00:21:26.123 }, 00:21:26.123 "peer_address": { 00:21:26.123 "trtype": "TCP", 00:21:26.123 "adrfam": "IPv4", 00:21:26.123 "traddr": "10.0.0.1", 00:21:26.123 "trsvcid": "44544" 00:21:26.123 }, 00:21:26.123 "auth": { 00:21:26.123 "state": "completed", 00:21:26.123 "digest": "sha512", 00:21:26.123 "dhgroup": "ffdhe8192" 00:21:26.123 } 00:21:26.123 } 00:21:26.123 ]' 00:21:26.123 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:26.123 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:26.123 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:26.123 05:37:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:26.123 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:26.123 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:26.123 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:26.123 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:26.382 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzMxMTU3M2Q4ZDdhNWZjNmMyMjI3YjA5MDljMmNhNGU2OTY3ODQzNDEwZGQwNWM2zlgrgA==: --dhchap-ctrl-secret DHHC-1:01:YTM2NjMxNDU0YTBlNmZmZjE5OGRhZDcyZDI5MTk3ZGQ9linJ: 00:21:26.382 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MzMxMTU3M2Q4ZDdhNWZjNmMyMjI3YjA5MDljMmNhNGU2OTY3ODQzNDEwZGQwNWM2zlgrgA==: --dhchap-ctrl-secret DHHC-1:01:YTM2NjMxNDU0YTBlNmZmZjE5OGRhZDcyZDI5MTk3ZGQ9linJ: 00:21:26.950 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:26.950 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:26.950 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:26.950 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.950 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.950 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.950 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:26.950 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:26.950 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:27.209 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:21:27.209 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:27.209 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:27.209 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:27.209 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:27.209 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:27.209 05:37:27 
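The ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) line seen in each pass is plain bash :+ alternation: the --dhchap-ctrlr-key argument pair is emitted only when a controller key exists at that index, which is why the key3 pass below registers the host with --dhchap-key key3 alone and authenticates unidirectionally. A self-contained illustration of the expansion:

ckeys=("c0" "c1" "c2" "")            # index 3 intentionally empty, like ckey3 here
for i in 0 3; do
  ckey=(${ckeys[$i]:+--dhchap-ctrlr-key "ckey$i"})
  echo "key$i -> ${ckey[@]:-(no ctrlr key flag)}"
done
# key0 -> --dhchap-ctrlr-key ckey0
# key3 -> (no ctrlr key flag)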
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:21:27.209 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.209 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.209 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.209 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:27.209 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:27.209 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:27.777 00:21:27.777 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:27.777 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:27.777 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:27.777 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:27.777 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:27.777 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.777 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.777 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.777 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:27.777 { 00:21:27.777 "cntlid": 143, 00:21:27.777 "qid": 0, 00:21:27.777 "state": "enabled", 00:21:27.777 "thread": "nvmf_tgt_poll_group_000", 00:21:27.777 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:27.777 "listen_address": { 00:21:27.777 "trtype": "TCP", 00:21:27.777 "adrfam": "IPv4", 00:21:27.777 "traddr": "10.0.0.2", 00:21:27.777 "trsvcid": "4420" 00:21:27.777 }, 00:21:27.777 "peer_address": { 00:21:27.777 "trtype": "TCP", 00:21:27.777 "adrfam": "IPv4", 00:21:27.777 "traddr": "10.0.0.1", 00:21:27.777 "trsvcid": "53888" 00:21:27.777 }, 00:21:27.777 "auth": { 00:21:27.777 "state": "completed", 00:21:27.777 "digest": "sha512", 00:21:27.777 "dhgroup": "ffdhe8192" 00:21:27.777 } 00:21:27.777 } 00:21:27.777 ]' 00:21:27.777 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:27.777 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:27.777 
05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:28.036 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:28.036 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:28.036 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:28.036 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:28.036 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:28.295 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzE0NGI3NDJhMDJjOTcwYzZhYzExNThlOTUwMmEyOGY4NDI1NzFhYzc5ZWViM2IzMzRlNWRiZTJjYTY1ZmE4NBJ0s7I=: 00:21:28.295 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YzE0NGI3NDJhMDJjOTcwYzZhYzExNThlOTUwMmEyOGY4NDI1NzFhYzc5ZWViM2IzMzRlNWRiZTJjYTY1ZmE4NBJ0s7I=: 00:21:28.864 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:28.864 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:28.864 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:28.864 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.864 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.864 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.864 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:21:28.864 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:21:28.864 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:21:28.864 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:28.864 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:28.864 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:28.864 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:21:28.864 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:28.864 05:37:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:28.864 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:28.864 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:28.864 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:28.864 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:28.864 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.864 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.864 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.864 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:28.864 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:28.864 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:29.431 00:21:29.431 05:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:29.431 05:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:29.431 05:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:29.690 05:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:29.690 05:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:29.690 05:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.690 05:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.690 05:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.690 05:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:29.690 { 00:21:29.690 "cntlid": 145, 00:21:29.690 "qid": 0, 00:21:29.690 "state": "enabled", 00:21:29.690 "thread": "nvmf_tgt_poll_group_000", 00:21:29.690 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:29.690 "listen_address": { 00:21:29.690 "trtype": "TCP", 00:21:29.690 "adrfam": "IPv4", 00:21:29.690 "traddr": "10.0.0.2", 00:21:29.690 "trsvcid": "4420" 00:21:29.690 }, 00:21:29.690 "peer_address": { 00:21:29.690 
"trtype": "TCP", 00:21:29.690 "adrfam": "IPv4", 00:21:29.690 "traddr": "10.0.0.1", 00:21:29.690 "trsvcid": "53908" 00:21:29.690 }, 00:21:29.690 "auth": { 00:21:29.690 "state": "completed", 00:21:29.690 "digest": "sha512", 00:21:29.690 "dhgroup": "ffdhe8192" 00:21:29.690 } 00:21:29.690 } 00:21:29.690 ]' 00:21:29.690 05:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:29.690 05:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:29.690 05:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:29.690 05:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:29.690 05:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:29.690 05:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:29.690 05:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:29.690 05:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:29.949 05:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:N2MyMmE5Mzc2MGYxMTFkMDQ3ZGJhZDNmNGRiZjE3MzQ4N2U1ZDk0NTUzMGM4NGIwp1HX7Q==: --dhchap-ctrl-secret DHHC-1:03:OTk1MjczZTZmOTMwNTk3OGQ5OTE1OWZkNzlmYWYwY2RhNDZjMjExNmE5YThiNTlhMmQ1NjFmNzFlMWExY2M5ZNuJBNw=: 00:21:29.949 05:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:N2MyMmE5Mzc2MGYxMTFkMDQ3ZGJhZDNmNGRiZjE3MzQ4N2U1ZDk0NTUzMGM4NGIwp1HX7Q==: --dhchap-ctrl-secret DHHC-1:03:OTk1MjczZTZmOTMwNTk3OGQ5OTE1OWZkNzlmYWYwY2RhNDZjMjExNmE5YThiNTlhMmQ1NjFmNzFlMWExY2M5ZNuJBNw=: 00:21:30.517 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:30.517 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:30.517 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:30.517 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.517 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.517 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.517 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 00:21:30.517 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.517 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.517 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.517 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:21:30.517 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:30.517 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:21:30.517 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:30.517 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:30.517 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:30.517 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:30.517 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:21:30.517 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:21:30.517 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:21:31.085 request: 00:21:31.085 { 00:21:31.085 "name": "nvme0", 00:21:31.085 "trtype": "tcp", 00:21:31.085 "traddr": "10.0.0.2", 00:21:31.085 "adrfam": "ipv4", 00:21:31.085 "trsvcid": "4420", 00:21:31.085 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:31.085 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:31.085 "prchk_reftag": false, 00:21:31.085 "prchk_guard": false, 00:21:31.085 "hdgst": false, 00:21:31.085 "ddgst": false, 00:21:31.085 "dhchap_key": "key2", 00:21:31.085 "allow_unrecognized_csi": false, 00:21:31.085 "method": "bdev_nvme_attach_controller", 00:21:31.085 "req_id": 1 00:21:31.085 } 00:21:31.085 Got JSON-RPC error response 00:21:31.085 response: 00:21:31.085 { 00:21:31.085 "code": -5, 00:21:31.085 "message": "Input/output error" 00:21:31.085 } 00:21:31.085 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:31.085 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:31.086 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:31.086 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:31.086 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:31.086 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.086 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.086 05:37:30 
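This is the first negative case: after the reset at @144 the host is registered with key1 only, so an attach offering key2 must fail the handshake. bdev_nvme_attach_controller surfaces the failure as JSON-RPC code -5 (Input/output error), and the NOT wrapper turns the expected non-zero exit into a pass. The same pattern outside the harness, with the placeholders used earlier:

# expect failure: key2 is not registered for this host
if scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
     -q <hostnqn> -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2; then
  echo "unexpected: attach with an unregistered key should fail" >&2
fi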
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.086 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:31.086 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.086 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.086 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.086 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:31.086 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:31.086 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:31.086 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:31.086 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:31.086 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:31.086 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:31.086 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:31.086 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:31.086 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:31.653 request: 00:21:31.653 { 00:21:31.654 "name": "nvme0", 00:21:31.654 "trtype": "tcp", 00:21:31.654 "traddr": "10.0.0.2", 00:21:31.654 "adrfam": "ipv4", 00:21:31.654 "trsvcid": "4420", 00:21:31.654 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:31.654 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:31.654 "prchk_reftag": false, 00:21:31.654 "prchk_guard": false, 00:21:31.654 "hdgst": false, 00:21:31.654 "ddgst": false, 00:21:31.654 "dhchap_key": "key1", 00:21:31.654 "dhchap_ctrlr_key": "ckey2", 00:21:31.654 "allow_unrecognized_csi": false, 00:21:31.654 "method": "bdev_nvme_attach_controller", 00:21:31.654 "req_id": 1 00:21:31.654 } 00:21:31.654 Got JSON-RPC error response 00:21:31.654 response: 00:21:31.654 { 00:21:31.654 "code": -5, 00:21:31.654 "message": "Input/output error" 00:21:31.654 } 00:21:31.654 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:31.654 05:37:31 
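The @150 case just above exercises the other direction of the handshake: the host entry carries key1/ckey1, so presenting the valid key1 while demanding the wrong controller key ckey2 still aborts with the same -5. The @155 case that follows flips it once more, re-adding the host with --dhchap-key key1 and no controller key, so a host that insists on bidirectional authentication cannot succeed either. A sketch of that last setup, same placeholders:

scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 <hostnqn> --dhchap-key key1   # no ctrlr key
# a host-side attach that demands a controller response is now expected to fail
if scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
     -q <hostnqn> -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1; then
  echo "unexpected: bidirectional auth should fail when the target has no controller key" >&2
fi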
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:31.654 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:31.654 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:31.654 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:31.654 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.654 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.654 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.654 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 00:21:31.654 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.654 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.654 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.654 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:31.654 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:31.654 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:31.654 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:31.654 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:31.654 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:31.654 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:31.654 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:31.654 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:31.654 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:31.913 request: 00:21:31.913 { 00:21:31.913 "name": "nvme0", 00:21:31.913 "trtype": "tcp", 00:21:31.913 "traddr": "10.0.0.2", 00:21:31.913 "adrfam": "ipv4", 00:21:31.913 "trsvcid": "4420", 00:21:31.913 
"subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:31.913 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:31.913 "prchk_reftag": false, 00:21:31.913 "prchk_guard": false, 00:21:31.913 "hdgst": false, 00:21:31.913 "ddgst": false, 00:21:31.913 "dhchap_key": "key1", 00:21:31.913 "dhchap_ctrlr_key": "ckey1", 00:21:31.913 "allow_unrecognized_csi": false, 00:21:31.913 "method": "bdev_nvme_attach_controller", 00:21:31.913 "req_id": 1 00:21:31.913 } 00:21:31.913 Got JSON-RPC error response 00:21:31.913 response: 00:21:31.913 { 00:21:31.913 "code": -5, 00:21:31.913 "message": "Input/output error" 00:21:31.913 } 00:21:31.913 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:31.913 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:31.913 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:31.913 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:31.913 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:31.913 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.913 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.913 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.913 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 313939 00:21:31.913 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 313939 ']' 00:21:31.913 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 313939 00:21:31.913 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:21:31.913 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:31.913 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 313939 00:21:31.913 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:31.913 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:31.913 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 313939' 00:21:31.913 killing process with pid 313939 00:21:31.913 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 313939 00:21:31.913 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 313939 00:21:32.172 05:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:21:32.172 05:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:32.172 05:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:32.172 05:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- 
# set +x 00:21:32.172 05:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=336051 00:21:32.172 05:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 336051 00:21:32.172 05:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:21:32.172 05:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 336051 ']' 00:21:32.172 05:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:32.172 05:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:32.172 05:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:32.172 05:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:32.172 05:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.431 05:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:32.431 05:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:21:32.431 05:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:32.431 05:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:32.431 05:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.431 05:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:32.431 05:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:21:32.431 05:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 336051 00:21:32.431 05:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 336051 ']' 00:21:32.431 05:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:32.431 05:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:32.431 05:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:32.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
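The restart here sets up the second half of the test: nvmf_tgt comes back with -L nvmf_auth, enabling the auth-layer debug log flag, and --wait-for-rpc, which holds initialization until the DH-HMAC-CHAP keys can be loaded over RPC. An equivalent invocation, with the workspace path shortened to a checkout-relative form:

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
# once /var/tmp/spdk.sock is up: load keys via keyring_file_add_key, then resume with
scripts/rpc.py framework_start_init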
00:21:32.431 05:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:32.431 05:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.690 05:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:32.690 05:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:21:32.690 05:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:21:32.690 05:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.691 05:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.691 null0 00:21:32.691 05:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.691 05:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:21:32.691 05:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.rzN 00:21:32.691 05:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.691 05:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.691 05:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.691 05:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.iFR ]] 00:21:32.691 05:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.iFR 00:21:32.691 05:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.691 05:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.691 05:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.691 05:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:21:32.691 05:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.VYA 00:21:32.691 05:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.691 05:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.691 05:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.691 05:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.Djx ]] 00:21:32.691 05:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Djx 00:21:32.691 05:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.691 05:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.691 05:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.691 05:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:21:32.691 05:37:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.1WW 00:21:32.691 05:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.691 05:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.691 05:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.691 05:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.gjU ]] 00:21:32.691 05:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.gjU 00:21:32.691 05:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.691 05:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.691 05:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.691 05:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:21:32.691 05:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.Z19 00:21:32.691 05:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.691 05:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.691 05:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.691 05:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:21:32.691 05:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:21:32.691 05:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:32.691 05:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:32.691 05:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:32.950 05:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:32.950 05:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:32.950 05:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:21:32.950 05:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.950 05:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.950 05:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.950 05:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:32.950 05:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
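From this point the key material lives in the SPDK keyring: each secret file under /tmp is registered under a stable name (key0..key3 and ckey0..ckey2; note there is no ckey3), and later RPCs reference those names instead of raw secrets. A sketch of the registration plus first use, with file names as visible above:

scripts/rpc.py keyring_file_add_key key3 /tmp/spdk.key-sha512.Z19
scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 <hostnqn> --dhchap-key key3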
00:21:32.950 05:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:33.517 nvme0n1 00:21:33.517 05:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:33.517 05:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:33.517 05:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:33.776 05:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:33.776 05:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:33.776 05:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.776 05:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.776 05:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.776 05:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:33.776 { 00:21:33.776 "cntlid": 1, 00:21:33.776 "qid": 0, 00:21:33.776 "state": "enabled", 00:21:33.776 "thread": "nvmf_tgt_poll_group_000", 00:21:33.776 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:33.776 "listen_address": { 00:21:33.776 "trtype": "TCP", 00:21:33.776 "adrfam": "IPv4", 00:21:33.776 "traddr": "10.0.0.2", 00:21:33.776 "trsvcid": "4420" 00:21:33.776 }, 00:21:33.776 "peer_address": { 00:21:33.776 "trtype": "TCP", 00:21:33.776 "adrfam": "IPv4", 00:21:33.776 "traddr": "10.0.0.1", 00:21:33.776 "trsvcid": "53960" 00:21:33.776 }, 00:21:33.776 "auth": { 00:21:33.776 "state": "completed", 00:21:33.776 "digest": "sha512", 00:21:33.776 "dhgroup": "ffdhe8192" 00:21:33.776 } 00:21:33.776 } 00:21:33.776 ]' 00:21:33.776 05:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:33.776 05:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:33.776 05:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:33.776 05:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:33.776 05:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:34.034 05:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:34.034 05:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:34.034 05:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:34.034 05:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:YzE0NGI3NDJhMDJjOTcwYzZhYzExNThlOTUwMmEyOGY4NDI1NzFhYzc5ZWViM2IzMzRlNWRiZTJjYTY1ZmE4NBJ0s7I=: 00:21:34.034 05:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YzE0NGI3NDJhMDJjOTcwYzZhYzExNThlOTUwMmEyOGY4NDI1NzFhYzc5ZWViM2IzMzRlNWRiZTJjYTY1ZmE4NBJ0s7I=: 00:21:34.601 05:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:34.601 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:34.601 05:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:34.601 05:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.601 05:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.601 05:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.601 05:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:21:34.601 05:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.601 05:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.601 05:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.601 05:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:21:34.601 05:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:21:34.859 05:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:21:34.859 05:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:34.859 05:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:21:34.860 05:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:34.860 05:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:34.860 05:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:34.860 05:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:34.860 05:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:34.860 05:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:34.860 05:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:35.118 request: 00:21:35.118 { 00:21:35.118 "name": "nvme0", 00:21:35.118 "trtype": "tcp", 00:21:35.118 "traddr": "10.0.0.2", 00:21:35.118 "adrfam": "ipv4", 00:21:35.118 "trsvcid": "4420", 00:21:35.118 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:35.118 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:35.118 "prchk_reftag": false, 00:21:35.118 "prchk_guard": false, 00:21:35.118 "hdgst": false, 00:21:35.118 "ddgst": false, 00:21:35.118 "dhchap_key": "key3", 00:21:35.118 "allow_unrecognized_csi": false, 00:21:35.118 "method": "bdev_nvme_attach_controller", 00:21:35.118 "req_id": 1 00:21:35.118 } 00:21:35.118 Got JSON-RPC error response 00:21:35.118 response: 00:21:35.118 { 00:21:35.118 "code": -5, 00:21:35.118 "message": "Input/output error" 00:21:35.118 } 00:21:35.118 05:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:35.118 05:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:35.118 05:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:35.118 05:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:35.118 05:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:21:35.118 05:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:21:35.118 05:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:21:35.118 05:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:21:35.377 05:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:21:35.377 05:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:35.377 05:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:21:35.377 05:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:35.377 05:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:35.377 05:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:35.377 05:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:35.377 05:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:35.377 05:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:35.377 05:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:21:35.377 request:
00:21:35.377 {
00:21:35.377 "name": "nvme0",
00:21:35.377 "trtype": "tcp",
00:21:35.377 "traddr": "10.0.0.2",
00:21:35.377 "adrfam": "ipv4",
00:21:35.377 "trsvcid": "4420",
00:21:35.377 "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:21:35.377 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:21:35.377 "prchk_reftag": false,
00:21:35.377 "prchk_guard": false,
00:21:35.377 "hdgst": false,
00:21:35.377 "ddgst": false,
00:21:35.377 "dhchap_key": "key3",
00:21:35.377 "allow_unrecognized_csi": false,
00:21:35.377 "method": "bdev_nvme_attach_controller",
00:21:35.377 "req_id": 1
00:21:35.377 }
00:21:35.377 Got JSON-RPC error response
00:21:35.377 response:
00:21:35.377 {
00:21:35.377 "code": -5,
00:21:35.377 "message": "Input/output error"
00:21:35.377 }
00:21:35.635 05:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:35.635 05:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:35.635 05:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:35.635 05:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:35.635 05:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:21:35.635 05:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:21:35.635 05:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:21:35.635 05:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:35.635 05:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:35.635 05:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:35.635 05:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:35.635 05:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.635 05:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.635 05:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.635 05:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:35.635 05:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.635 05:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.635 05:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.635 05:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:35.635 05:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:35.635 05:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:35.635 05:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:35.635 05:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:35.635 05:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:35.635 05:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:35.635 05:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:35.635 05:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:35.635 05:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:21:36.203 request:
00:21:36.203 {
00:21:36.203 "name": "nvme0",
00:21:36.203 "trtype": "tcp",
00:21:36.203 "traddr": "10.0.0.2",
00:21:36.203 "adrfam": "ipv4",
00:21:36.203 "trsvcid": "4420",
00:21:36.203 "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:21:36.203 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:21:36.203 "prchk_reftag": false,
00:21:36.203 "prchk_guard": false,
00:21:36.203 "hdgst": false,
00:21:36.203 "ddgst": false,
00:21:36.203 "dhchap_key": "key0",
00:21:36.203 "dhchap_ctrlr_key": "key1",
00:21:36.203 "allow_unrecognized_csi": false,
00:21:36.203 "method": "bdev_nvme_attach_controller",
00:21:36.203 "req_id": 1
00:21:36.203 }
00:21:36.203 Got JSON-RPC error response
00:21:36.203 response:
00:21:36.203 {
00:21:36.203 "code": -5,
00:21:36.203 "message": "Input/output error"
00:21:36.203 }
00:21:36.203 05:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:36.203 05:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:36.203 05:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:36.203 05:37:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:36.203 05:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:21:36.203 05:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:21:36.203 05:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:21:36.464 nvme0n1 00:21:36.464 05:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:21:36.464 05:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:21:36.464 05:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:36.464 05:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:36.464 05:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:36.464 05:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:36.722 05:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 00:21:36.722 05:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.722 05:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.722 05:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.722 05:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:21:36.722 05:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:21:36.722 05:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:21:37.658 nvme0n1 00:21:37.658 05:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:21:37.658 05:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:21:37.658 05:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:37.658 05:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:37.658 05:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:37.658 05:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.658 05:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.658 05:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.658 05:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:21:37.658 05:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:21:37.658 05:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:37.917 05:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:37.917 05:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:MzMxMTU3M2Q4ZDdhNWZjNmMyMjI3YjA5MDljMmNhNGU2OTY3ODQzNDEwZGQwNWM2zlgrgA==: --dhchap-ctrl-secret DHHC-1:03:YzE0NGI3NDJhMDJjOTcwYzZhYzExNThlOTUwMmEyOGY4NDI1NzFhYzc5ZWViM2IzMzRlNWRiZTJjYTY1ZmE4NBJ0s7I=: 00:21:37.917 05:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MzMxMTU3M2Q4ZDdhNWZjNmMyMjI3YjA5MDljMmNhNGU2OTY3ODQzNDEwZGQwNWM2zlgrgA==: --dhchap-ctrl-secret DHHC-1:03:YzE0NGI3NDJhMDJjOTcwYzZhYzExNThlOTUwMmEyOGY4NDI1NzFhYzc5ZWViM2IzMzRlNWRiZTJjYTY1ZmE4NBJ0s7I=: 00:21:38.485 05:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:21:38.485 05:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:21:38.485 05:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:21:38.485 05:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:21:38.485 05:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:21:38.485 05:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:21:38.485 05:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:21:38.485 05:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:38.485 05:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:38.744 05:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 
--dhchap-key key1 00:21:38.744 05:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:38.744 05:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:21:38.744 05:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:38.744 05:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:38.744 05:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:38.744 05:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:38.744 05:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:21:38.744 05:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:21:38.744 05:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1
00:21:39.003 request:
00:21:39.003 {
00:21:39.003 "name": "nvme0",
00:21:39.003 "trtype": "tcp",
00:21:39.003 "traddr": "10.0.0.2",
00:21:39.003 "adrfam": "ipv4",
00:21:39.003 "trsvcid": "4420",
00:21:39.003 "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:21:39.003 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:21:39.003 "prchk_reftag": false,
00:21:39.003 "prchk_guard": false,
00:21:39.003 "hdgst": false,
00:21:39.003 "ddgst": false,
00:21:39.003 "dhchap_key": "key1",
00:21:39.003 "allow_unrecognized_csi": false,
00:21:39.003 "method": "bdev_nvme_attach_controller",
00:21:39.003 "req_id": 1
00:21:39.003 }
00:21:39.003 Got JSON-RPC error response
00:21:39.003 response:
00:21:39.003 {
00:21:39.003 "code": -5,
00:21:39.003 "message": "Input/output error"
00:21:39.003 }
00:21:39.003 05:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:39.003 05:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:39.003 05:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:39.003 05:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:39.003 05:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:39.003 05:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:39.003 05:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:39.939 nvme0n1 00:21:39.939 05:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:21:39.939 05:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:39.939 05:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:21:39.939 05:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:39.939 05:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:39.939 05:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:40.198 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:40.198 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.198 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.198 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.198 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:21:40.198 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:21:40.198 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:21:40.457 nvme0n1 00:21:40.457 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:21:40.457 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:21:40.457 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:40.716 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:40.716 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:40.716 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:40.975 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key3 00:21:40.975 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.975 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.975 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.975 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:OTc0YWM3MjQxYWQ5ODgwZGJkZWY0N2I2OGZkMzE0OGKT21qS: '' 2s 00:21:40.975 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:21:40.975 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:21:40.975 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:OTc0YWM3MjQxYWQ5ODgwZGJkZWY0N2I2OGZkMzE0OGKT21qS: 00:21:40.975 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:21:40.975 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:21:40.975 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:21:40.975 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:OTc0YWM3MjQxYWQ5ODgwZGJkZWY0N2I2OGZkMzE0OGKT21qS: ]] 00:21:40.975 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:OTc0YWM3MjQxYWQ5ODgwZGJkZWY0N2I2OGZkMzE0OGKT21qS: 00:21:40.975 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:21:40.975 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:21:40.975 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:21:42.879 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:21:42.879 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:21:42.879 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:21:42.879 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:21:42.879 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:21:42.879 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:21:42.879 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:21:42.879 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key2 00:21:42.879 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.879 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.879 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.879 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' 
DHHC-1:02:MzMxMTU3M2Q4ZDdhNWZjNmMyMjI3YjA5MDljMmNhNGU2OTY3ODQzNDEwZGQwNWM2zlgrgA==: 2s 00:21:42.879 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:21:42.879 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:21:42.879 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:21:42.879 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:MzMxMTU3M2Q4ZDdhNWZjNmMyMjI3YjA5MDljMmNhNGU2OTY3ODQzNDEwZGQwNWM2zlgrgA==: 00:21:42.879 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:21:42.879 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:21:42.879 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:21:42.879 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:MzMxMTU3M2Q4ZDdhNWZjNmMyMjI3YjA5MDljMmNhNGU2OTY3ODQzNDEwZGQwNWM2zlgrgA==: ]] 00:21:42.879 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:MzMxMTU3M2Q4ZDdhNWZjNmMyMjI3YjA5MDljMmNhNGU2OTY3ODQzNDEwZGQwNWM2zlgrgA==: 00:21:42.879 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:21:42.879 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:21:45.409 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:21:45.409 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:21:45.409 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:21:45.409 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:21:45.409 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:21:45.409 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:21:45.409 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:21:45.409 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:45.409 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:45.409 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:45.409 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.409 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.409 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.409 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:45.409 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:45.409 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:45.976 nvme0n1 00:21:45.976 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:45.976 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.976 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.976 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.976 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:45.976 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:46.235 05:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:21:46.235 05:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:21:46.235 05:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:46.493 05:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:46.493 05:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:46.493 05:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.493 05:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.493 05:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.493 05:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:21:46.493 05:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:21:46.752 05:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:21:46.752 05:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:21:46.752 05:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:21:47.011 05:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:47.011 05:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:47.011 05:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.011 05:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.011 05:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.011 05:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:21:47.011 05:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:47.011 05:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:21:47.011 05:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:21:47.011 05:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:47.011 05:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:21:47.011 05:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:47.011 05:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:21:47.011 05:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3
00:21:47.270 request:
00:21:47.270 {
00:21:47.270 "name": "nvme0",
00:21:47.270 "dhchap_key": "key1",
00:21:47.270 "dhchap_ctrlr_key": "key3",
00:21:47.270 "method": "bdev_nvme_set_keys",
00:21:47.270 "req_id": 1
00:21:47.270 }
00:21:47.270 Got JSON-RPC error response
00:21:47.270 response:
00:21:47.270 {
00:21:47.270 "code": -13,
00:21:47.270 "message": "Permission denied"
00:21:47.270 }
00:21:47.270 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:47.270 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:47.270 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:47.270 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:47.270 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:21:47.270 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:47.270 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:21:47.529 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@262 -- # (( 1 != 0 )) 00:21:47.529 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:21:48.465 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:21:48.465 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:21:48.465 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:48.724 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:21:48.724 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:48.724 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.724 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.724 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.724 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:48.724 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:48.724 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:49.660 nvme0n1 00:21:49.660 05:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:49.660 05:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.660 05:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.660 05:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.660 05:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:21:49.660 05:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:49.660 05:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:21:49.660 05:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 
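The exchange above is SPDK's DH-HMAC-CHAP re-keying flow: the target swaps the keys it will accept for this host with nvmf_subsystem_set_keys, the host then re-authenticates its live controller with bdev_nvme_set_keys, and the (( 1 != 0 )) / sleep 1s loop polls bdev_nvme_get_controllers until the handshake settles. A minimal sketch of one rotation step, assuming an already-attached controller nvme0 and illustrative key labels (the hostnqn variable is a placeholder, not taken from this run):

# Target side: only accept key2 (host) / key3 (controller) for this host from now on.
rpc.py nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
    --dhchap-key key2 --dhchap-ctrlr-key key3
# Host side: re-authenticate the existing controller against the new pair.
rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 \
    --dhchap-key key2 --dhchap-ctrlr-key key3
# Offering a pair the target no longer allows fails with JSON-RPC -13 (Permission denied),
# while a mismatched handshake on a fresh attach surfaces as -5 (Input/output error).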
00:21:49.660 05:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:49.660 05:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:21:49.660 05:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:49.660 05:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:21:49.660 05:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0
00:21:49.919 request:
00:21:49.919 {
00:21:49.919 "name": "nvme0",
00:21:49.919 "dhchap_key": "key2",
00:21:49.919 "dhchap_ctrlr_key": "key0",
00:21:49.919 "method": "bdev_nvme_set_keys",
00:21:49.919 "req_id": 1
00:21:49.919 }
00:21:49.919 Got JSON-RPC error response
00:21:49.919 response:
00:21:49.919 {
00:21:49.919 "code": -13,
00:21:49.919 "message": "Permission denied"
00:21:49.919 }
00:21:49.919 05:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:49.919 05:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:49.919 05:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:49.919 05:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:49.919 05:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:21:49.919 05:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:21:49.919 05:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:50.178 05:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:21:50.179 05:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:21:51.114 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:21:51.115 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:21:51.115 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:51.373 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:21:51.373 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:21:51.373 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:21:51.373 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 313960 00:21:51.373 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 313960 ']' 00:21:51.373 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 313960 00:21:51.373 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:21:51.373 05:37:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:51.373 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 313960 00:21:51.373 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:51.373 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:51.373 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 313960' 00:21:51.373 killing process with pid 313960 00:21:51.373 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 313960 00:21:51.373 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 313960 00:21:51.632 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:21:51.632 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:51.632 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:21:51.632 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:51.632 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:21:51.632 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:51.632 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:51.632 rmmod nvme_tcp 00:21:51.632 rmmod nvme_fabrics 00:21:51.891 rmmod nvme_keyring 00:21:51.891 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:51.891 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:21:51.891 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:21:51.891 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 336051 ']' 00:21:51.891 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 336051 00:21:51.891 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 336051 ']' 00:21:51.891 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 336051 00:21:51.891 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:21:51.891 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:51.891 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 336051 00:21:51.891 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:51.891 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:51.891 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 336051' 00:21:51.891 killing process with pid 336051 00:21:51.891 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 336051 00:21:51.891 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@978 -- # wait 336051 00:21:51.891 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:51.891 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:51.891 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:51.891 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:21:51.891 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:21:51.891 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:51.891 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:21:51.891 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:51.891 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:51.891 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:51.891 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:51.891 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:54.429 05:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:54.429 05:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.rzN /tmp/spdk.key-sha256.VYA /tmp/spdk.key-sha384.1WW /tmp/spdk.key-sha512.Z19 /tmp/spdk.key-sha512.iFR /tmp/spdk.key-sha384.Djx /tmp/spdk.key-sha256.gjU '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:21:54.429 00:21:54.429 real 2m34.380s 00:21:54.429 user 5m55.099s 00:21:54.429 sys 0m24.226s 00:21:54.429 05:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:54.429 05:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.429 ************************************ 00:21:54.429 END TEST nvmf_auth_target 00:21:54.429 ************************************ 00:21:54.429 05:37:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:21:54.429 05:37:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:21:54.429 05:37:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:21:54.429 05:37:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:54.429 05:37:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:54.429 ************************************ 00:21:54.429 START TEST nvmf_bdevio_no_huge 00:21:54.429 ************************************ 00:21:54.429 05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:21:54.429 * Looking for test storage... 
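The cleanup that closed the auth test above is a drain-then-teardown pattern: the host polls its controller list until it is empty, the target and app processes are killed, the host-side NVMe modules are unloaded, the iptables rules are restored, and the generated DHCHAP key files are deleted. A condensed sketch of that pattern (the nvmfpid variable and the key-file glob are placeholders):

# Wait for the last host controller to detach before tearing the target down.
while (( $(rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq length) != 0 )); do
    sleep 1s
done
killprocess "$nvmfpid"                 # stop the nvmf target application
modprobe -v -r nvme-tcp nvme-fabrics   # unload host transport modules (keyring as well)
rm -f /tmp/spdk.key-*                  # discard the generated DHCHAP secrets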
00:21:54.429 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:54.429 05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:54.429 05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:54.429 05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lcov --version 00:21:54.429 05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:54.429 05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:54.429 05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:54.429 05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:54.429 05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:21:54.429 05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:21:54.429 05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:21:54.429 05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:21:54.429 05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:21:54.429 05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:21:54.429 05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:21:54.429 05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:54.429 05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:21:54.429 05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:21:54.429 05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:54.429 05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:54.429 05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:21:54.429 05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:21:54.429 05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:54.429 05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:21:54.429 05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:21:54.429 05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:21:54.429 05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:21:54.429 05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:54.429 05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:21:54.429 05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:21:54.430 05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:54.430 05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:54.430 05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:21:54.430 05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:54.430 05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:54.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:54.430 --rc genhtml_branch_coverage=1 00:21:54.430 --rc genhtml_function_coverage=1 00:21:54.430 --rc genhtml_legend=1 00:21:54.430 --rc geninfo_all_blocks=1 00:21:54.430 --rc geninfo_unexecuted_blocks=1 00:21:54.430 00:21:54.430 ' 00:21:54.430 05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:54.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:54.430 --rc genhtml_branch_coverage=1 00:21:54.430 --rc genhtml_function_coverage=1 00:21:54.430 --rc genhtml_legend=1 00:21:54.430 --rc geninfo_all_blocks=1 00:21:54.430 --rc geninfo_unexecuted_blocks=1 00:21:54.430 00:21:54.430 ' 00:21:54.430 05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:54.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:54.430 --rc genhtml_branch_coverage=1 00:21:54.430 --rc genhtml_function_coverage=1 00:21:54.430 --rc genhtml_legend=1 00:21:54.430 --rc geninfo_all_blocks=1 00:21:54.430 --rc geninfo_unexecuted_blocks=1 00:21:54.430 00:21:54.430 ' 00:21:54.430 05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:54.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:54.430 --rc genhtml_branch_coverage=1 00:21:54.430 --rc genhtml_function_coverage=1 00:21:54.430 --rc genhtml_legend=1 00:21:54.430 --rc geninfo_all_blocks=1 00:21:54.430 --rc geninfo_unexecuted_blocks=1 00:21:54.430 00:21:54.430 ' 00:21:54.430 05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:54.430 05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:21:54.430 05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:54.430 05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:54.430 05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:54.430 05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:54.430 05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:54.430 05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:54.430 05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:54.430 05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:54.430 05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:54.430 05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:54.430 05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:54.430 05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:21:54.430 05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:54.430 05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:54.430 05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:54.430 05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:54.430 05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:54.430 05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:21:54.430 05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:54.430 05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:54.430 05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:54.430 05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:54.430 05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:54.430 05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:54.430 05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:21:54.430 05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:54.430 05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:21:54.430 05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:54.430 05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:54.430 05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:54.430 05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:54.430 05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:54.430 05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:21:54.430 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:54.430 05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:54.430 05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:54.430 05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:54.430 05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:54.430 05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:54.430 05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:21:54.430 05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:54.430 05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:54.430 05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:54.430 05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:54.430 05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:54.430 05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:54.430 05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:54.430 05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:54.430 05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:54.430 05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:54.430 05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:21:54.430 05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:01.000 05:37:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:01.000 05:37:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:22:01.000 05:37:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:01.000 05:37:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:01.000 05:37:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:01.000 05:37:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:01.000 05:37:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:01.000 05:37:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:22:01.000 05:37:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:01.000 05:37:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:22:01.000 05:37:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:22:01.000 
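A note on the `[: : integer expression expected` message a few lines up: common.sh line 33 runs a numeric test (`'[' '' -eq 1 ']'`) on a variable that is empty in this run, so `[` has no integer to compare and reports an error; the trace continues because the failed test simply takes the false branch. A minimal sketch of the defensive form (the variable actually tested at line 33 is not visible in this trace, so SOME_FLAG and the action are placeholders):

    # Default the possibly-empty flag to 0 so [ never sees "" -eq 1
    # (SOME_FLAG is a hypothetical stand-in for the real variable):
    if [ "${SOME_FLAG:-0}" -eq 1 ]; then
        NVMF_APP+=("--some-extra-arg") # placeholder for the real action
    fi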
05:37:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:22:01.000 05:37:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:22:01.000 05:37:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:22:01.000 05:37:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:22:01.000 05:37:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:01.001 05:37:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:01.001 05:37:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:01.001 05:37:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:01.001 05:37:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:01.001 05:37:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:01.001 05:37:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:01.001 05:37:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:01.001 05:37:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:01.001 05:37:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:01.001 05:37:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:01.001 05:37:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:01.001 05:37:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:01.001 05:37:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:01.001 05:37:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:01.001 05:37:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:01.001 05:37:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:01.001 05:37:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:01.001 05:37:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:01.001 05:37:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:01.001 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:01.001 05:37:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:01.001 05:37:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:01.001 05:37:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:01.001 05:37:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:22:01.001 05:37:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:01.001 05:37:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:01.001 05:37:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:01.001 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:01.001 05:37:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:01.001 05:37:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:01.001 05:37:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:01.001 05:37:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:01.001 05:37:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:01.001 05:37:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:01.001 05:37:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:01.001 05:37:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:01.001 05:37:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:01.001 05:37:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:01.001 05:37:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:01.001 05:37:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:01.001 05:37:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:01.001 05:37:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:01.001 05:37:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:01.001 05:37:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:01.001 Found net devices under 0000:af:00.0: cvl_0_0 00:22:01.001 05:37:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:01.001 05:37:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:01.001 05:37:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:01.001 05:37:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:01.001 05:37:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:01.001 05:37:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:01.001 05:37:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:01.001 05:37:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:01.001 05:37:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:01.001 Found net devices under 0000:af:00.1: cvl_0_1 00:22:01.001 05:37:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:01.001 05:37:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:01.001 05:37:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:22:01.001 05:37:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:01.001 05:37:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:01.001 05:37:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:01.001 05:37:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:01.001 05:37:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:01.001 05:37:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:01.001 05:37:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:01.001 05:37:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:01.001 05:37:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:01.001 05:37:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:01.001 05:37:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:01.001 05:37:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:01.001 05:37:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:01.001 05:37:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:01.001 05:37:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:01.001 05:37:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:01.001 05:37:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:01.001 05:37:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:01.001 05:37:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:01.001 05:37:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:01.001 05:37:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:01.001 05:37:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:01.001 05:38:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:01.001 05:38:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:01.001 05:38:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:01.001 05:38:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:01.001 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:01.001 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.316 ms 00:22:01.001 00:22:01.001 --- 10.0.0.2 ping statistics --- 00:22:01.001 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:01.001 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:22:01.001 05:38:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:01.001 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:01.001 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.189 ms 00:22:01.001 00:22:01.001 --- 10.0.0.1 ping statistics --- 00:22:01.001 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:01.001 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:22:01.001 05:38:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:01.001 05:38:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:22:01.001 05:38:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:01.001 05:38:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:01.001 05:38:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:01.001 05:38:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:01.001 05:38:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:01.001 05:38:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:01.001 05:38:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:01.001 05:38:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:22:01.001 05:38:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:01.001 05:38:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:01.001 05:38:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:01.001 05:38:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=342777 00:22:01.002 05:38:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:22:01.002 05:38:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 342777 00:22:01.002 05:38:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 342777 ']' 00:22:01.002 05:38:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:01.002 05:38:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:22:01.002 05:38:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:01.002 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:01.002 05:38:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:01.002 05:38:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:01.002 [2024-12-13 05:38:00.192432] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:22:01.002 [2024-12-13 05:38:00.192484] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:22:01.002 [2024-12-13 05:38:00.272815] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:01.002 [2024-12-13 05:38:00.307727] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:01.002 [2024-12-13 05:38:00.307759] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:01.002 [2024-12-13 05:38:00.307765] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:01.002 [2024-12-13 05:38:00.307771] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:01.002 [2024-12-13 05:38:00.307776] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
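By this point the trace has split the two E810 ports into a point-to-point test rig: cvl_0_0 moved into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2/24), cvl_0_1 left in the root namespace as the initiator side (10.0.0.1/24), an iptables ACCEPT rule inserted for TCP port 4420, and both directions verified with ping. nvmfappstart then launches the target inside the namespace with `--no-huge -s 1024` (regular 4 KiB pages and a 1 GiB cap, which is the point of this no-huge variant) and `-m 0x78`, a core mask selecting cores 3-6, matching the four reactor notices below. waitforlisten now polls until PID 342777 answers on /var/tmp/spdk.sock; a minimal sketch of that style of poll, assuming the helper's shape rather than quoting it (wait_for_rpc_sock is a hypothetical name; rpc_get_methods is a standard SPDK RPC):

    # Spin until the target's UNIX-domain RPC socket answers, failing
    # early if the process dies or the retry budget runs out.
    wait_for_rpc_sock() {
        local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2> /dev/null || return 1  # target exited early
            scripts/rpc.py -s "$sock" rpc_get_methods &> /dev/null && return 0
            sleep 0.1
        done
        return 1 # timed out
    }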
00:22:01.002 [2024-12-13 05:38:00.308874] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:22:01.002 [2024-12-13 05:38:00.308983] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:22:01.002 [2024-12-13 05:38:00.309086] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:22:01.002 [2024-12-13 05:38:00.309087] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:22:01.002 05:38:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:01.002 05:38:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:22:01.002 05:38:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:01.002 05:38:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:01.002 05:38:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:01.002 05:38:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:01.002 05:38:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:01.002 05:38:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.002 05:38:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:01.002 [2024-12-13 05:38:00.461082] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:01.002 05:38:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.002 05:38:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:01.002 05:38:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.002 05:38:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:01.002 Malloc0 00:22:01.002 05:38:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.002 05:38:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:01.002 05:38:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.002 05:38:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:01.002 05:38:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.002 05:38:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:01.002 05:38:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.002 05:38:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:01.002 05:38:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.002 05:38:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4420 00:22:01.002 05:38:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.002 05:38:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:01.002 [2024-12-13 05:38:00.505394] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:01.002 05:38:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.002 05:38:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:22:01.002 05:38:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:22:01.002 05:38:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:22:01.002 05:38:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:22:01.002 05:38:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:01.002 05:38:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:01.002 { 00:22:01.002 "params": { 00:22:01.002 "name": "Nvme$subsystem", 00:22:01.002 "trtype": "$TEST_TRANSPORT", 00:22:01.002 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:01.002 "adrfam": "ipv4", 00:22:01.002 "trsvcid": "$NVMF_PORT", 00:22:01.002 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:01.002 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:01.002 "hdgst": ${hdgst:-false}, 00:22:01.002 "ddgst": ${ddgst:-false} 00:22:01.002 }, 00:22:01.002 "method": "bdev_nvme_attach_controller" 00:22:01.002 } 00:22:01.002 EOF 00:22:01.002 )") 00:22:01.002 05:38:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:22:01.002 05:38:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 00:22:01.002 05:38:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:22:01.002 05:38:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:01.002 "params": { 00:22:01.002 "name": "Nvme1", 00:22:01.002 "trtype": "tcp", 00:22:01.002 "traddr": "10.0.0.2", 00:22:01.002 "adrfam": "ipv4", 00:22:01.002 "trsvcid": "4420", 00:22:01.002 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:01.002 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:01.002 "hdgst": false, 00:22:01.002 "ddgst": false 00:22:01.002 }, 00:22:01.002 "method": "bdev_nvme_attach_controller" 00:22:01.002 }' 00:22:01.002 [2024-12-13 05:38:00.555416] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
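The heredoc above is gen_nvmf_target_json expanding its per-subsystem template into the concrete `bdev_nvme_attach_controller` entry just printed, which bdevio reads through `--json /dev/fd/62`, i.e. a process substitution rather than a file on disk. An equivalent on-disk form, assuming the usual subsystems/bdev wrapper that SPDK's JSON config loader expects around such entries (the /tmp path is illustrative):

    # Same initiator config as the trace, written to a file; the
    # "name": "Nvme1" is why the bdev under test is called Nvme1n1.
    cat > /tmp/bdevio.json << 'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1"
              }
            }
          ]
        }
      ]
    }
    EOF
    ./test/bdev/bdevio/bdevio --json /tmp/bdevio.json --no-huge -s 1024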
00:22:01.002 [2024-12-13 05:38:00.555468] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid342812 ] 00:22:01.002 [2024-12-13 05:38:00.631814] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:01.002 [2024-12-13 05:38:00.669119] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:22:01.002 [2024-12-13 05:38:00.669225] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:22:01.002 [2024-12-13 05:38:00.669226] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:22:01.002 I/O targets: 00:22:01.002 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:22:01.002 00:22:01.002 00:22:01.002 CUnit - A unit testing framework for C - Version 2.1-3 00:22:01.002 http://cunit.sourceforge.net/ 00:22:01.002 00:22:01.002 00:22:01.002 Suite: bdevio tests on: Nvme1n1 00:22:01.261 Test: blockdev write read block ...passed 00:22:01.261 Test: blockdev write zeroes read block ...passed 00:22:01.261 Test: blockdev write zeroes read no split ...passed 00:22:01.261 Test: blockdev write zeroes read split ...passed 00:22:01.261 Test: blockdev write zeroes read split partial ...passed 00:22:01.261 Test: blockdev reset ...[2024-12-13 05:38:01.201213] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:22:01.261 [2024-12-13 05:38:01.201274] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fd8ea0 (9): Bad file descriptor 00:22:01.261 [2024-12-13 05:38:01.213652] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
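Two patterns in the reset and compare output here are worth decoding. The `Failed to flush tqpair ... (9): Bad file descriptor` during `blockdev reset` is a side effect of tearing the TCP qpair down mid-disconnect, and is harmless here since the reset completes ("Resetting controller successful"). In the `comparev and writev` cases further down, the pairs shown exercise the mismatch path of a fused COMPARE+WRITE: the COMPARE completes with status 02/85 (Compare Failure) and its fused WRITE is aborted with 00/09 (Command Aborted due to Failed Fused Command), exactly as the per-command decode in the trace spells out. A comparable reset can be provoked by hand against a target that loaded the same JSON (a hypothetical reproduction, not part of this test; bdev_nvme_reset_controller is a standard SPDK RPC):

    # Load the initiator config into a plain spdk_tgt, then reset the
    # attached controller by its bdev_nvme name from the JSON above.
    build/bin/spdk_tgt --json /tmp/bdevio.json &
    sleep 1 # crude wait; a real script would poll the RPC socket
    scripts/rpc.py bdev_nvme_reset_controller Nvme1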
00:22:01.261 passed 00:22:01.261 Test: blockdev write read 8 blocks ...passed 00:22:01.261 Test: blockdev write read size > 128k ...passed 00:22:01.261 Test: blockdev write read invalid size ...passed 00:22:01.261 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:01.261 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:01.261 Test: blockdev write read max offset ...passed 00:22:01.519 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:01.519 Test: blockdev writev readv 8 blocks ...passed 00:22:01.519 Test: blockdev writev readv 30 x 1block ...passed 00:22:01.519 Test: blockdev writev readv block ...passed 00:22:01.519 Test: blockdev writev readv size > 128k ...passed 00:22:01.519 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:01.519 Test: blockdev comparev and writev ...[2024-12-13 05:38:01.425170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:01.519 [2024-12-13 05:38:01.425202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:01.519 [2024-12-13 05:38:01.425217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:01.519 [2024-12-13 05:38:01.425224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:01.520 [2024-12-13 05:38:01.425461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:01.520 [2024-12-13 05:38:01.425472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:01.520 [2024-12-13 05:38:01.425483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:01.520 [2024-12-13 05:38:01.425490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:01.520 [2024-12-13 05:38:01.425714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:01.520 [2024-12-13 05:38:01.425725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:01.520 [2024-12-13 05:38:01.425736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:01.520 [2024-12-13 05:38:01.425743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:01.520 [2024-12-13 05:38:01.425969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:01.520 [2024-12-13 05:38:01.425982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:01.520 [2024-12-13 05:38:01.425995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:01.520 [2024-12-13 05:38:01.426002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:01.520 passed 00:22:01.520 Test: blockdev nvme passthru rw ...passed 00:22:01.520 Test: blockdev nvme passthru vendor specific ...[2024-12-13 05:38:01.508780] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:01.520 [2024-12-13 05:38:01.508797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:01.520 [2024-12-13 05:38:01.508895] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:01.520 [2024-12-13 05:38:01.508905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:01.520 [2024-12-13 05:38:01.509003] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:01.520 [2024-12-13 05:38:01.509016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:01.520 [2024-12-13 05:38:01.509115] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:01.520 [2024-12-13 05:38:01.509125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:01.520 passed 00:22:01.520 Test: blockdev nvme admin passthru ...passed 00:22:01.779 Test: blockdev copy ...passed 00:22:01.779 00:22:01.779 Run Summary: Type Total Ran Passed Failed Inactive 00:22:01.779 suites 1 1 n/a 0 0 00:22:01.779 tests 23 23 23 0 0 00:22:01.779 asserts 152 152 152 0 n/a 00:22:01.779 00:22:01.779 Elapsed time = 1.145 seconds 00:22:02.038 05:38:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:02.038 05:38:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.038 05:38:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:02.038 05:38:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.038 05:38:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:22:02.038 05:38:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:22:02.038 05:38:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:02.038 05:38:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:22:02.038 05:38:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:02.038 05:38:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:22:02.038 05:38:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:02.038 05:38:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:02.038 rmmod nvme_tcp 00:22:02.038 rmmod nvme_fabrics 00:22:02.038 rmmod nvme_keyring 00:22:02.038 05:38:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:02.038 05:38:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:22:02.038 05:38:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:22:02.038 05:38:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 342777 ']' 00:22:02.038 05:38:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 342777 00:22:02.038 05:38:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 342777 ']' 00:22:02.038 05:38:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 342777 00:22:02.038 05:38:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:22:02.038 05:38:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:02.038 05:38:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 342777 00:22:02.038 05:38:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:22:02.038 05:38:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:22:02.038 05:38:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 342777' 00:22:02.038 killing process with pid 342777 00:22:02.038 05:38:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 342777 00:22:02.038 05:38:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 342777 00:22:02.297 05:38:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:02.297 05:38:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:02.297 05:38:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:02.297 05:38:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:22:02.297 05:38:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:22:02.297 05:38:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:02.297 05:38:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:22:02.297 05:38:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:02.297 05:38:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:02.297 05:38:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:02.297 05:38:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:02.297 05:38:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:04.833 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:04.833 00:22:04.833 real 0m10.273s 00:22:04.833 user 0m11.655s 00:22:04.833 sys 0m5.276s 00:22:04.833 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:04.833 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@10 -- # set +x 00:22:04.833 ************************************ 00:22:04.833 END TEST nvmf_bdevio_no_huge 00:22:04.833 ************************************ 00:22:04.833 05:38:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:04.833 05:38:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:04.833 05:38:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:04.833 05:38:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:04.833 ************************************ 00:22:04.833 START TEST nvmf_tls 00:22:04.833 ************************************ 00:22:04.833 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:04.833 * Looking for test storage... 00:22:04.833 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:04.833 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:04.833 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lcov --version 00:22:04.833 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:04.833 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:04.833 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:04.833 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:04.833 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:04.833 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:22:04.833 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:22:04.833 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:22:04.833 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:22:04.833 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:22:04.833 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:22:04.833 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:22:04.833 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:04.833 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:22:04.833 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:22:04.833 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:04.833 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:04.833 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:22:04.833 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:22:04.833 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:04.833 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:22:04.833 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:22:04.833 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:22:04.834 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:22:04.834 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:04.834 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:22:04.834 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:22:04.834 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:04.834 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:04.834 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:22:04.834 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:04.834 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:04.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:04.834 --rc genhtml_branch_coverage=1 00:22:04.834 --rc genhtml_function_coverage=1 00:22:04.834 --rc genhtml_legend=1 00:22:04.834 --rc geninfo_all_blocks=1 00:22:04.834 --rc geninfo_unexecuted_blocks=1 00:22:04.834 00:22:04.834 ' 00:22:04.834 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:04.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:04.834 --rc genhtml_branch_coverage=1 00:22:04.834 --rc genhtml_function_coverage=1 00:22:04.834 --rc genhtml_legend=1 00:22:04.834 --rc geninfo_all_blocks=1 00:22:04.834 --rc geninfo_unexecuted_blocks=1 00:22:04.834 00:22:04.834 ' 00:22:04.834 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:04.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:04.834 --rc genhtml_branch_coverage=1 00:22:04.834 --rc genhtml_function_coverage=1 00:22:04.834 --rc genhtml_legend=1 00:22:04.834 --rc geninfo_all_blocks=1 00:22:04.834 --rc geninfo_unexecuted_blocks=1 00:22:04.834 00:22:04.834 ' 00:22:04.834 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:04.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:04.834 --rc genhtml_branch_coverage=1 00:22:04.834 --rc genhtml_function_coverage=1 00:22:04.834 --rc genhtml_legend=1 00:22:04.834 --rc geninfo_all_blocks=1 00:22:04.834 --rc geninfo_unexecuted_blocks=1 00:22:04.834 00:22:04.834 ' 00:22:04.834 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:04.834 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:22:04.834 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
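The `lt 1.15 2` walk above repeats the lcov version gate seen in the bdevio test, since tls.sh sources the same scripts/common.sh: split both versions on `.`, `-` and `:`, then compare numeric components left to right. A condensed sketch of that comparison, not the repo helper verbatim (it assumes purely numeric components, where the real helper validates each one with the decimal() check traced just above):

    # lt A B -> exit 0 when version A sorts strictly before version B.
    lt() {
        local -a v1 v2
        IFS='.-:' read -ra v1 <<< "$1"
        IFS='.-:' read -ra v2 <<< "$2"
        local i n=$((${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]}))
        for ((i = 0; i < n; i++)); do
            ((${v1[i]:-0} < ${v2[i]:-0})) && return 0
            ((${v1[i]:-0} > ${v2[i]:-0})) && return 1
        done
        return 1 # equal versions are not less-than
    }

Here `lt 1.15 2` succeeds on the first component (1 < 2), so the script keeps the branch and function coverage flags in LCOV_OPTS, as the exports just above show.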
00:22:04.834 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:04.834 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:04.834 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:04.834 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:04.834 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:04.834 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:04.834 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:04.834 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:04.834 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:04.834 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:22:04.834 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:22:04.834 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:04.834 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:04.834 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:04.834 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:04.834 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:04.834 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:22:04.834 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:04.834 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:04.834 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:04.834 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:04.834 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:04.834 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:04.834 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:22:04.834 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:04.834 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:22:04.834 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:04.834 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:04.834 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:04.834 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:04.834 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:04.834 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:04.834 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:04.834 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:04.834 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:04.834 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:04.834 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:04.834 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:22:04.834 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:04.834 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:04.834 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:04.834 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:04.834 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:04.834 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:04.834 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:04.834 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:04.834 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:04.834 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:04.834 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # xtrace_disable 00:22:04.834 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:11.406 05:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:11.406 05:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:22:11.406 05:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:11.406 05:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:11.406 05:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:11.406 05:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:11.406 05:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:11.406 05:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:22:11.406 05:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:11.406 05:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:22:11.406 05:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:22:11.406 05:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:22:11.406 05:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:22:11.406 05:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:22:11.406 05:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:22:11.406 05:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:11.406 05:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:11.406 05:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:11.406 05:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:11.406 05:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:11.406 05:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
00:22:11.406 05:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:11.406 05:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:11.406 05:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:11.406 05:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:11.406 05:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:11.406 05:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:11.406 05:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:11.406 05:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:11.406 05:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:11.406 05:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:11.406 05:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:11.406 05:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:11.406 05:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:11.406 05:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:11.406 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:11.406 05:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:11.406 05:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:11.406 05:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:11.406 05:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:11.406 05:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:11.406 05:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:11.406 05:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:11.406 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:11.406 05:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:11.406 05:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:11.406 05:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:11.406 05:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:11.406 05:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:11.406 05:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:11.406 05:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:11.406 05:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:11.406 05:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:11.406 05:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:11.406 05:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:11.406 05:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:11.406 05:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:11.406 05:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:11.406 05:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:11.406 05:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:11.406 Found net devices under 0000:af:00.0: cvl_0_0 00:22:11.406 05:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:11.406 05:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:11.406 05:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:11.406 05:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:11.406 05:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:11.406 05:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:11.406 05:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:11.406 05:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:11.406 05:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:11.406 Found net devices under 0000:af:00.1: cvl_0_1 00:22:11.406 05:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:11.406 05:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:11.406 05:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:22:11.406 05:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:11.406 05:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:11.406 05:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:11.406 05:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:11.406 05:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:11.406 05:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:11.406 05:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:11.406 05:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:11.406 05:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:11.406 05:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:11.406 05:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:11.406 05:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:22:11.406 05:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:11.406 05:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:11.406 05:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:11.406 05:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:11.406 05:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:11.406 05:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:11.406 05:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:11.406 05:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:11.406 05:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:11.406 05:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:11.406 05:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:11.406 05:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:11.406 05:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:11.406 05:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:11.406 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:11.406 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.214 ms 00:22:11.406 00:22:11.406 --- 10.0.0.2 ping statistics --- 00:22:11.406 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:11.406 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:22:11.406 05:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:11.406 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:11.406 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.135 ms 00:22:11.406 00:22:11.406 --- 10.0.0.1 ping statistics --- 00:22:11.406 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:11.406 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:22:11.406 05:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:11.406 05:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:22:11.406 05:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:11.406 05:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:11.406 05:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:11.406 05:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:11.406 05:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:11.406 05:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:11.406 05:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:11.406 05:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:22:11.406 05:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:11.406 05:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:11.406 05:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:11.406 05:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=346503 00:22:11.407 05:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:22:11.407 05:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 346503 00:22:11.407 05:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 346503 ']' 00:22:11.407 05:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:11.407 05:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:11.407 05:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:11.407 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:11.407 05:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:11.407 05:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:11.407 [2024-12-13 05:38:10.648543] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
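The trace above has just finished wiring the test topology: one port of the dual-port ice NIC (cvl_0_0) is moved into a private network namespace to act as the NVMe/TCP target, its sibling (cvl_0_1) stays in the root namespace as the initiator, and a single ping in each direction proves 10.0.0.1 and 10.0.0.2 can reach each other before nvmf_tgt is launched. Condensed to its essentials (interface, namespace, and address names taken from this run), the setup amounts to:

    ip netns add cvl_0_0_ns_spdk                                   # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address, inside the namespace
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                             # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target -> initiator

Every command here appears verbatim in the trace; only the harness bookkeeping around them has been dropped.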
00:22:11.407 [2024-12-13 05:38:10.648589] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:11.407 [2024-12-13 05:38:10.730345] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:11.407 [2024-12-13 05:38:10.751441] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:11.407 [2024-12-13 05:38:10.751480] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:11.407 [2024-12-13 05:38:10.751487] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:11.407 [2024-12-13 05:38:10.751493] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:11.407 [2024-12-13 05:38:10.751498] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:11.407 [2024-12-13 05:38:10.751957] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:22:11.407 05:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:11.407 05:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:11.407 05:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:11.407 05:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:11.407 05:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:11.407 05:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:11.407 05:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:22:11.407 05:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:22:11.407 true 00:22:11.407 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:11.407 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:22:11.407 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:22:11.407 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:22:11.407 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:11.407 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:11.407 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:22:11.666 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:22:11.666 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:22:11.666 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:22:11.924 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:11.924 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:22:12.183 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:22:12.183 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:22:12.183 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:12.183 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:22:12.183 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:22:12.183 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:22:12.183 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:22:12.442 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:12.442 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:22:12.701 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:22:12.701 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:22:12.701 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:22:12.960 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:12.960 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:22:12.960 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:22:12.960 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:22:12.960 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:22:12.960 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:22:12.960 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:22:12.960 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:22:12.960 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:22:12.960 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:22:12.960 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:22:12.960 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:12.960 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:22:12.960 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:22:12.960 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@730 -- # local prefix key digest 00:22:12.960 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:22:12.960 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:22:12.960 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:22:12.960 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:22:13.219 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:13.219 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:22:13.219 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.sOkPLUf3g9 00:22:13.219 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:22:13.219 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.JVMOXBRpTI 00:22:13.219 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:13.219 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:13.219 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.sOkPLUf3g9 00:22:13.219 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.JVMOXBRpTI 00:22:13.219 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:13.219 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:22:13.478 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.sOkPLUf3g9 00:22:13.478 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.sOkPLUf3g9 00:22:13.478 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:13.736 [2024-12-13 05:38:13.613471] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:13.736 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:13.995 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:13.995 [2024-12-13 05:38:13.974379] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:13.995 [2024-12-13 05:38:13.974583] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:13.995 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:14.254 malloc0 00:22:14.254 05:38:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:14.512 05:38:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.sOkPLUf3g9 00:22:14.512 05:38:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:22:14.771 05:38:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.sOkPLUf3g9 00:22:24.904 Initializing NVMe Controllers 00:22:24.904 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:24.904 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:24.904 Initialization complete. Launching workers. 00:22:24.904 ======================================================== 00:22:24.904 Latency(us) 00:22:24.904 Device Information : IOPS MiB/s Average min max 00:22:24.904 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 17022.37 66.49 3759.82 817.04 4852.68 00:22:24.904 ======================================================== 00:22:24.904 Total : 17022.37 66.49 3759.82 817.04 4852.68 00:22:24.904 00:22:24.904 05:38:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.sOkPLUf3g9 00:22:24.904 05:38:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:24.904 05:38:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:24.904 05:38:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:24.904 05:38:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.sOkPLUf3g9 00:22:24.904 05:38:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:24.904 05:38:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=348848 00:22:24.904 05:38:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:24.904 05:38:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:24.904 05:38:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 348848 /var/tmp/bdevperf.sock 00:22:24.904 05:38:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 348848 ']' 00:22:24.904 05:38:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:24.904 05:38:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:24.904 05:38:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:22:24.904 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:24.904 05:38:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:24.904 05:38:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:24.904 [2024-12-13 05:38:24.895202] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:22:24.904 [2024-12-13 05:38:24.895252] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid348848 ] 00:22:25.190 [2024-12-13 05:38:24.970917] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:25.190 [2024-12-13 05:38:24.993360] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:22:25.190 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:25.190 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:25.190 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.sOkPLUf3g9 00:22:25.472 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:25.472 [2024-12-13 05:38:25.440403] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:25.770 TLSTESTn1 00:22:25.770 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:25.770 Running I/O for 10 seconds... 
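At this point the TLS path has been exercised end to end: the target has a listener that requires TLS (-k), key0 is registered on both sides, and bdevperf has attached controller TLSTEST over the encrypted connection and started its verify workload. Stripped of the harness plumbing, the RPC sequence the test drove through scripts/rpc.py (default socket for the target, -s /var/tmp/bdevperf.sock for the initiator) is:

    # target side (nvmf_tgt started with --wait-for-rpc)
    rpc.py sock_impl_set_options -i ssl --tls-version 13
    rpc.py framework_start_init
    rpc.py nvmf_create_transport -t tcp -o
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k: listener requires TLS
    rpc.py bdev_malloc_create 32 4096 -b malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    rpc.py keyring_file_add_key key0 /tmp/tmp.sOkPLUf3g9
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

    # initiator side (bdevperf)
    rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.sOkPLUf3g9
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0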
00:22:27.870 5316.00 IOPS, 20.77 MiB/s [2024-12-13T04:38:28.821Z] 5390.50 IOPS, 21.06 MiB/s [2024-12-13T04:38:29.759Z] 5382.67 IOPS, 21.03 MiB/s [2024-12-13T04:38:30.696Z] 5203.00 IOPS, 20.32 MiB/s [2024-12-13T04:38:32.074Z] 5259.00 IOPS, 20.54 MiB/s [2024-12-13T04:38:32.641Z] 5288.50 IOPS, 20.66 MiB/s [2024-12-13T04:38:34.018Z] 5315.57 IOPS, 20.76 MiB/s [2024-12-13T04:38:34.953Z] 5332.50 IOPS, 20.83 MiB/s [2024-12-13T04:38:35.889Z] 5352.56 IOPS, 20.91 MiB/s [2024-12-13T04:38:35.889Z] 5353.30 IOPS, 20.91 MiB/s 00:22:35.874 Latency(us) 00:22:35.874 [2024-12-13T04:38:35.889Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:35.874 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:35.874 Verification LBA range: start 0x0 length 0x2000 00:22:35.874 TLSTESTn1 : 10.01 5359.08 20.93 0.00 0.00 23850.81 4930.80 32455.92 00:22:35.874 [2024-12-13T04:38:35.889Z] =================================================================================================================== 00:22:35.874 [2024-12-13T04:38:35.889Z] Total : 5359.08 20.93 0.00 0.00 23850.81 4930.80 32455.92 00:22:35.874 { 00:22:35.874 "results": [ 00:22:35.874 { 00:22:35.874 "job": "TLSTESTn1", 00:22:35.874 "core_mask": "0x4", 00:22:35.874 "workload": "verify", 00:22:35.874 "status": "finished", 00:22:35.874 "verify_range": { 00:22:35.874 "start": 0, 00:22:35.874 "length": 8192 00:22:35.874 }, 00:22:35.874 "queue_depth": 128, 00:22:35.874 "io_size": 4096, 00:22:35.874 "runtime": 10.012734, 00:22:35.874 "iops": 5359.075752936211, 00:22:35.874 "mibps": 20.933889659907074, 00:22:35.874 "io_failed": 0, 00:22:35.874 "io_timeout": 0, 00:22:35.874 "avg_latency_us": 23850.812959650848, 00:22:35.874 "min_latency_us": 4930.80380952381, 00:22:35.874 "max_latency_us": 32455.92380952381 00:22:35.874 } 00:22:35.874 ], 00:22:35.874 "core_count": 1 00:22:35.874 } 00:22:35.874 05:38:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:35.874 05:38:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 348848 00:22:35.874 05:38:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 348848 ']' 00:22:35.874 05:38:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 348848 00:22:35.874 05:38:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:35.874 05:38:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:35.874 05:38:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 348848 00:22:35.874 05:38:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:35.874 05:38:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:35.874 05:38:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 348848' 00:22:35.874 killing process with pid 348848 00:22:35.874 05:38:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 348848 00:22:35.874 Received shutdown signal, test time was about 10.000000 seconds 00:22:35.874 00:22:35.874 Latency(us) 00:22:35.874 [2024-12-13T04:38:35.889Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:35.874 [2024-12-13T04:38:35.889Z] 
=================================================================================================================== 00:22:35.874 [2024-12-13T04:38:35.889Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:35.874 05:38:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 348848 00:22:35.874 05:38:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.JVMOXBRpTI 00:22:35.874 05:38:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:22:35.875 05:38:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.JVMOXBRpTI 00:22:35.875 05:38:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:22:35.875 05:38:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:35.875 05:38:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:22:35.875 05:38:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:35.875 05:38:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.JVMOXBRpTI 00:22:35.875 05:38:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:35.875 05:38:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:35.875 05:38:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:35.875 05:38:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.JVMOXBRpTI 00:22:35.875 05:38:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:35.875 05:38:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=350608 00:22:35.875 05:38:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:35.875 05:38:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:36.134 05:38:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 350608 /var/tmp/bdevperf.sock 00:22:36.134 05:38:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 350608 ']' 00:22:36.134 05:38:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:36.134 05:38:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:36.134 05:38:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:36.134 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:22:36.134 05:38:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:36.134 05:38:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:36.134 [2024-12-13 05:38:35.934665] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:22:36.134 [2024-12-13 05:38:35.934719] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid350608 ] 00:22:36.134 [2024-12-13 05:38:36.009251] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:36.134 [2024-12-13 05:38:36.028891] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:22:36.134 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:36.134 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:36.134 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.JVMOXBRpTI 00:22:36.392 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:36.652 [2024-12-13 05:38:36.519905] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:36.652 [2024-12-13 05:38:36.524521] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:36.652 [2024-12-13 05:38:36.525130] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x227f340 (107): Transport endpoint is not connected 00:22:36.652 [2024-12-13 05:38:36.526122] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x227f340 (9): Bad file descriptor 00:22:36.652 [2024-12-13 05:38:36.527123] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:22:36.652 [2024-12-13 05:38:36.527133] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:36.652 [2024-12-13 05:38:36.527141] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:22:36.652 [2024-12-13 05:38:36.527149] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
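This first negative case repeats the attach with /tmp/tmp.JVMOXBRpTI, whose key bytes (ffeedd...) do not match the key0 the target registered for host1, so the server tears down the TLS handshake and the initiator only sees errno 107 on a dead socket, yielding the JSON-RPC Input/output error below. Both key files were produced earlier by format_interchange_psk; a simplified sketch of that transformation (mirroring the python helper traced in nvmf/common.sh: the CRC32 of the key bytes is appended little-endian before base64 encoding) is:

    format_interchange_psk() {   # simplified sketch, not the verbatim helper
        local key=$1 digest=$2   # e.g. 00112233445566778899aabbccddeeff and 1
        python3 -c 'import base64,sys,zlib; k=sys.argv[1].encode(); crc=zlib.crc32(k).to_bytes(4,"little"); print("NVMeTLSkey-1:{:02x}:{}:".format(int(sys.argv[2]), base64.b64encode(k+crc).decode()))' "$key" "$digest"
    }

For the first key this reproduces the NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: value logged above.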
00:22:36.652 request:
00:22:36.652 {
00:22:36.652 "name": "TLSTEST",
00:22:36.652 "trtype": "tcp",
00:22:36.652 "traddr": "10.0.0.2",
00:22:36.652 "adrfam": "ipv4",
00:22:36.652 "trsvcid": "4420",
00:22:36.652 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:22:36.652 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:22:36.652 "prchk_reftag": false,
00:22:36.652 "prchk_guard": false,
00:22:36.652 "hdgst": false,
00:22:36.652 "ddgst": false,
00:22:36.652 "psk": "key0",
00:22:36.652 "allow_unrecognized_csi": false,
00:22:36.652 "method": "bdev_nvme_attach_controller",
00:22:36.652 "req_id": 1
00:22:36.652 }
00:22:36.652 Got JSON-RPC error response
00:22:36.652 response:
00:22:36.652 {
00:22:36.652 "code": -5,
00:22:36.652 "message": "Input/output error"
00:22:36.652 }
00:22:36.652 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 350608
00:22:36.652 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 350608 ']'
00:22:36.652 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 350608
00:22:36.652 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname
00:22:36.652 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:22:36.652 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 350608
00:22:36.652 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:22:36.652 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:22:36.652 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 350608'
killing process with pid 350608
00:22:36.652 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 350608
Received shutdown signal, test time was about 10.000000 seconds
00:22:36.652
00:22:36.652 Latency(us)
00:22:36.652 [2024-12-13T04:38:36.667Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:36.652 [2024-12-13T04:38:36.667Z] ===================================================================================================================
00:22:36.652 [2024-12-13T04:38:36.667Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00
00:22:36.652 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 350608
00:22:36.912 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1
00:22:36.912 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1
00:22:36.912 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:22:36.912 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:22:36.912 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:22:36.912 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.sOkPLUf3g9
00:22:36.912 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0
00:22:36.912 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2
/tmp/tmp.sOkPLUf3g9 00:22:36.912 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:22:36.912 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:36.912 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:22:36.912 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:36.912 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.sOkPLUf3g9 00:22:36.912 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:36.912 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:36.912 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:22:36.912 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.sOkPLUf3g9 00:22:36.912 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:36.912 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=350837 00:22:36.912 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:36.912 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:36.912 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 350837 /var/tmp/bdevperf.sock 00:22:36.912 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 350837 ']' 00:22:36.912 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:36.912 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:36.912 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:36.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:36.912 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:36.912 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:36.912 [2024-12-13 05:38:36.796589] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:22:36.912 [2024-12-13 05:38:36.796643] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid350837 ] 00:22:36.912 [2024-12-13 05:38:36.868489] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:36.912 [2024-12-13 05:38:36.887499] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:22:37.171 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:37.171 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:37.171 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.sOkPLUf3g9 00:22:37.171 05:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:22:37.430 [2024-12-13 05:38:37.338191] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:37.430 [2024-12-13 05:38:37.345221] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:22:37.431 [2024-12-13 05:38:37.345242] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:22:37.431 [2024-12-13 05:38:37.345264] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:37.431 [2024-12-13 05:38:37.345465] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf76340 (107): Transport endpoint is not connected 00:22:37.431 [2024-12-13 05:38:37.346459] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf76340 (9): Bad file descriptor 00:22:37.431 [2024-12-13 05:38:37.347461] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:22:37.431 [2024-12-13 05:38:37.347471] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:37.431 [2024-12-13 05:38:37.347478] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:22:37.431 [2024-12-13 05:38:37.347487] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
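The second negative case keeps the right key but presents the wrong host NQN (host2), and the failure now happens where it is most legible: the target resolves the PSK through the TLS PSK identity string, which is derived from both NQNs, so an unregistered hostnqn simply has no keyring entry. The identity the target searched for above is assembled roughly as follows (the 01 suffix matching the key's :01: digest field):

    printf 'NVMe0R01 %s %s\n' "nqn.2016-06.io.spdk:host2" "nqn.2016-06.io.spdk:cnode1"
    # -> NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1

which is exactly the identity in the "Could not find PSK" errors logged above, so the connection dies during the handshake and the attach RPC again reports an Input/output error below.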
00:22:37.431 request:
00:22:37.431 {
00:22:37.431 "name": "TLSTEST",
00:22:37.431 "trtype": "tcp",
00:22:37.431 "traddr": "10.0.0.2",
00:22:37.431 "adrfam": "ipv4",
00:22:37.431 "trsvcid": "4420",
00:22:37.431 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:22:37.431 "hostnqn": "nqn.2016-06.io.spdk:host2",
00:22:37.431 "prchk_reftag": false,
00:22:37.431 "prchk_guard": false,
00:22:37.431 "hdgst": false,
00:22:37.431 "ddgst": false,
00:22:37.431 "psk": "key0",
00:22:37.431 "allow_unrecognized_csi": false,
00:22:37.431 "method": "bdev_nvme_attach_controller",
00:22:37.431 "req_id": 1
00:22:37.431 }
00:22:37.431 Got JSON-RPC error response
00:22:37.431 response:
00:22:37.431 {
00:22:37.431 "code": -5,
00:22:37.431 "message": "Input/output error"
00:22:37.431 }
00:22:37.431 05:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 350837
00:22:37.431 05:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 350837 ']'
00:22:37.431 05:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 350837
00:22:37.431 05:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname
00:22:37.431 05:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:22:37.431 05:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 350837
00:22:37.431 05:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:22:37.431 05:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:22:37.431 05:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 350837'
killing process with pid 350837
00:22:37.431 05:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 350837
Received shutdown signal, test time was about 10.000000 seconds
00:22:37.431
00:22:37.431 Latency(us)
00:22:37.431 [2024-12-13T04:38:37.446Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:37.431 [2024-12-13T04:38:37.446Z] ===================================================================================================================
00:22:37.431 [2024-12-13T04:38:37.446Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00
00:22:37.431 05:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 350837
00:22:37.690 05:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1
00:22:37.690 05:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1
00:22:37.690 05:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:22:37.690 05:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:22:37.690 05:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:22:37.690 05:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.sOkPLUf3g9
00:22:37.690 05:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0
00:22:37.690 05:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1
/tmp/tmp.sOkPLUf3g9 00:22:37.690 05:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:22:37.690 05:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:37.690 05:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:22:37.690 05:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:37.690 05:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.sOkPLUf3g9 00:22:37.690 05:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:37.690 05:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:22:37.690 05:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:37.690 05:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.sOkPLUf3g9 00:22:37.690 05:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:37.690 05:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=350979 00:22:37.690 05:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:37.690 05:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:37.690 05:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 350979 /var/tmp/bdevperf.sock 00:22:37.690 05:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 350979 ']' 00:22:37.690 05:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:37.690 05:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:37.690 05:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:37.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:37.690 05:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:37.690 05:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:37.690 [2024-12-13 05:38:37.624888] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:22:37.690 [2024-12-13 05:38:37.624936] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid350979 ] 00:22:37.690 [2024-12-13 05:38:37.693456] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:37.949 [2024-12-13 05:38:37.713844] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:22:37.949 05:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:37.949 05:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:37.949 05:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.sOkPLUf3g9 00:22:38.209 05:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:38.209 [2024-12-13 05:38:38.168284] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:38.209 [2024-12-13 05:38:38.177331] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:22:38.209 [2024-12-13 05:38:38.177351] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:22:38.209 [2024-12-13 05:38:38.177373] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:38.209 [2024-12-13 05:38:38.177560] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12b7340 (107): Transport endpoint is not connected 00:22:38.209 [2024-12-13 05:38:38.178554] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12b7340 (9): Bad file descriptor 00:22:38.209 [2024-12-13 05:38:38.179556] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:22:38.209 [2024-12-13 05:38:38.179569] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:38.209 [2024-12-13 05:38:38.179576] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:22:38.209 [2024-12-13 05:38:38.179583] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
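The third negative case mirrors the second from the other side: host1 offers its valid key, but for subsystem cnode2, so the identity lookup (NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2) fails again and the same Input/output error follows below. Each of these expected failures is wrapped in the NOT helper from autotest_common.sh, which inverts an exit status so that a failing attach keeps the suite green; in spirit it is:

    NOT() {              # simplified sketch of the expected-failure wrapper
        if "$@"; then
            return 1     # command unexpectedly succeeded: fail the test
        fi
        return 0         # non-zero exit was expected: pass
    }

    NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.sOkPLUf3g9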
00:22:38.209 request: 00:22:38.209 { 00:22:38.209 "name": "TLSTEST", 00:22:38.209 "trtype": "tcp", 00:22:38.209 "traddr": "10.0.0.2", 00:22:38.209 "adrfam": "ipv4", 00:22:38.209 "trsvcid": "4420", 00:22:38.209 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:38.209 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:38.209 "prchk_reftag": false, 00:22:38.209 "prchk_guard": false, 00:22:38.209 "hdgst": false, 00:22:38.209 "ddgst": false, 00:22:38.209 "psk": "key0", 00:22:38.209 "allow_unrecognized_csi": false, 00:22:38.209 "method": "bdev_nvme_attach_controller", 00:22:38.209 "req_id": 1 00:22:38.209 } 00:22:38.209 Got JSON-RPC error response 00:22:38.209 response: 00:22:38.209 { 00:22:38.209 "code": -5, 00:22:38.209 "message": "Input/output error" 00:22:38.209 } 00:22:38.209 05:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 350979 00:22:38.209 05:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 350979 ']' 00:22:38.209 05:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 350979 00:22:38.209 05:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:38.209 05:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:38.209 05:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 350979 00:22:38.468 05:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:38.468 05:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:38.468 05:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 350979' 00:22:38.468 killing process with pid 350979 00:22:38.468 05:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 350979 00:22:38.468 Received shutdown signal, test time was about 10.000000 seconds 00:22:38.468 00:22:38.468 Latency(us) 00:22:38.468 [2024-12-13T04:38:38.483Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:38.468 [2024-12-13T04:38:38.483Z] =================================================================================================================== 00:22:38.468 [2024-12-13T04:38:38.483Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:38.468 05:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 350979 00:22:38.468 05:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:22:38.468 05:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:22:38.468 05:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:38.468 05:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:38.468 05:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:38.468 05:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:38.468 05:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:22:38.468 05:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:38.468 05:38:38 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:22:38.468 05:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:38.468 05:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:22:38.469 05:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:38.469 05:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:38.469 05:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:38.469 05:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:38.469 05:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:38.469 05:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:22:38.469 05:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:38.469 05:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=351084 00:22:38.469 05:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:38.469 05:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:38.469 05:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 351084 /var/tmp/bdevperf.sock 00:22:38.469 05:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 351084 ']' 00:22:38.469 05:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:38.469 05:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:38.469 05:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:38.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:38.469 05:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:38.469 05:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:38.469 [2024-12-13 05:38:38.454630] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:22:38.469 [2024-12-13 05:38:38.454679] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid351084 ] 00:22:38.728 [2024-12-13 05:38:38.521662] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:38.728 [2024-12-13 05:38:38.541311] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:22:38.728 05:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:38.728 05:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:38.728 05:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:22:38.987 [2024-12-13 05:38:38.807277] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:22:38.987 [2024-12-13 05:38:38.807310] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:22:38.987 request: 00:22:38.987 { 00:22:38.987 "name": "key0", 00:22:38.987 "path": "", 00:22:38.987 "method": "keyring_file_add_key", 00:22:38.987 "req_id": 1 00:22:38.987 } 00:22:38.987 Got JSON-RPC error response 00:22:38.987 response: 00:22:38.987 { 00:22:38.987 "code": -1, 00:22:38.987 "message": "Operation not permitted" 00:22:38.987 } 00:22:38.987 05:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:39.246 [2024-12-13 05:38:39.015900] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:39.246 [2024-12-13 05:38:39.015935] bdev_nvme.c:6754:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:22:39.246 request: 00:22:39.246 { 00:22:39.246 "name": "TLSTEST", 00:22:39.246 "trtype": "tcp", 00:22:39.246 "traddr": "10.0.0.2", 00:22:39.246 "adrfam": "ipv4", 00:22:39.246 "trsvcid": "4420", 00:22:39.246 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:39.246 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:39.246 "prchk_reftag": false, 00:22:39.246 "prchk_guard": false, 00:22:39.246 "hdgst": false, 00:22:39.246 "ddgst": false, 00:22:39.246 "psk": "key0", 00:22:39.246 "allow_unrecognized_csi": false, 00:22:39.246 "method": "bdev_nvme_attach_controller", 00:22:39.246 "req_id": 1 00:22:39.246 } 00:22:39.246 Got JSON-RPC error response 00:22:39.246 response: 00:22:39.246 { 00:22:39.246 "code": -126, 00:22:39.246 "message": "Required key not available" 00:22:39.246 } 00:22:39.246 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 351084 00:22:39.246 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 351084 ']' 00:22:39.246 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 351084 00:22:39.246 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:39.246 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:39.246 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 351084 
00:22:39.246 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:39.246 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:39.246 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 351084' 00:22:39.246 killing process with pid 351084 00:22:39.246 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 351084 00:22:39.246 Received shutdown signal, test time was about 10.000000 seconds 00:22:39.246 00:22:39.246 Latency(us) 00:22:39.246 [2024-12-13T04:38:39.261Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:39.246 [2024-12-13T04:38:39.261Z] =================================================================================================================== 00:22:39.246 [2024-12-13T04:38:39.261Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:39.246 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 351084 00:22:39.246 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:22:39.246 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:22:39.246 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:39.246 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:39.246 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:39.246 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 346503 00:22:39.246 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 346503 ']' 00:22:39.246 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 346503 00:22:39.246 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:39.246 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:39.246 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 346503 00:22:39.505 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:39.506 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:39.506 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 346503' 00:22:39.506 killing process with pid 346503 00:22:39.506 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 346503 00:22:39.506 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 346503 00:22:39.506 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:22:39.506 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:22:39.506 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:22:39.506 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:22:39.506 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:22:39.506 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:22:39.506 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:22:39.506 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:22:39.506 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:22:39.506 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.CSwafsDekC 00:22:39.506 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:22:39.506 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.CSwafsDekC 00:22:39.506 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:22:39.506 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:39.506 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:39.506 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:39.506 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=351319 00:22:39.506 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 351319 00:22:39.506 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:39.506 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 351319 ']' 00:22:39.506 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:39.506 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:39.506 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:39.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:39.506 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:39.506 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:39.765 [2024-12-13 05:38:39.560536] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:22:39.765 [2024-12-13 05:38:39.560583] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:39.765 [2024-12-13 05:38:39.636029] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:39.765 [2024-12-13 05:38:39.654452] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:39.765 [2024-12-13 05:38:39.654487] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
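The key_long value above comes from format_interchange_psk, which wraps the raw key in the NVMeTLSkey-1 interchange format via the "python -" step shown. A minimal reconstruction of that step (assumptions: a little-endian CRC32 of the key bytes is appended before base64 encoding, and the "02" field is the digest indicator passed as 2 above; the key bytes are the ones echoed in the log):

key=00112233445566778899aabbccddeeff0011223344556677
python3 - "$key" <<'EOF'
import base64, sys, zlib
key = sys.argv[1].encode()                    # key material as ASCII bytes, as in the test
crc = zlib.crc32(key).to_bytes(4, "little")   # assumed 4-byte integrity trailer
print("NVMeTLSkey-1:02:{}:".format(base64.b64encode(key + crc).decode()))
EOF

Under those assumptions this prints the NVMeTLSkey-1:02:MDAx...wWXNJw==: string stored in /tmp/tmp.CSwafsDekC, which is then chmod'd to 0600 so the keyring will accept it.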
00:22:39.765 [2024-12-13 05:38:39.654494] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:39.765 [2024-12-13 05:38:39.654499] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:39.765 [2024-12-13 05:38:39.654505] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:39.765 [2024-12-13 05:38:39.654976] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:22:39.765 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:39.765 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:39.765 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:39.765 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:39.765 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:39.765 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:39.765 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.CSwafsDekC 00:22:39.765 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.CSwafsDekC 00:22:39.765 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:40.024 [2024-12-13 05:38:39.949134] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:40.024 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:40.282 05:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:40.541 [2024-12-13 05:38:40.350167] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:40.541 [2024-12-13 05:38:40.350360] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:40.541 05:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:40.541 malloc0 00:22:40.800 05:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:40.800 05:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.CSwafsDekC 00:22:41.059 05:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:22:41.318 05:38:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.CSwafsDekC 00:22:41.318 05:38:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local 
subnqn hostnqn psk 00:22:41.318 05:38:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:41.318 05:38:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:41.318 05:38:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.CSwafsDekC 00:22:41.318 05:38:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:41.318 05:38:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:41.318 05:38:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=351570 00:22:41.318 05:38:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:41.318 05:38:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 351570 /var/tmp/bdevperf.sock 00:22:41.318 05:38:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 351570 ']' 00:22:41.318 05:38:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:41.318 05:38:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:41.318 05:38:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:41.318 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:41.318 05:38:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:41.318 05:38:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:41.318 [2024-12-13 05:38:41.196458] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:22:41.318 [2024-12-13 05:38:41.196506] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid351570 ] 00:22:41.318 [2024-12-13 05:38:41.270733] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:41.318 [2024-12-13 05:38:41.292460] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:22:41.577 05:38:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:41.577 05:38:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:41.577 05:38:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.CSwafsDekC 00:22:41.577 05:38:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:41.836 [2024-12-13 05:38:41.763685] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:41.836 TLSTESTn1 00:22:42.095 05:38:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:42.095 Running I/O for 10 seconds... 00:22:43.967 4975.00 IOPS, 19.43 MiB/s [2024-12-13T04:38:45.360Z] 5257.50 IOPS, 20.54 MiB/s [2024-12-13T04:38:46.296Z] 5281.33 IOPS, 20.63 MiB/s [2024-12-13T04:38:47.233Z] 5208.50 IOPS, 20.35 MiB/s [2024-12-13T04:38:48.170Z] 5144.80 IOPS, 20.10 MiB/s [2024-12-13T04:38:49.107Z] 5209.00 IOPS, 20.35 MiB/s [2024-12-13T04:38:50.043Z] 5207.86 IOPS, 20.34 MiB/s [2024-12-13T04:38:50.979Z] 5210.25 IOPS, 20.35 MiB/s [2024-12-13T04:38:52.357Z] 5249.33 IOPS, 20.51 MiB/s [2024-12-13T04:38:52.357Z] 5274.80 IOPS, 20.60 MiB/s 00:22:52.342 Latency(us) 00:22:52.342 [2024-12-13T04:38:52.357Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:52.342 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:52.342 Verification LBA range: start 0x0 length 0x2000 00:22:52.342 TLSTESTn1 : 10.01 5280.46 20.63 0.00 0.00 24205.40 5492.54 45687.95 00:22:52.342 [2024-12-13T04:38:52.357Z] =================================================================================================================== 00:22:52.342 [2024-12-13T04:38:52.357Z] Total : 5280.46 20.63 0.00 0.00 24205.40 5492.54 45687.95 00:22:52.342 { 00:22:52.342 "results": [ 00:22:52.342 { 00:22:52.342 "job": "TLSTESTn1", 00:22:52.342 "core_mask": "0x4", 00:22:52.342 "workload": "verify", 00:22:52.342 "status": "finished", 00:22:52.342 "verify_range": { 00:22:52.342 "start": 0, 00:22:52.342 "length": 8192 00:22:52.342 }, 00:22:52.342 "queue_depth": 128, 00:22:52.342 "io_size": 4096, 00:22:52.342 "runtime": 10.013341, 00:22:52.342 "iops": 5280.455344524869, 00:22:52.342 "mibps": 20.62677868955027, 00:22:52.342 "io_failed": 0, 00:22:52.342 "io_timeout": 0, 00:22:52.342 "avg_latency_us": 24205.39985903861, 00:22:52.342 "min_latency_us": 5492.540952380952, 00:22:52.342 "max_latency_us": 45687.95428571429 00:22:52.342 } 00:22:52.342 ], 00:22:52.342 
"core_count": 1 00:22:52.342 } 00:22:52.342 05:38:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:52.342 05:38:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 351570 00:22:52.342 05:38:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 351570 ']' 00:22:52.342 05:38:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 351570 00:22:52.342 05:38:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:52.342 05:38:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:52.342 05:38:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 351570 00:22:52.342 05:38:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:52.342 05:38:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:52.342 05:38:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 351570' 00:22:52.342 killing process with pid 351570 00:22:52.343 05:38:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 351570 00:22:52.343 Received shutdown signal, test time was about 10.000000 seconds 00:22:52.343 00:22:52.343 Latency(us) 00:22:52.343 [2024-12-13T04:38:52.358Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:52.343 [2024-12-13T04:38:52.358Z] =================================================================================================================== 00:22:52.343 [2024-12-13T04:38:52.358Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:52.343 05:38:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 351570 00:22:52.343 05:38:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.CSwafsDekC 00:22:52.343 05:38:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.CSwafsDekC 00:22:52.343 05:38:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:22:52.343 05:38:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.CSwafsDekC 00:22:52.343 05:38:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:22:52.343 05:38:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:52.343 05:38:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:22:52.343 05:38:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:52.343 05:38:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.CSwafsDekC 00:22:52.343 05:38:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:52.343 05:38:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:52.343 05:38:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:52.343 
05:38:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.CSwafsDekC 00:22:52.343 05:38:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:52.343 05:38:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=353353 00:22:52.343 05:38:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:52.343 05:38:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:52.343 05:38:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 353353 /var/tmp/bdevperf.sock 00:22:52.343 05:38:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 353353 ']' 00:22:52.343 05:38:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:52.343 05:38:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:52.343 05:38:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:52.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:52.343 05:38:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:52.343 05:38:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:52.343 [2024-12-13 05:38:52.256703] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:22:52.343 [2024-12-13 05:38:52.256750] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid353353 ] 00:22:52.343 [2024-12-13 05:38:52.315520] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:52.343 [2024-12-13 05:38:52.334669] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:22:52.602 05:38:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:52.602 05:38:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:52.602 05:38:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.CSwafsDekC 00:22:52.602 [2024-12-13 05:38:52.592661] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.CSwafsDekC': 0100666 00:22:52.602 [2024-12-13 05:38:52.592695] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:22:52.602 request: 00:22:52.602 { 00:22:52.602 "name": "key0", 00:22:52.602 "path": "/tmp/tmp.CSwafsDekC", 00:22:52.602 "method": "keyring_file_add_key", 00:22:52.602 "req_id": 1 00:22:52.602 } 00:22:52.602 Got JSON-RPC error response 00:22:52.602 response: 00:22:52.602 { 00:22:52.602 "code": -1, 00:22:52.602 "message": "Operation not permitted" 00:22:52.602 } 00:22:52.861 05:38:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:52.861 [2024-12-13 05:38:52.789251] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:52.861 [2024-12-13 05:38:52.789290] bdev_nvme.c:6754:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:22:52.861 request: 00:22:52.861 { 00:22:52.861 "name": "TLSTEST", 00:22:52.861 "trtype": "tcp", 00:22:52.861 "traddr": "10.0.0.2", 00:22:52.861 "adrfam": "ipv4", 00:22:52.861 "trsvcid": "4420", 00:22:52.861 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:52.861 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:52.861 "prchk_reftag": false, 00:22:52.861 "prchk_guard": false, 00:22:52.861 "hdgst": false, 00:22:52.861 "ddgst": false, 00:22:52.861 "psk": "key0", 00:22:52.861 "allow_unrecognized_csi": false, 00:22:52.861 "method": "bdev_nvme_attach_controller", 00:22:52.861 "req_id": 1 00:22:52.861 } 00:22:52.861 Got JSON-RPC error response 00:22:52.861 response: 00:22:52.861 { 00:22:52.861 "code": -126, 00:22:52.861 "message": "Required key not available" 00:22:52.861 } 00:22:52.861 05:38:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 353353 00:22:52.861 05:38:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 353353 ']' 00:22:52.861 05:38:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 353353 00:22:52.861 05:38:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:52.861 05:38:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:52.861 05:38:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 353353 00:22:52.861 05:38:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:52.861 05:38:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:52.861 05:38:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 353353' 00:22:52.861 killing process with pid 353353 00:22:52.861 05:38:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 353353 00:22:52.861 Received shutdown signal, test time was about 10.000000 seconds 00:22:52.861 00:22:52.861 Latency(us) 00:22:52.861 [2024-12-13T04:38:52.876Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:52.861 [2024-12-13T04:38:52.876Z] =================================================================================================================== 00:22:52.861 [2024-12-13T04:38:52.876Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:52.861 05:38:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 353353 00:22:53.121 05:38:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:22:53.121 05:38:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:22:53.121 05:38:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:53.121 05:38:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:53.121 05:38:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:53.121 05:38:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 351319 00:22:53.121 05:38:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 351319 ']' 00:22:53.121 05:38:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 351319 00:22:53.121 05:38:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:53.121 05:38:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:53.121 05:38:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 351319 00:22:53.121 05:38:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:53.121 05:38:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:53.121 05:38:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 351319' 00:22:53.121 killing process with pid 351319 00:22:53.121 05:38:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 351319 00:22:53.121 05:38:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 351319 00:22:53.380 05:38:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:22:53.380 05:38:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:53.380 05:38:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:53.380 05:38:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:53.380 05:38:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=353587 
00:22:53.380 05:38:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:53.380 05:38:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 353587 00:22:53.380 05:38:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 353587 ']' 00:22:53.380 05:38:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:53.380 05:38:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:53.380 05:38:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:53.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:53.380 05:38:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:53.380 05:38:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:53.380 [2024-12-13 05:38:53.293210] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:22:53.380 [2024-12-13 05:38:53.293257] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:53.380 [2024-12-13 05:38:53.367237] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:53.380 [2024-12-13 05:38:53.384510] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:53.380 [2024-12-13 05:38:53.384545] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:53.380 [2024-12-13 05:38:53.384552] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:53.380 [2024-12-13 05:38:53.384558] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:53.380 [2024-12-13 05:38:53.384563] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
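Before the negative setup_nvmf_tgt test below, it is worth condensing what a successful setup_nvmf_tgt (as at 05:38:39 and 05:38:53 above) amounts to; every RPC name and flag here appears verbatim in this log ($rpc stands in for the full scripts/rpc.py path):

rpc=scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o                 # TCP transport init
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k: TLS listener
$rpc bdev_malloc_create 32 4096 -b malloc0           # backing bdev for the namespace
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rpc keyring_file_add_key key0 /tmp/tmp.CSwafsDekC   # PSK file; must be mode 0600
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

The run that follows repeats this sequence while the key file is still mode 0666, so keyring_file_add_key is expected to fail.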
00:22:53.380 [2024-12-13 05:38:53.385022] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:22:53.639 05:38:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:53.639 05:38:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:53.639 05:38:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:53.639 05:38:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:53.639 05:38:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:53.639 05:38:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:53.639 05:38:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.CSwafsDekC 00:22:53.639 05:38:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:22:53.640 05:38:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.CSwafsDekC 00:22:53.640 05:38:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:22:53.640 05:38:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:53.640 05:38:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:22:53.640 05:38:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:53.640 05:38:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.CSwafsDekC 00:22:53.640 05:38:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.CSwafsDekC 00:22:53.640 05:38:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:53.900 [2024-12-13 05:38:53.683232] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:53.900 05:38:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:53.900 05:38:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:54.159 [2024-12-13 05:38:54.076242] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:54.159 [2024-12-13 05:38:54.076425] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:54.159 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:54.418 malloc0 00:22:54.418 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:54.676 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.CSwafsDekC 00:22:54.677 [2024-12-13 
05:38:54.685656] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.CSwafsDekC': 0100666 00:22:54.677 [2024-12-13 05:38:54.685681] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:22:54.677 request: 00:22:54.677 { 00:22:54.677 "name": "key0", 00:22:54.677 "path": "/tmp/tmp.CSwafsDekC", 00:22:54.677 "method": "keyring_file_add_key", 00:22:54.677 "req_id": 1 00:22:54.677 } 00:22:54.677 Got JSON-RPC error response 00:22:54.677 response: 00:22:54.677 { 00:22:54.677 "code": -1, 00:22:54.677 "message": "Operation not permitted" 00:22:54.677 } 00:22:54.935 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:22:54.935 [2024-12-13 05:38:54.874160] tcp.c:3777:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:22:54.935 [2024-12-13 05:38:54.874189] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:22:54.935 request: 00:22:54.935 { 00:22:54.935 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:54.935 "host": "nqn.2016-06.io.spdk:host1", 00:22:54.935 "psk": "key0", 00:22:54.935 "method": "nvmf_subsystem_add_host", 00:22:54.935 "req_id": 1 00:22:54.935 } 00:22:54.935 Got JSON-RPC error response 00:22:54.935 response: 00:22:54.935 { 00:22:54.935 "code": -32603, 00:22:54.935 "message": "Internal error" 00:22:54.935 } 00:22:54.935 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:22:54.936 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:54.936 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:54.936 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:54.936 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 353587 00:22:54.936 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 353587 ']' 00:22:54.936 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 353587 00:22:54.936 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:54.936 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:54.936 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 353587 00:22:55.200 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:55.200 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:55.200 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 353587' 00:22:55.200 killing process with pid 353587 00:22:55.200 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 353587 00:22:55.200 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 353587 00:22:55.200 05:38:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.CSwafsDekC 00:22:55.200 05:38:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:22:55.200 05:38:55 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:55.200 05:38:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:55.200 05:38:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:55.200 05:38:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=353849 00:22:55.200 05:38:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:55.200 05:38:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 353849 00:22:55.200 05:38:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 353849 ']' 00:22:55.200 05:38:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:55.200 05:38:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:55.200 05:38:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:55.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:55.200 05:38:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:55.200 05:38:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:55.200 [2024-12-13 05:38:55.183362] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:22:55.200 [2024-12-13 05:38:55.183407] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:55.460 [2024-12-13 05:38:55.261282] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:55.460 [2024-12-13 05:38:55.278811] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:55.460 [2024-12-13 05:38:55.278846] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:55.460 [2024-12-13 05:38:55.278852] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:55.460 [2024-12-13 05:38:55.278858] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:55.460 [2024-12-13 05:38:55.278863] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
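The pair of runs above demonstrates the keyring's permission gate: with /tmp/tmp.CSwafsDekC at mode 0666, keyring_file_add_key is rejected before the key ever reaches the TLS layer, and nvmf_subsystem_add_host then fails because 'key0' was never created; after target/tls.sh@182 restores mode 0600, the same calls go through. A minimal sketch of the observed behavior (paths and messages from this log):

chmod 0666 /tmp/tmp.CSwafsDekC
scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.CSwafsDekC
# -> keyring.c: "Invalid permissions for key file '/tmp/tmp.CSwafsDekC': 0100666"
#    JSON-RPC error: {"code": -1, "message": "Operation not permitted"}
chmod 0600 /tmp/tmp.CSwafsDekC      # owner read/write only
scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.CSwafsDekC   # accepted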
00:22:55.460 [2024-12-13 05:38:55.279333] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:22:55.460 05:38:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:55.460 05:38:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:55.460 05:38:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:55.460 05:38:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:55.460 05:38:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:55.460 05:38:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:55.460 05:38:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.CSwafsDekC 00:22:55.460 05:38:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.CSwafsDekC 00:22:55.460 05:38:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:55.719 [2024-12-13 05:38:55.573625] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:55.719 05:38:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:55.978 05:38:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:55.978 [2024-12-13 05:38:55.982670] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:55.978 [2024-12-13 05:38:55.982849] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:56.235 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:56.235 malloc0 00:22:56.235 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:56.493 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.CSwafsDekC 00:22:56.752 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:22:57.011 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:57.011 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=354097 00:22:57.011 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:57.011 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 354097 /var/tmp/bdevperf.sock 00:22:57.011 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 354097 ']' 00:22:57.011 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:57.011 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:57.011 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:57.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:57.011 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:57.011 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:57.011 [2024-12-13 05:38:56.798054] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:22:57.011 [2024-12-13 05:38:56.798101] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid354097 ] 00:22:57.011 [2024-12-13 05:38:56.871322] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:57.011 [2024-12-13 05:38:56.893622] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:22:57.011 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:57.011 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:57.011 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.CSwafsDekC 00:22:57.269 05:38:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:57.527 [2024-12-13 05:38:57.369535] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:57.528 TLSTESTn1 00:22:57.528 05:38:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:22:57.787 05:38:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:22:57.787 "subsystems": [ 00:22:57.787 { 00:22:57.787 "subsystem": "keyring", 00:22:57.787 "config": [ 00:22:57.787 { 00:22:57.787 "method": "keyring_file_add_key", 00:22:57.787 "params": { 00:22:57.787 "name": "key0", 00:22:57.787 "path": "/tmp/tmp.CSwafsDekC" 00:22:57.787 } 00:22:57.787 } 00:22:57.787 ] 00:22:57.787 }, 00:22:57.787 { 00:22:57.787 "subsystem": "iobuf", 00:22:57.787 "config": [ 00:22:57.787 { 00:22:57.787 "method": "iobuf_set_options", 00:22:57.787 "params": { 00:22:57.787 "small_pool_count": 8192, 00:22:57.787 "large_pool_count": 1024, 00:22:57.787 "small_bufsize": 8192, 00:22:57.787 "large_bufsize": 135168, 00:22:57.787 "enable_numa": false 00:22:57.787 } 00:22:57.787 } 00:22:57.787 ] 00:22:57.787 }, 00:22:57.787 { 00:22:57.787 "subsystem": "sock", 00:22:57.787 "config": [ 00:22:57.787 { 00:22:57.787 "method": "sock_set_default_impl", 00:22:57.787 "params": { 00:22:57.787 "impl_name": "posix" 
00:22:57.787 } 00:22:57.787 }, 00:22:57.787 { 00:22:57.787 "method": "sock_impl_set_options", 00:22:57.787 "params": { 00:22:57.787 "impl_name": "ssl", 00:22:57.787 "recv_buf_size": 4096, 00:22:57.787 "send_buf_size": 4096, 00:22:57.787 "enable_recv_pipe": true, 00:22:57.787 "enable_quickack": false, 00:22:57.787 "enable_placement_id": 0, 00:22:57.787 "enable_zerocopy_send_server": true, 00:22:57.787 "enable_zerocopy_send_client": false, 00:22:57.787 "zerocopy_threshold": 0, 00:22:57.787 "tls_version": 0, 00:22:57.787 "enable_ktls": false 00:22:57.787 } 00:22:57.787 }, 00:22:57.787 { 00:22:57.787 "method": "sock_impl_set_options", 00:22:57.787 "params": { 00:22:57.787 "impl_name": "posix", 00:22:57.787 "recv_buf_size": 2097152, 00:22:57.787 "send_buf_size": 2097152, 00:22:57.787 "enable_recv_pipe": true, 00:22:57.787 "enable_quickack": false, 00:22:57.787 "enable_placement_id": 0, 00:22:57.787 "enable_zerocopy_send_server": true, 00:22:57.787 "enable_zerocopy_send_client": false, 00:22:57.787 "zerocopy_threshold": 0, 00:22:57.787 "tls_version": 0, 00:22:57.787 "enable_ktls": false 00:22:57.787 } 00:22:57.787 } 00:22:57.787 ] 00:22:57.787 }, 00:22:57.787 { 00:22:57.787 "subsystem": "vmd", 00:22:57.787 "config": [] 00:22:57.787 }, 00:22:57.787 { 00:22:57.787 "subsystem": "accel", 00:22:57.787 "config": [ 00:22:57.787 { 00:22:57.787 "method": "accel_set_options", 00:22:57.787 "params": { 00:22:57.787 "small_cache_size": 128, 00:22:57.787 "large_cache_size": 16, 00:22:57.787 "task_count": 2048, 00:22:57.787 "sequence_count": 2048, 00:22:57.787 "buf_count": 2048 00:22:57.787 } 00:22:57.787 } 00:22:57.787 ] 00:22:57.787 }, 00:22:57.787 { 00:22:57.787 "subsystem": "bdev", 00:22:57.787 "config": [ 00:22:57.787 { 00:22:57.787 "method": "bdev_set_options", 00:22:57.787 "params": { 00:22:57.787 "bdev_io_pool_size": 65535, 00:22:57.787 "bdev_io_cache_size": 256, 00:22:57.787 "bdev_auto_examine": true, 00:22:57.787 "iobuf_small_cache_size": 128, 00:22:57.787 "iobuf_large_cache_size": 16 00:22:57.787 } 00:22:57.787 }, 00:22:57.787 { 00:22:57.787 "method": "bdev_raid_set_options", 00:22:57.787 "params": { 00:22:57.787 "process_window_size_kb": 1024, 00:22:57.787 "process_max_bandwidth_mb_sec": 0 00:22:57.787 } 00:22:57.787 }, 00:22:57.787 { 00:22:57.787 "method": "bdev_iscsi_set_options", 00:22:57.787 "params": { 00:22:57.787 "timeout_sec": 30 00:22:57.787 } 00:22:57.787 }, 00:22:57.787 { 00:22:57.787 "method": "bdev_nvme_set_options", 00:22:57.787 "params": { 00:22:57.787 "action_on_timeout": "none", 00:22:57.787 "timeout_us": 0, 00:22:57.787 "timeout_admin_us": 0, 00:22:57.787 "keep_alive_timeout_ms": 10000, 00:22:57.787 "arbitration_burst": 0, 00:22:57.787 "low_priority_weight": 0, 00:22:57.787 "medium_priority_weight": 0, 00:22:57.787 "high_priority_weight": 0, 00:22:57.787 "nvme_adminq_poll_period_us": 10000, 00:22:57.787 "nvme_ioq_poll_period_us": 0, 00:22:57.787 "io_queue_requests": 0, 00:22:57.787 "delay_cmd_submit": true, 00:22:57.787 "transport_retry_count": 4, 00:22:57.787 "bdev_retry_count": 3, 00:22:57.787 "transport_ack_timeout": 0, 00:22:57.787 "ctrlr_loss_timeout_sec": 0, 00:22:57.787 "reconnect_delay_sec": 0, 00:22:57.787 "fast_io_fail_timeout_sec": 0, 00:22:57.787 "disable_auto_failback": false, 00:22:57.787 "generate_uuids": false, 00:22:57.787 "transport_tos": 0, 00:22:57.787 "nvme_error_stat": false, 00:22:57.787 "rdma_srq_size": 0, 00:22:57.787 "io_path_stat": false, 00:22:57.787 "allow_accel_sequence": false, 00:22:57.787 "rdma_max_cq_size": 0, 00:22:57.787 
"rdma_cm_event_timeout_ms": 0, 00:22:57.787 "dhchap_digests": [ 00:22:57.787 "sha256", 00:22:57.787 "sha384", 00:22:57.787 "sha512" 00:22:57.787 ], 00:22:57.787 "dhchap_dhgroups": [ 00:22:57.787 "null", 00:22:57.787 "ffdhe2048", 00:22:57.787 "ffdhe3072", 00:22:57.787 "ffdhe4096", 00:22:57.787 "ffdhe6144", 00:22:57.787 "ffdhe8192" 00:22:57.787 ], 00:22:57.787 "rdma_umr_per_io": false 00:22:57.787 } 00:22:57.787 }, 00:22:57.787 { 00:22:57.787 "method": "bdev_nvme_set_hotplug", 00:22:57.787 "params": { 00:22:57.787 "period_us": 100000, 00:22:57.787 "enable": false 00:22:57.787 } 00:22:57.787 }, 00:22:57.787 { 00:22:57.787 "method": "bdev_malloc_create", 00:22:57.787 "params": { 00:22:57.787 "name": "malloc0", 00:22:57.787 "num_blocks": 8192, 00:22:57.787 "block_size": 4096, 00:22:57.787 "physical_block_size": 4096, 00:22:57.788 "uuid": "24dd3f9d-9f73-44d2-9ce2-08491167a99c", 00:22:57.788 "optimal_io_boundary": 0, 00:22:57.788 "md_size": 0, 00:22:57.788 "dif_type": 0, 00:22:57.788 "dif_is_head_of_md": false, 00:22:57.788 "dif_pi_format": 0 00:22:57.788 } 00:22:57.788 }, 00:22:57.788 { 00:22:57.788 "method": "bdev_wait_for_examine" 00:22:57.788 } 00:22:57.788 ] 00:22:57.788 }, 00:22:57.788 { 00:22:57.788 "subsystem": "nbd", 00:22:57.788 "config": [] 00:22:57.788 }, 00:22:57.788 { 00:22:57.788 "subsystem": "scheduler", 00:22:57.788 "config": [ 00:22:57.788 { 00:22:57.788 "method": "framework_set_scheduler", 00:22:57.788 "params": { 00:22:57.788 "name": "static" 00:22:57.788 } 00:22:57.788 } 00:22:57.788 ] 00:22:57.788 }, 00:22:57.788 { 00:22:57.788 "subsystem": "nvmf", 00:22:57.788 "config": [ 00:22:57.788 { 00:22:57.788 "method": "nvmf_set_config", 00:22:57.788 "params": { 00:22:57.788 "discovery_filter": "match_any", 00:22:57.788 "admin_cmd_passthru": { 00:22:57.788 "identify_ctrlr": false 00:22:57.788 }, 00:22:57.788 "dhchap_digests": [ 00:22:57.788 "sha256", 00:22:57.788 "sha384", 00:22:57.788 "sha512" 00:22:57.788 ], 00:22:57.788 "dhchap_dhgroups": [ 00:22:57.788 "null", 00:22:57.788 "ffdhe2048", 00:22:57.788 "ffdhe3072", 00:22:57.788 "ffdhe4096", 00:22:57.788 "ffdhe6144", 00:22:57.788 "ffdhe8192" 00:22:57.788 ] 00:22:57.788 } 00:22:57.788 }, 00:22:57.788 { 00:22:57.788 "method": "nvmf_set_max_subsystems", 00:22:57.788 "params": { 00:22:57.788 "max_subsystems": 1024 00:22:57.788 } 00:22:57.788 }, 00:22:57.788 { 00:22:57.788 "method": "nvmf_set_crdt", 00:22:57.788 "params": { 00:22:57.788 "crdt1": 0, 00:22:57.788 "crdt2": 0, 00:22:57.788 "crdt3": 0 00:22:57.788 } 00:22:57.788 }, 00:22:57.788 { 00:22:57.788 "method": "nvmf_create_transport", 00:22:57.788 "params": { 00:22:57.788 "trtype": "TCP", 00:22:57.788 "max_queue_depth": 128, 00:22:57.788 "max_io_qpairs_per_ctrlr": 127, 00:22:57.788 "in_capsule_data_size": 4096, 00:22:57.788 "max_io_size": 131072, 00:22:57.788 "io_unit_size": 131072, 00:22:57.788 "max_aq_depth": 128, 00:22:57.788 "num_shared_buffers": 511, 00:22:57.788 "buf_cache_size": 4294967295, 00:22:57.788 "dif_insert_or_strip": false, 00:22:57.788 "zcopy": false, 00:22:57.788 "c2h_success": false, 00:22:57.788 "sock_priority": 0, 00:22:57.788 "abort_timeout_sec": 1, 00:22:57.788 "ack_timeout": 0, 00:22:57.788 "data_wr_pool_size": 0 00:22:57.788 } 00:22:57.788 }, 00:22:57.788 { 00:22:57.788 "method": "nvmf_create_subsystem", 00:22:57.788 "params": { 00:22:57.788 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:57.788 "allow_any_host": false, 00:22:57.788 "serial_number": "SPDK00000000000001", 00:22:57.788 "model_number": "SPDK bdev Controller", 00:22:57.788 "max_namespaces": 10, 
00:22:57.788 "min_cntlid": 1, 00:22:57.788 "max_cntlid": 65519, 00:22:57.788 "ana_reporting": false 00:22:57.788 } 00:22:57.788 }, 00:22:57.788 { 00:22:57.788 "method": "nvmf_subsystem_add_host", 00:22:57.788 "params": { 00:22:57.788 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:57.788 "host": "nqn.2016-06.io.spdk:host1", 00:22:57.788 "psk": "key0" 00:22:57.788 } 00:22:57.788 }, 00:22:57.788 { 00:22:57.788 "method": "nvmf_subsystem_add_ns", 00:22:57.788 "params": { 00:22:57.788 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:57.788 "namespace": { 00:22:57.788 "nsid": 1, 00:22:57.788 "bdev_name": "malloc0", 00:22:57.788 "nguid": "24DD3F9D9F7344D29CE208491167A99C", 00:22:57.788 "uuid": "24dd3f9d-9f73-44d2-9ce2-08491167a99c", 00:22:57.788 "no_auto_visible": false 00:22:57.788 } 00:22:57.788 } 00:22:57.788 }, 00:22:57.788 { 00:22:57.788 "method": "nvmf_subsystem_add_listener", 00:22:57.788 "params": { 00:22:57.788 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:57.788 "listen_address": { 00:22:57.788 "trtype": "TCP", 00:22:57.788 "adrfam": "IPv4", 00:22:57.788 "traddr": "10.0.0.2", 00:22:57.788 "trsvcid": "4420" 00:22:57.788 }, 00:22:57.788 "secure_channel": true 00:22:57.788 } 00:22:57.788 } 00:22:57.788 ] 00:22:57.788 } 00:22:57.788 ] 00:22:57.788 }' 00:22:57.788 05:38:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:22:58.048 05:38:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:22:58.048 "subsystems": [ 00:22:58.048 { 00:22:58.048 "subsystem": "keyring", 00:22:58.048 "config": [ 00:22:58.048 { 00:22:58.048 "method": "keyring_file_add_key", 00:22:58.048 "params": { 00:22:58.048 "name": "key0", 00:22:58.048 "path": "/tmp/tmp.CSwafsDekC" 00:22:58.048 } 00:22:58.048 } 00:22:58.048 ] 00:22:58.048 }, 00:22:58.048 { 00:22:58.048 "subsystem": "iobuf", 00:22:58.048 "config": [ 00:22:58.048 { 00:22:58.048 "method": "iobuf_set_options", 00:22:58.048 "params": { 00:22:58.048 "small_pool_count": 8192, 00:22:58.048 "large_pool_count": 1024, 00:22:58.048 "small_bufsize": 8192, 00:22:58.048 "large_bufsize": 135168, 00:22:58.048 "enable_numa": false 00:22:58.048 } 00:22:58.048 } 00:22:58.048 ] 00:22:58.048 }, 00:22:58.048 { 00:22:58.048 "subsystem": "sock", 00:22:58.048 "config": [ 00:22:58.048 { 00:22:58.048 "method": "sock_set_default_impl", 00:22:58.048 "params": { 00:22:58.048 "impl_name": "posix" 00:22:58.048 } 00:22:58.048 }, 00:22:58.048 { 00:22:58.048 "method": "sock_impl_set_options", 00:22:58.048 "params": { 00:22:58.048 "impl_name": "ssl", 00:22:58.048 "recv_buf_size": 4096, 00:22:58.048 "send_buf_size": 4096, 00:22:58.048 "enable_recv_pipe": true, 00:22:58.048 "enable_quickack": false, 00:22:58.048 "enable_placement_id": 0, 00:22:58.048 "enable_zerocopy_send_server": true, 00:22:58.048 "enable_zerocopy_send_client": false, 00:22:58.048 "zerocopy_threshold": 0, 00:22:58.048 "tls_version": 0, 00:22:58.048 "enable_ktls": false 00:22:58.048 } 00:22:58.048 }, 00:22:58.048 { 00:22:58.048 "method": "sock_impl_set_options", 00:22:58.048 "params": { 00:22:58.048 "impl_name": "posix", 00:22:58.048 "recv_buf_size": 2097152, 00:22:58.048 "send_buf_size": 2097152, 00:22:58.048 "enable_recv_pipe": true, 00:22:58.048 "enable_quickack": false, 00:22:58.048 "enable_placement_id": 0, 00:22:58.048 "enable_zerocopy_send_server": true, 00:22:58.048 "enable_zerocopy_send_client": false, 00:22:58.048 "zerocopy_threshold": 0, 00:22:58.048 "tls_version": 0, 00:22:58.048 
"enable_ktls": false 00:22:58.048 } 00:22:58.048 } 00:22:58.048 ] 00:22:58.048 }, 00:22:58.048 { 00:22:58.048 "subsystem": "vmd", 00:22:58.048 "config": [] 00:22:58.048 }, 00:22:58.048 { 00:22:58.048 "subsystem": "accel", 00:22:58.048 "config": [ 00:22:58.048 { 00:22:58.048 "method": "accel_set_options", 00:22:58.048 "params": { 00:22:58.048 "small_cache_size": 128, 00:22:58.048 "large_cache_size": 16, 00:22:58.048 "task_count": 2048, 00:22:58.048 "sequence_count": 2048, 00:22:58.048 "buf_count": 2048 00:22:58.048 } 00:22:58.048 } 00:22:58.048 ] 00:22:58.048 }, 00:22:58.048 { 00:22:58.048 "subsystem": "bdev", 00:22:58.048 "config": [ 00:22:58.048 { 00:22:58.048 "method": "bdev_set_options", 00:22:58.048 "params": { 00:22:58.048 "bdev_io_pool_size": 65535, 00:22:58.048 "bdev_io_cache_size": 256, 00:22:58.048 "bdev_auto_examine": true, 00:22:58.048 "iobuf_small_cache_size": 128, 00:22:58.048 "iobuf_large_cache_size": 16 00:22:58.048 } 00:22:58.048 }, 00:22:58.048 { 00:22:58.048 "method": "bdev_raid_set_options", 00:22:58.048 "params": { 00:22:58.048 "process_window_size_kb": 1024, 00:22:58.048 "process_max_bandwidth_mb_sec": 0 00:22:58.048 } 00:22:58.048 }, 00:22:58.048 { 00:22:58.048 "method": "bdev_iscsi_set_options", 00:22:58.048 "params": { 00:22:58.048 "timeout_sec": 30 00:22:58.048 } 00:22:58.048 }, 00:22:58.048 { 00:22:58.048 "method": "bdev_nvme_set_options", 00:22:58.048 "params": { 00:22:58.048 "action_on_timeout": "none", 00:22:58.048 "timeout_us": 0, 00:22:58.048 "timeout_admin_us": 0, 00:22:58.048 "keep_alive_timeout_ms": 10000, 00:22:58.048 "arbitration_burst": 0, 00:22:58.048 "low_priority_weight": 0, 00:22:58.048 "medium_priority_weight": 0, 00:22:58.048 "high_priority_weight": 0, 00:22:58.048 "nvme_adminq_poll_period_us": 10000, 00:22:58.048 "nvme_ioq_poll_period_us": 0, 00:22:58.048 "io_queue_requests": 512, 00:22:58.048 "delay_cmd_submit": true, 00:22:58.048 "transport_retry_count": 4, 00:22:58.048 "bdev_retry_count": 3, 00:22:58.048 "transport_ack_timeout": 0, 00:22:58.048 "ctrlr_loss_timeout_sec": 0, 00:22:58.049 "reconnect_delay_sec": 0, 00:22:58.049 "fast_io_fail_timeout_sec": 0, 00:22:58.049 "disable_auto_failback": false, 00:22:58.049 "generate_uuids": false, 00:22:58.049 "transport_tos": 0, 00:22:58.049 "nvme_error_stat": false, 00:22:58.049 "rdma_srq_size": 0, 00:22:58.049 "io_path_stat": false, 00:22:58.049 "allow_accel_sequence": false, 00:22:58.049 "rdma_max_cq_size": 0, 00:22:58.049 "rdma_cm_event_timeout_ms": 0, 00:22:58.049 "dhchap_digests": [ 00:22:58.049 "sha256", 00:22:58.049 "sha384", 00:22:58.049 "sha512" 00:22:58.049 ], 00:22:58.049 "dhchap_dhgroups": [ 00:22:58.049 "null", 00:22:58.049 "ffdhe2048", 00:22:58.049 "ffdhe3072", 00:22:58.049 "ffdhe4096", 00:22:58.049 "ffdhe6144", 00:22:58.049 "ffdhe8192" 00:22:58.049 ], 00:22:58.049 "rdma_umr_per_io": false 00:22:58.049 } 00:22:58.049 }, 00:22:58.049 { 00:22:58.049 "method": "bdev_nvme_attach_controller", 00:22:58.049 "params": { 00:22:58.049 "name": "TLSTEST", 00:22:58.049 "trtype": "TCP", 00:22:58.049 "adrfam": "IPv4", 00:22:58.049 "traddr": "10.0.0.2", 00:22:58.049 "trsvcid": "4420", 00:22:58.049 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:58.049 "prchk_reftag": false, 00:22:58.049 "prchk_guard": false, 00:22:58.049 "ctrlr_loss_timeout_sec": 0, 00:22:58.049 "reconnect_delay_sec": 0, 00:22:58.049 "fast_io_fail_timeout_sec": 0, 00:22:58.049 "psk": "key0", 00:22:58.049 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:58.049 "hdgst": false, 00:22:58.049 "ddgst": false, 00:22:58.049 "multipath": "multipath" 
00:22:58.049 } 00:22:58.049 }, 00:22:58.049 { 00:22:58.049 "method": "bdev_nvme_set_hotplug", 00:22:58.049 "params": { 00:22:58.049 "period_us": 100000, 00:22:58.049 "enable": false 00:22:58.049 } 00:22:58.049 }, 00:22:58.049 { 00:22:58.049 "method": "bdev_wait_for_examine" 00:22:58.049 } 00:22:58.049 ] 00:22:58.049 }, 00:22:58.049 { 00:22:58.049 "subsystem": "nbd", 00:22:58.049 "config": [] 00:22:58.049 } 00:22:58.049 ] 00:22:58.049 }' 00:22:58.049 05:38:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 354097 00:22:58.049 05:38:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 354097 ']' 00:22:58.049 05:38:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 354097 00:22:58.049 05:38:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:58.049 05:38:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:58.049 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 354097 00:22:58.049 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:58.049 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:58.049 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 354097' 00:22:58.049 killing process with pid 354097 00:22:58.049 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 354097 00:22:58.049 Received shutdown signal, test time was about 10.000000 seconds 00:22:58.049 00:22:58.049 Latency(us) 00:22:58.049 [2024-12-13T04:38:58.064Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:58.049 [2024-12-13T04:38:58.064Z] =================================================================================================================== 00:22:58.049 [2024-12-13T04:38:58.064Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:58.049 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 354097 00:22:58.308 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 353849 00:22:58.308 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 353849 ']' 00:22:58.308 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 353849 00:22:58.308 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:58.308 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:58.308 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 353849 00:22:58.308 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:58.308 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:58.308 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 353849' 00:22:58.308 killing process with pid 353849 00:22:58.308 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 353849 00:22:58.308 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 353849 
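At this point target/tls.sh has finished one complete cycle: the bdevperf initiator (pid 354097) and the first nvmf_tgt (pid 353849) have both been killed, and the next step below (target/tls.sh@205) restarts the target from the JSON that save_config produced above. A minimal sketch of that replay pattern, with the long /var/jenkins/workspace/... paths abbreviated to the SPDK tree; the /dev/fd/62 argument visible below is simply what bash process substitution expands to:

    # Capture the live target configuration, then hand it to a fresh nvmf_tgt.
    tgtconf=$(scripts/rpc.py save_config)             # JSON dump of every configured subsystem
    build/bin/nvmf_tgt -m 0x2 -c <(echo "$tgtconf")   # appears in the log as "-c /dev/fd/62"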
00:22:58.568 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:22:58.568 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:58.568 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:58.568 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:22:58.568 "subsystems": [ 00:22:58.568 { 00:22:58.568 "subsystem": "keyring", 00:22:58.568 "config": [ 00:22:58.568 { 00:22:58.568 "method": "keyring_file_add_key", 00:22:58.568 "params": { 00:22:58.568 "name": "key0", 00:22:58.568 "path": "/tmp/tmp.CSwafsDekC" 00:22:58.568 } 00:22:58.568 } 00:22:58.568 ] 00:22:58.568 }, 00:22:58.568 { 00:22:58.568 "subsystem": "iobuf", 00:22:58.568 "config": [ 00:22:58.568 { 00:22:58.568 "method": "iobuf_set_options", 00:22:58.568 "params": { 00:22:58.568 "small_pool_count": 8192, 00:22:58.568 "large_pool_count": 1024, 00:22:58.568 "small_bufsize": 8192, 00:22:58.568 "large_bufsize": 135168, 00:22:58.568 "enable_numa": false 00:22:58.568 } 00:22:58.568 } 00:22:58.568 ] 00:22:58.568 }, 00:22:58.568 { 00:22:58.568 "subsystem": "sock", 00:22:58.568 "config": [ 00:22:58.568 { 00:22:58.568 "method": "sock_set_default_impl", 00:22:58.568 "params": { 00:22:58.568 "impl_name": "posix" 00:22:58.568 } 00:22:58.568 }, 00:22:58.568 { 00:22:58.568 "method": "sock_impl_set_options", 00:22:58.568 "params": { 00:22:58.568 "impl_name": "ssl", 00:22:58.568 "recv_buf_size": 4096, 00:22:58.568 "send_buf_size": 4096, 00:22:58.568 "enable_recv_pipe": true, 00:22:58.568 "enable_quickack": false, 00:22:58.568 "enable_placement_id": 0, 00:22:58.568 "enable_zerocopy_send_server": true, 00:22:58.568 "enable_zerocopy_send_client": false, 00:22:58.568 "zerocopy_threshold": 0, 00:22:58.568 "tls_version": 0, 00:22:58.568 "enable_ktls": false 00:22:58.568 } 00:22:58.568 }, 00:22:58.568 { 00:22:58.568 "method": "sock_impl_set_options", 00:22:58.568 "params": { 00:22:58.568 "impl_name": "posix", 00:22:58.568 "recv_buf_size": 2097152, 00:22:58.568 "send_buf_size": 2097152, 00:22:58.568 "enable_recv_pipe": true, 00:22:58.568 "enable_quickack": false, 00:22:58.568 "enable_placement_id": 0, 00:22:58.568 "enable_zerocopy_send_server": true, 00:22:58.568 "enable_zerocopy_send_client": false, 00:22:58.568 "zerocopy_threshold": 0, 00:22:58.568 "tls_version": 0, 00:22:58.568 "enable_ktls": false 00:22:58.568 } 00:22:58.568 } 00:22:58.568 ] 00:22:58.568 }, 00:22:58.568 { 00:22:58.568 "subsystem": "vmd", 00:22:58.568 "config": [] 00:22:58.568 }, 00:22:58.568 { 00:22:58.568 "subsystem": "accel", 00:22:58.568 "config": [ 00:22:58.568 { 00:22:58.568 "method": "accel_set_options", 00:22:58.568 "params": { 00:22:58.568 "small_cache_size": 128, 00:22:58.568 "large_cache_size": 16, 00:22:58.568 "task_count": 2048, 00:22:58.568 "sequence_count": 2048, 00:22:58.568 "buf_count": 2048 00:22:58.568 } 00:22:58.568 } 00:22:58.568 ] 00:22:58.568 }, 00:22:58.568 { 00:22:58.568 "subsystem": "bdev", 00:22:58.568 "config": [ 00:22:58.568 { 00:22:58.568 "method": "bdev_set_options", 00:22:58.568 "params": { 00:22:58.568 "bdev_io_pool_size": 65535, 00:22:58.568 "bdev_io_cache_size": 256, 00:22:58.568 "bdev_auto_examine": true, 00:22:58.568 "iobuf_small_cache_size": 128, 00:22:58.568 "iobuf_large_cache_size": 16 00:22:58.568 } 00:22:58.568 }, 00:22:58.568 { 00:22:58.568 "method": "bdev_raid_set_options", 00:22:58.568 "params": { 00:22:58.568 "process_window_size_kb": 1024, 00:22:58.568 
"process_max_bandwidth_mb_sec": 0 00:22:58.568 } 00:22:58.568 }, 00:22:58.568 { 00:22:58.568 "method": "bdev_iscsi_set_options", 00:22:58.568 "params": { 00:22:58.568 "timeout_sec": 30 00:22:58.568 } 00:22:58.568 }, 00:22:58.568 { 00:22:58.568 "method": "bdev_nvme_set_options", 00:22:58.568 "params": { 00:22:58.568 "action_on_timeout": "none", 00:22:58.568 "timeout_us": 0, 00:22:58.568 "timeout_admin_us": 0, 00:22:58.568 "keep_alive_timeout_ms": 10000, 00:22:58.568 "arbitration_burst": 0, 00:22:58.569 "low_priority_weight": 0, 00:22:58.569 "medium_priority_weight": 0, 00:22:58.569 "high_priority_weight": 0, 00:22:58.569 "nvme_adminq_poll_period_us": 10000, 00:22:58.569 "nvme_ioq_poll_period_us": 0, 00:22:58.569 "io_queue_requests": 0, 00:22:58.569 "delay_cmd_submit": true, 00:22:58.569 "transport_retry_count": 4, 00:22:58.569 "bdev_retry_count": 3, 00:22:58.569 "transport_ack_timeout": 0, 00:22:58.569 "ctrlr_loss_timeout_sec": 0, 00:22:58.569 "reconnect_delay_sec": 0, 00:22:58.569 "fast_io_fail_timeout_sec": 0, 00:22:58.569 "disable_auto_failback": false, 00:22:58.569 "generate_uuids": false, 00:22:58.569 "transport_tos": 0, 00:22:58.569 "nvme_error_stat": false, 00:22:58.569 "rdma_srq_size": 0, 00:22:58.569 "io_path_stat": false, 00:22:58.569 "allow_accel_sequence": false, 00:22:58.569 "rdma_max_cq_size": 0, 00:22:58.569 "rdma_cm_event_timeout_ms": 0, 00:22:58.569 "dhchap_digests": [ 00:22:58.569 "sha256", 00:22:58.569 "sha384", 00:22:58.569 "sha512" 00:22:58.569 ], 00:22:58.569 "dhchap_dhgroups": [ 00:22:58.569 "null", 00:22:58.569 "ffdhe2048", 00:22:58.569 "ffdhe3072", 00:22:58.569 "ffdhe4096", 00:22:58.569 "ffdhe6144", 00:22:58.569 "ffdhe8192" 00:22:58.569 ], 00:22:58.569 "rdma_umr_per_io": false 00:22:58.569 } 00:22:58.569 }, 00:22:58.569 { 00:22:58.569 "method": "bdev_nvme_set_hotplug", 00:22:58.569 "params": { 00:22:58.569 "period_us": 100000, 00:22:58.569 "enable": false 00:22:58.569 } 00:22:58.569 }, 00:22:58.569 { 00:22:58.569 "method": "bdev_malloc_create", 00:22:58.569 "params": { 00:22:58.569 "name": "malloc0", 00:22:58.569 "num_blocks": 8192, 00:22:58.569 "block_size": 4096, 00:22:58.569 "physical_block_size": 4096, 00:22:58.569 "uuid": "24dd3f9d-9f73-44d2-9ce2-08491167a99c", 00:22:58.569 "optimal_io_boundary": 0, 00:22:58.569 "md_size": 0, 00:22:58.569 "dif_type": 0, 00:22:58.569 "dif_is_head_of_md": false, 00:22:58.569 "dif_pi_format": 0 00:22:58.569 } 00:22:58.569 }, 00:22:58.569 { 00:22:58.569 "method": "bdev_wait_for_examine" 00:22:58.569 } 00:22:58.569 ] 00:22:58.569 }, 00:22:58.569 { 00:22:58.569 "subsystem": "nbd", 00:22:58.569 "config": [] 00:22:58.569 }, 00:22:58.569 { 00:22:58.569 "subsystem": "scheduler", 00:22:58.569 "config": [ 00:22:58.569 { 00:22:58.569 "method": "framework_set_scheduler", 00:22:58.569 "params": { 00:22:58.569 "name": "static" 00:22:58.569 } 00:22:58.569 } 00:22:58.569 ] 00:22:58.569 }, 00:22:58.569 { 00:22:58.569 "subsystem": "nvmf", 00:22:58.569 "config": [ 00:22:58.569 { 00:22:58.569 "method": "nvmf_set_config", 00:22:58.569 "params": { 00:22:58.569 "discovery_filter": "match_any", 00:22:58.569 "admin_cmd_passthru": { 00:22:58.569 "identify_ctrlr": false 00:22:58.569 }, 00:22:58.569 "dhchap_digests": [ 00:22:58.569 "sha256", 00:22:58.569 "sha384", 00:22:58.569 "sha512" 00:22:58.569 ], 00:22:58.569 "dhchap_dhgroups": [ 00:22:58.569 "null", 00:22:58.569 "ffdhe2048", 00:22:58.569 "ffdhe3072", 00:22:58.569 "ffdhe4096", 00:22:58.569 "ffdhe6144", 00:22:58.569 "ffdhe8192" 00:22:58.569 ] 00:22:58.569 } 00:22:58.569 }, 00:22:58.569 { 00:22:58.569 
"method": "nvmf_set_max_subsystems", 00:22:58.569 "params": { 00:22:58.569 "max_subsystems": 1024 00:22:58.569 } 00:22:58.569 }, 00:22:58.569 { 00:22:58.569 "method": "nvmf_set_crdt", 00:22:58.569 "params": { 00:22:58.569 "crdt1": 0, 00:22:58.569 "crdt2": 0, 00:22:58.569 "crdt3": 0 00:22:58.569 } 00:22:58.569 }, 00:22:58.569 { 00:22:58.569 "method": "nvmf_create_transport", 00:22:58.569 "params": { 00:22:58.569 "trtype": "TCP", 00:22:58.569 "max_queue_depth": 128, 00:22:58.569 "max_io_qpairs_per_ctrlr": 127, 00:22:58.569 "in_capsule_data_size": 4096, 00:22:58.569 "max_io_size": 131072, 00:22:58.569 "io_unit_size": 131072, 00:22:58.569 "max_aq_depth": 128, 00:22:58.569 "num_shared_buffers": 511, 00:22:58.569 "buf_cache_size": 4294967295, 00:22:58.569 "dif_insert_or_strip": false, 00:22:58.569 "zcopy": false, 00:22:58.569 "c2h_success": false, 00:22:58.569 "sock_priority": 0, 00:22:58.569 "abort_timeout_sec": 1, 00:22:58.569 "ack_timeout": 0, 00:22:58.569 "data_wr_pool_size": 0 00:22:58.569 } 00:22:58.569 }, 00:22:58.569 { 00:22:58.569 "method": "nvmf_create_subsystem", 00:22:58.569 "params": { 00:22:58.569 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:58.569 "allow_any_host": false, 00:22:58.569 "serial_number": "SPDK00000000000001", 00:22:58.569 "model_number": "SPDK bdev Controller", 00:22:58.569 "max_namespaces": 10, 00:22:58.569 "min_cntlid": 1, 00:22:58.569 "max_cntlid": 65519, 00:22:58.569 "ana_reporting": false 00:22:58.569 } 00:22:58.569 }, 00:22:58.569 { 00:22:58.569 "method": "nvmf_subsystem_add_host", 00:22:58.569 "params": { 00:22:58.569 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:58.569 "host": "nqn.2016-06.io.spdk:host1", 00:22:58.569 "psk": "key0" 00:22:58.569 } 00:22:58.569 }, 00:22:58.569 { 00:22:58.569 "method": "nvmf_subsystem_add_ns", 00:22:58.569 "params": { 00:22:58.569 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:58.569 "namespace": { 00:22:58.569 "nsid": 1, 00:22:58.569 "bdev_name": "malloc0", 00:22:58.569 "nguid": "24DD3F9D9F7344D29CE208491167A99C", 00:22:58.569 "uuid": "24dd3f9d-9f73-44d2-9ce2-08491167a99c", 00:22:58.569 "no_auto_visible": false 00:22:58.569 } 00:22:58.569 } 00:22:58.569 }, 00:22:58.569 { 00:22:58.569 "method": "nvmf_subsystem_add_listener", 00:22:58.569 "params": { 00:22:58.569 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:58.569 "listen_address": { 00:22:58.569 "trtype": "TCP", 00:22:58.569 "adrfam": "IPv4", 00:22:58.569 "traddr": "10.0.0.2", 00:22:58.569 "trsvcid": "4420" 00:22:58.569 }, 00:22:58.569 "secure_channel": true 00:22:58.569 } 00:22:58.569 } 00:22:58.569 ] 00:22:58.569 } 00:22:58.569 ] 00:22:58.569 }' 00:22:58.569 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:58.569 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=354430 00:22:58.569 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 354430 00:22:58.569 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:22:58.569 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 354430 ']' 00:22:58.569 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:58.569 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:58.569 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:58.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:58.569 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:58.569 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:58.569 [2024-12-13 05:38:58.474984] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:22:58.569 [2024-12-13 05:38:58.475030] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:58.569 [2024-12-13 05:38:58.549357] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:58.569 [2024-12-13 05:38:58.570100] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:58.569 [2024-12-13 05:38:58.570135] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:58.569 [2024-12-13 05:38:58.570142] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:58.569 [2024-12-13 05:38:58.570148] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:58.569 [2024-12-13 05:38:58.570153] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:58.569 [2024-12-13 05:38:58.570670] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:22:58.829 [2024-12-13 05:38:58.778037] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:58.829 [2024-12-13 05:38:58.810069] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:58.829 [2024-12-13 05:38:58.810263] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:59.397 05:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:59.397 05:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:59.397 05:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:59.397 05:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:59.397 05:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:59.397 05:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:59.397 05:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=354583 00:22:59.397 05:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 354583 /var/tmp/bdevperf.sock 00:22:59.397 05:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 354583 ']' 00:22:59.397 05:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:59.397 05:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:22:59.397 05:38:59 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:59.397 05:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:59.397 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:59.397 05:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:22:59.397 "subsystems": [ 00:22:59.397 { 00:22:59.397 "subsystem": "keyring", 00:22:59.397 "config": [ 00:22:59.397 { 00:22:59.397 "method": "keyring_file_add_key", 00:22:59.397 "params": { 00:22:59.397 "name": "key0", 00:22:59.397 "path": "/tmp/tmp.CSwafsDekC" 00:22:59.397 } 00:22:59.397 } 00:22:59.397 ] 00:22:59.397 }, 00:22:59.397 { 00:22:59.397 "subsystem": "iobuf", 00:22:59.397 "config": [ 00:22:59.397 { 00:22:59.397 "method": "iobuf_set_options", 00:22:59.397 "params": { 00:22:59.397 "small_pool_count": 8192, 00:22:59.397 "large_pool_count": 1024, 00:22:59.397 "small_bufsize": 8192, 00:22:59.397 "large_bufsize": 135168, 00:22:59.397 "enable_numa": false 00:22:59.398 } 00:22:59.398 } 00:22:59.398 ] 00:22:59.398 }, 00:22:59.398 { 00:22:59.398 "subsystem": "sock", 00:22:59.398 "config": [ 00:22:59.398 { 00:22:59.398 "method": "sock_set_default_impl", 00:22:59.398 "params": { 00:22:59.398 "impl_name": "posix" 00:22:59.398 } 00:22:59.398 }, 00:22:59.398 { 00:22:59.398 "method": "sock_impl_set_options", 00:22:59.398 "params": { 00:22:59.398 "impl_name": "ssl", 00:22:59.398 "recv_buf_size": 4096, 00:22:59.398 "send_buf_size": 4096, 00:22:59.398 "enable_recv_pipe": true, 00:22:59.398 "enable_quickack": false, 00:22:59.398 "enable_placement_id": 0, 00:22:59.398 "enable_zerocopy_send_server": true, 00:22:59.398 "enable_zerocopy_send_client": false, 00:22:59.398 "zerocopy_threshold": 0, 00:22:59.398 "tls_version": 0, 00:22:59.398 "enable_ktls": false 00:22:59.398 } 00:22:59.398 }, 00:22:59.398 { 00:22:59.398 "method": "sock_impl_set_options", 00:22:59.398 "params": { 00:22:59.398 "impl_name": "posix", 00:22:59.398 "recv_buf_size": 2097152, 00:22:59.398 "send_buf_size": 2097152, 00:22:59.398 "enable_recv_pipe": true, 00:22:59.398 "enable_quickack": false, 00:22:59.398 "enable_placement_id": 0, 00:22:59.398 "enable_zerocopy_send_server": true, 00:22:59.398 "enable_zerocopy_send_client": false, 00:22:59.398 "zerocopy_threshold": 0, 00:22:59.398 "tls_version": 0, 00:22:59.398 "enable_ktls": false 00:22:59.398 } 00:22:59.398 } 00:22:59.398 ] 00:22:59.398 }, 00:22:59.398 { 00:22:59.398 "subsystem": "vmd", 00:22:59.398 "config": [] 00:22:59.398 }, 00:22:59.398 { 00:22:59.398 "subsystem": "accel", 00:22:59.398 "config": [ 00:22:59.398 { 00:22:59.398 "method": "accel_set_options", 00:22:59.398 "params": { 00:22:59.398 "small_cache_size": 128, 00:22:59.398 "large_cache_size": 16, 00:22:59.398 "task_count": 2048, 00:22:59.398 "sequence_count": 2048, 00:22:59.398 "buf_count": 2048 00:22:59.398 } 00:22:59.398 } 00:22:59.398 ] 00:22:59.398 }, 00:22:59.398 { 00:22:59.398 "subsystem": "bdev", 00:22:59.398 "config": [ 00:22:59.398 { 00:22:59.398 "method": "bdev_set_options", 00:22:59.398 "params": { 00:22:59.398 "bdev_io_pool_size": 65535, 00:22:59.398 "bdev_io_cache_size": 256, 00:22:59.398 "bdev_auto_examine": true, 00:22:59.398 "iobuf_small_cache_size": 128, 00:22:59.398 "iobuf_large_cache_size": 16 00:22:59.398 } 00:22:59.398 }, 00:22:59.398 { 00:22:59.398 "method": "bdev_raid_set_options", 00:22:59.398 "params": { 00:22:59.398 
"process_window_size_kb": 1024, 00:22:59.398 "process_max_bandwidth_mb_sec": 0 00:22:59.398 } 00:22:59.398 }, 00:22:59.398 { 00:22:59.398 "method": "bdev_iscsi_set_options", 00:22:59.398 "params": { 00:22:59.398 "timeout_sec": 30 00:22:59.398 } 00:22:59.398 }, 00:22:59.398 { 00:22:59.398 "method": "bdev_nvme_set_options", 00:22:59.398 "params": { 00:22:59.398 "action_on_timeout": "none", 00:22:59.398 "timeout_us": 0, 00:22:59.398 "timeout_admin_us": 0, 00:22:59.398 "keep_alive_timeout_ms": 10000, 00:22:59.398 "arbitration_burst": 0, 00:22:59.398 "low_priority_weight": 0, 00:22:59.398 "medium_priority_weight": 0, 00:22:59.398 "high_priority_weight": 0, 00:22:59.398 "nvme_adminq_poll_period_us": 10000, 00:22:59.398 "nvme_ioq_poll_period_us": 0, 00:22:59.398 "io_queue_requests": 512, 00:22:59.398 "delay_cmd_submit": true, 00:22:59.398 "transport_retry_count": 4, 00:22:59.398 "bdev_retry_count": 3, 00:22:59.398 "transport_ack_timeout": 0, 00:22:59.398 "ctrlr_loss_timeout_sec": 0, 00:22:59.398 "reconnect_delay_sec": 0, 00:22:59.398 "fast_io_fail_timeout_sec": 0, 00:22:59.398 "disable_auto_failback": false, 00:22:59.398 "generate_uuids": false, 00:22:59.398 "transport_tos": 0, 00:22:59.398 "nvme_error_stat": false, 00:22:59.398 "rdma_srq_size": 0, 00:22:59.398 "io_path_stat": false, 00:22:59.398 "allow_accel_sequence": false, 00:22:59.398 "rdma_max_cq_size": 0, 00:22:59.398 "rdma_cm_event_timeout_ms": 0, 00:22:59.398 "dhchap_digests": [ 00:22:59.398 "sha256", 00:22:59.398 "sha384", 00:22:59.398 "sha512" 00:22:59.398 ], 00:22:59.398 "dhchap_dhgroups": [ 00:22:59.398 "null", 00:22:59.398 "ffdhe2048", 00:22:59.398 "ffdhe3072", 00:22:59.398 "ffdhe4096", 00:22:59.398 "ffdhe6144", 00:22:59.398 "ffdhe8192" 00:22:59.398 ], 00:22:59.398 "rdma_umr_per_io": false 00:22:59.398 } 00:22:59.398 }, 00:22:59.398 { 00:22:59.398 "method": "bdev_nvme_attach_controller", 00:22:59.398 "params": { 00:22:59.398 "name": "TLSTEST", 00:22:59.398 "trtype": "TCP", 00:22:59.398 "adrfam": "IPv4", 00:22:59.398 "traddr": "10.0.0.2", 00:22:59.398 "trsvcid": "4420", 00:22:59.398 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:59.398 "prchk_reftag": false, 00:22:59.398 "prchk_guard": false, 00:22:59.398 "ctrlr_loss_timeout_sec": 0, 00:22:59.398 "reconnect_delay_sec": 0, 00:22:59.398 "fast_io_fail_timeout_sec": 0, 00:22:59.398 "psk": "key0", 00:22:59.398 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:59.398 "hdgst": false, 00:22:59.398 "ddgst": false, 00:22:59.398 "multipath": "multipath" 00:22:59.398 } 00:22:59.398 }, 00:22:59.398 { 00:22:59.398 "method": "bdev_nvme_set_hotplug", 00:22:59.398 "params": { 00:22:59.398 "period_us": 100000, 00:22:59.398 "enable": false 00:22:59.398 } 00:22:59.398 }, 00:22:59.398 { 00:22:59.398 "method": "bdev_wait_for_examine" 00:22:59.398 } 00:22:59.398 ] 00:22:59.398 }, 00:22:59.398 { 00:22:59.398 "subsystem": "nbd", 00:22:59.398 "config": [] 00:22:59.398 } 00:22:59.398 ] 00:22:59.398 }' 00:22:59.398 05:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:59.398 05:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:59.398 [2024-12-13 05:38:59.383841] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:22:59.398 [2024-12-13 05:38:59.383887] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid354583 ] 00:22:59.657 [2024-12-13 05:38:59.457743] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:59.657 [2024-12-13 05:38:59.479445] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:22:59.657 [2024-12-13 05:38:59.627876] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:00.225 05:39:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:00.225 05:39:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:00.225 05:39:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:00.484 Running I/O for 10 seconds... 00:23:02.356 4539.00 IOPS, 17.73 MiB/s [2024-12-13T04:39:03.308Z] 4897.50 IOPS, 19.13 MiB/s [2024-12-13T04:39:04.683Z] 4977.67 IOPS, 19.44 MiB/s [2024-12-13T04:39:05.628Z] 4958.00 IOPS, 19.37 MiB/s [2024-12-13T04:39:06.564Z] 4867.40 IOPS, 19.01 MiB/s [2024-12-13T04:39:07.499Z] 4911.67 IOPS, 19.19 MiB/s [2024-12-13T04:39:08.436Z] 4971.43 IOPS, 19.42 MiB/s [2024-12-13T04:39:09.372Z] 5017.88 IOPS, 19.60 MiB/s [2024-12-13T04:39:10.749Z] 5036.56 IOPS, 19.67 MiB/s [2024-12-13T04:39:10.749Z] 4972.50 IOPS, 19.42 MiB/s 00:23:10.734 Latency(us) 00:23:10.734 [2024-12-13T04:39:10.749Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:10.734 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:10.734 Verification LBA range: start 0x0 length 0x2000 00:23:10.734 TLSTESTn1 : 10.02 4973.95 19.43 0.00 0.00 25688.40 4774.77 36200.84 00:23:10.734 [2024-12-13T04:39:10.749Z] =================================================================================================================== 00:23:10.734 [2024-12-13T04:39:10.749Z] Total : 4973.95 19.43 0.00 0.00 25688.40 4774.77 36200.84 00:23:10.734 { 00:23:10.734 "results": [ 00:23:10.734 { 00:23:10.734 "job": "TLSTESTn1", 00:23:10.734 "core_mask": "0x4", 00:23:10.734 "workload": "verify", 00:23:10.734 "status": "finished", 00:23:10.734 "verify_range": { 00:23:10.734 "start": 0, 00:23:10.734 "length": 8192 00:23:10.734 }, 00:23:10.734 "queue_depth": 128, 00:23:10.734 "io_size": 4096, 00:23:10.734 "runtime": 10.022819, 00:23:10.734 "iops": 4973.949943623646, 00:23:10.734 "mibps": 19.429491967279866, 00:23:10.734 "io_failed": 0, 00:23:10.734 "io_timeout": 0, 00:23:10.734 "avg_latency_us": 25688.40172507171, 00:23:10.734 "min_latency_us": 4774.765714285714, 00:23:10.734 "max_latency_us": 36200.8380952381 00:23:10.734 } 00:23:10.734 ], 00:23:10.734 "core_count": 1 00:23:10.734 } 00:23:10.734 05:39:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:10.734 05:39:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 354583 00:23:10.734 05:39:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 354583 ']' 00:23:10.734 05:39:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 354583 00:23:10.734 05:39:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # uname 00:23:10.734 05:39:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:10.734 05:39:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 354583 00:23:10.734 05:39:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:10.734 05:39:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:10.734 05:39:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 354583' 00:23:10.734 killing process with pid 354583 00:23:10.734 05:39:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 354583 00:23:10.734 Received shutdown signal, test time was about 10.000000 seconds 00:23:10.734 00:23:10.734 Latency(us) 00:23:10.734 [2024-12-13T04:39:10.749Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:10.734 [2024-12-13T04:39:10.749Z] =================================================================================================================== 00:23:10.734 [2024-12-13T04:39:10.749Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:10.734 05:39:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 354583 00:23:10.734 05:39:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 354430 00:23:10.734 05:39:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 354430 ']' 00:23:10.734 05:39:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 354430 00:23:10.734 05:39:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:10.734 05:39:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:10.734 05:39:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 354430 00:23:10.734 05:39:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:10.734 05:39:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:10.734 05:39:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 354430' 00:23:10.734 killing process with pid 354430 00:23:10.734 05:39:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 354430 00:23:10.734 05:39:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 354430 00:23:10.993 05:39:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:23:10.993 05:39:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:10.993 05:39:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:10.993 05:39:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:10.993 05:39:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=356883 00:23:10.993 05:39:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 356883 00:23:10.993 05:39:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:10.993 05:39:10 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 356883 ']' 00:23:10.993 05:39:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:10.993 05:39:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:10.993 05:39:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:10.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:10.993 05:39:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:10.993 05:39:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:10.993 [2024-12-13 05:39:10.830785] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:23:10.993 [2024-12-13 05:39:10.830829] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:10.993 [2024-12-13 05:39:10.907181] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:10.993 [2024-12-13 05:39:10.928412] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:10.993 [2024-12-13 05:39:10.928455] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:10.993 [2024-12-13 05:39:10.928463] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:10.993 [2024-12-13 05:39:10.928470] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:10.994 [2024-12-13 05:39:10.928475] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:10.994 [2024-12-13 05:39:10.928968] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:23:11.253 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:11.253 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:11.253 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:11.253 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:11.253 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:11.253 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:11.253 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.CSwafsDekC 00:23:11.253 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.CSwafsDekC 00:23:11.253 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:11.253 [2024-12-13 05:39:11.227816] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:11.253 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:11.512 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:11.771 [2024-12-13 05:39:11.592874] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:11.771 [2024-12-13 05:39:11.593090] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:11.771 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:12.030 malloc0 00:23:12.030 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:12.030 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.CSwafsDekC 00:23:12.289 05:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:12.550 05:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=357188 00:23:12.550 05:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:12.550 05:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:12.550 05:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 357188 /var/tmp/bdevperf.sock 00:23:12.550 05:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 357188 ']' 00:23:12.550 05:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:12.550 05:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:12.550 05:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:12.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:12.550 05:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:12.550 05:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:12.550 [2024-12-13 05:39:12.392932] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:23:12.550 [2024-12-13 05:39:12.392981] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid357188 ] 00:23:12.550 [2024-12-13 05:39:12.467968] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:12.550 [2024-12-13 05:39:12.490285] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:23:12.810 05:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:12.810 05:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:12.810 05:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.CSwafsDekC 00:23:12.810 05:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:13.069 [2024-12-13 05:39:12.922130] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:13.069 nvme0n1 00:23:13.069 05:39:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:13.328 Running I/O for 1 seconds... 
00:23:14.265 4673.00 IOPS, 18.25 MiB/s 00:23:14.265 Latency(us) 00:23:14.265 [2024-12-13T04:39:14.280Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:14.265 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:14.265 Verification LBA range: start 0x0 length 0x2000 00:23:14.265 nvme0n1 : 1.02 4705.43 18.38 0.00 0.00 26948.57 6616.02 33953.89 00:23:14.265 [2024-12-13T04:39:14.280Z] =================================================================================================================== 00:23:14.265 [2024-12-13T04:39:14.280Z] Total : 4705.43 18.38 0.00 0.00 26948.57 6616.02 33953.89 00:23:14.265 { 00:23:14.265 "results": [ 00:23:14.265 { 00:23:14.265 "job": "nvme0n1", 00:23:14.265 "core_mask": "0x2", 00:23:14.265 "workload": "verify", 00:23:14.265 "status": "finished", 00:23:14.265 "verify_range": { 00:23:14.265 "start": 0, 00:23:14.265 "length": 8192 00:23:14.265 }, 00:23:14.265 "queue_depth": 128, 00:23:14.265 "io_size": 4096, 00:23:14.265 "runtime": 1.02031, 00:23:14.265 "iops": 4705.432662622145, 00:23:14.265 "mibps": 18.380596338367752, 00:23:14.265 "io_failed": 0, 00:23:14.265 "io_timeout": 0, 00:23:14.265 "avg_latency_us": 26948.572076055585, 00:23:14.265 "min_latency_us": 6616.015238095238, 00:23:14.265 "max_latency_us": 33953.88952380952 00:23:14.265 } 00:23:14.265 ], 00:23:14.265 "core_count": 1 00:23:14.265 } 00:23:14.265 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 357188 00:23:14.265 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 357188 ']' 00:23:14.265 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 357188 00:23:14.265 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:14.265 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:14.265 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 357188 00:23:14.265 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:14.265 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:14.265 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 357188' 00:23:14.265 killing process with pid 357188 00:23:14.265 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 357188 00:23:14.265 Received shutdown signal, test time was about 1.000000 seconds 00:23:14.265 00:23:14.265 Latency(us) 00:23:14.265 [2024-12-13T04:39:14.280Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:14.265 [2024-12-13T04:39:14.280Z] =================================================================================================================== 00:23:14.265 [2024-12-13T04:39:14.280Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:14.265 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 357188 00:23:14.524 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 356883 00:23:14.524 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 356883 ']' 00:23:14.524 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 356883 00:23:14.524 05:39:14 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:14.524 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:14.524 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 356883 00:23:14.524 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:14.524 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:14.524 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 356883' 00:23:14.524 killing process with pid 356883 00:23:14.524 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 356883 00:23:14.524 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 356883 00:23:14.784 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:23:14.784 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:14.784 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:14.784 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:14.784 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=357595 00:23:14.784 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:14.784 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 357595 00:23:14.784 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 357595 ']' 00:23:14.784 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:14.784 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:14.784 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:14.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:14.784 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:14.784 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:14.784 [2024-12-13 05:39:14.627208] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:23:14.784 [2024-12-13 05:39:14.627253] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:14.784 [2024-12-13 05:39:14.704301] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:14.784 [2024-12-13 05:39:14.725528] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:14.784 [2024-12-13 05:39:14.725563] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:14.784 [2024-12-13 05:39:14.725571] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:14.784 [2024-12-13 05:39:14.725577] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:14.784 [2024-12-13 05:39:14.725582] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:14.784 [2024-12-13 05:39:14.726040] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:23:15.043 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:15.043 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:15.043 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:15.043 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:15.043 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:15.043 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:15.043 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:23:15.043 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.043 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:15.043 [2024-12-13 05:39:14.861045] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:15.043 malloc0 00:23:15.043 [2024-12-13 05:39:14.889131] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:15.043 [2024-12-13 05:39:14.889326] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:15.043 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.043 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=357617 00:23:15.043 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 357617 /var/tmp/bdevperf.sock 00:23:15.043 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:15.043 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 357617 ']' 00:23:15.043 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:15.043 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:15.043 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:15.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:15.043 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:15.043 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:15.043 [2024-12-13 05:39:14.963324] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:23:15.043 [2024-12-13 05:39:14.963366] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid357617 ] 00:23:15.043 [2024-12-13 05:39:15.037678] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:15.302 [2024-12-13 05:39:15.059520] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:23:15.302 05:39:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:15.302 05:39:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:15.302 05:39:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.CSwafsDekC 00:23:15.560 05:39:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:15.560 [2024-12-13 05:39:15.514624] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:15.818 nvme0n1 00:23:15.818 05:39:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:15.818 Running I/O for 1 seconds... 00:23:16.755 5033.00 IOPS, 19.66 MiB/s 00:23:16.755 Latency(us) 00:23:16.755 [2024-12-13T04:39:16.770Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:16.755 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:16.755 Verification LBA range: start 0x0 length 0x2000 00:23:16.755 nvme0n1 : 1.02 5083.45 19.86 0.00 0.00 24986.48 6023.07 28960.67 00:23:16.755 [2024-12-13T04:39:16.770Z] =================================================================================================================== 00:23:16.755 [2024-12-13T04:39:16.770Z] Total : 5083.45 19.86 0.00 0.00 24986.48 6023.07 28960.67 00:23:16.755 { 00:23:16.755 "results": [ 00:23:16.755 { 00:23:16.755 "job": "nvme0n1", 00:23:16.755 "core_mask": "0x2", 00:23:16.755 "workload": "verify", 00:23:16.755 "status": "finished", 00:23:16.755 "verify_range": { 00:23:16.755 "start": 0, 00:23:16.755 "length": 8192 00:23:16.755 }, 00:23:16.755 "queue_depth": 128, 00:23:16.755 "io_size": 4096, 00:23:16.755 "runtime": 1.015256, 00:23:16.755 "iops": 5083.446933581284, 00:23:16.755 "mibps": 19.85721458430189, 00:23:16.755 "io_failed": 0, 00:23:16.755 "io_timeout": 0, 00:23:16.755 "avg_latency_us": 24986.476032514922, 00:23:16.755 "min_latency_us": 6023.070476190476, 00:23:16.755 "max_latency_us": 28960.670476190477 00:23:16.755 } 00:23:16.755 ], 00:23:16.755 "core_count": 1 00:23:16.755 } 00:23:16.755 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:23:16.755 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:16.755 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:17.014 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:17.014 05:39:16 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:23:17.014 "subsystems": [ 00:23:17.014 { 00:23:17.014 "subsystem": "keyring", 00:23:17.014 "config": [ 00:23:17.014 { 00:23:17.014 "method": "keyring_file_add_key", 00:23:17.014 "params": { 00:23:17.014 "name": "key0", 00:23:17.014 "path": "/tmp/tmp.CSwafsDekC" 00:23:17.014 } 00:23:17.014 } 00:23:17.014 ] 00:23:17.014 }, 00:23:17.014 { 00:23:17.014 "subsystem": "iobuf", 00:23:17.014 "config": [ 00:23:17.014 { 00:23:17.014 "method": "iobuf_set_options", 00:23:17.014 "params": { 00:23:17.014 "small_pool_count": 8192, 00:23:17.014 "large_pool_count": 1024, 00:23:17.014 "small_bufsize": 8192, 00:23:17.014 "large_bufsize": 135168, 00:23:17.014 "enable_numa": false 00:23:17.014 } 00:23:17.014 } 00:23:17.014 ] 00:23:17.014 }, 00:23:17.014 { 00:23:17.014 "subsystem": "sock", 00:23:17.014 "config": [ 00:23:17.014 { 00:23:17.014 "method": "sock_set_default_impl", 00:23:17.014 "params": { 00:23:17.014 "impl_name": "posix" 00:23:17.014 } 00:23:17.014 }, 00:23:17.014 { 00:23:17.014 "method": "sock_impl_set_options", 00:23:17.014 "params": { 00:23:17.014 "impl_name": "ssl", 00:23:17.014 "recv_buf_size": 4096, 00:23:17.014 "send_buf_size": 4096, 00:23:17.014 "enable_recv_pipe": true, 00:23:17.014 "enable_quickack": false, 00:23:17.014 "enable_placement_id": 0, 00:23:17.014 "enable_zerocopy_send_server": true, 00:23:17.014 "enable_zerocopy_send_client": false, 00:23:17.014 "zerocopy_threshold": 0, 00:23:17.014 "tls_version": 0, 00:23:17.014 "enable_ktls": false 00:23:17.014 } 00:23:17.014 }, 00:23:17.014 { 00:23:17.014 "method": "sock_impl_set_options", 00:23:17.014 "params": { 00:23:17.014 "impl_name": "posix", 00:23:17.014 "recv_buf_size": 2097152, 00:23:17.014 "send_buf_size": 2097152, 00:23:17.014 "enable_recv_pipe": true, 00:23:17.014 "enable_quickack": false, 00:23:17.014 "enable_placement_id": 0, 00:23:17.014 "enable_zerocopy_send_server": true, 00:23:17.014 "enable_zerocopy_send_client": false, 00:23:17.014 "zerocopy_threshold": 0, 00:23:17.014 "tls_version": 0, 00:23:17.014 "enable_ktls": false 00:23:17.014 } 00:23:17.014 } 00:23:17.014 ] 00:23:17.014 }, 00:23:17.014 { 00:23:17.014 "subsystem": "vmd", 00:23:17.014 "config": [] 00:23:17.014 }, 00:23:17.014 { 00:23:17.014 "subsystem": "accel", 00:23:17.014 "config": [ 00:23:17.014 { 00:23:17.014 "method": "accel_set_options", 00:23:17.014 "params": { 00:23:17.014 "small_cache_size": 128, 00:23:17.014 "large_cache_size": 16, 00:23:17.014 "task_count": 2048, 00:23:17.014 "sequence_count": 2048, 00:23:17.014 "buf_count": 2048 00:23:17.014 } 00:23:17.014 } 00:23:17.014 ] 00:23:17.014 }, 00:23:17.014 { 00:23:17.014 "subsystem": "bdev", 00:23:17.014 "config": [ 00:23:17.014 { 00:23:17.014 "method": "bdev_set_options", 00:23:17.014 "params": { 00:23:17.014 "bdev_io_pool_size": 65535, 00:23:17.014 "bdev_io_cache_size": 256, 00:23:17.014 "bdev_auto_examine": true, 00:23:17.014 "iobuf_small_cache_size": 128, 00:23:17.014 "iobuf_large_cache_size": 16 00:23:17.014 } 00:23:17.014 }, 00:23:17.014 { 00:23:17.014 "method": "bdev_raid_set_options", 00:23:17.014 "params": { 00:23:17.014 "process_window_size_kb": 1024, 00:23:17.014 "process_max_bandwidth_mb_sec": 0 00:23:17.014 } 00:23:17.014 }, 00:23:17.014 { 00:23:17.014 "method": "bdev_iscsi_set_options", 00:23:17.014 "params": { 00:23:17.014 "timeout_sec": 30 00:23:17.014 } 00:23:17.014 }, 00:23:17.014 { 00:23:17.014 "method": "bdev_nvme_set_options", 00:23:17.014 "params": { 00:23:17.014 "action_on_timeout": "none", 00:23:17.014 
"timeout_us": 0, 00:23:17.014 "timeout_admin_us": 0, 00:23:17.014 "keep_alive_timeout_ms": 10000, 00:23:17.014 "arbitration_burst": 0, 00:23:17.014 "low_priority_weight": 0, 00:23:17.014 "medium_priority_weight": 0, 00:23:17.014 "high_priority_weight": 0, 00:23:17.014 "nvme_adminq_poll_period_us": 10000, 00:23:17.014 "nvme_ioq_poll_period_us": 0, 00:23:17.014 "io_queue_requests": 0, 00:23:17.014 "delay_cmd_submit": true, 00:23:17.014 "transport_retry_count": 4, 00:23:17.014 "bdev_retry_count": 3, 00:23:17.014 "transport_ack_timeout": 0, 00:23:17.014 "ctrlr_loss_timeout_sec": 0, 00:23:17.014 "reconnect_delay_sec": 0, 00:23:17.014 "fast_io_fail_timeout_sec": 0, 00:23:17.014 "disable_auto_failback": false, 00:23:17.014 "generate_uuids": false, 00:23:17.014 "transport_tos": 0, 00:23:17.014 "nvme_error_stat": false, 00:23:17.014 "rdma_srq_size": 0, 00:23:17.014 "io_path_stat": false, 00:23:17.014 "allow_accel_sequence": false, 00:23:17.014 "rdma_max_cq_size": 0, 00:23:17.014 "rdma_cm_event_timeout_ms": 0, 00:23:17.014 "dhchap_digests": [ 00:23:17.014 "sha256", 00:23:17.014 "sha384", 00:23:17.014 "sha512" 00:23:17.014 ], 00:23:17.014 "dhchap_dhgroups": [ 00:23:17.014 "null", 00:23:17.014 "ffdhe2048", 00:23:17.014 "ffdhe3072", 00:23:17.014 "ffdhe4096", 00:23:17.014 "ffdhe6144", 00:23:17.014 "ffdhe8192" 00:23:17.014 ], 00:23:17.014 "rdma_umr_per_io": false 00:23:17.014 } 00:23:17.014 }, 00:23:17.014 { 00:23:17.014 "method": "bdev_nvme_set_hotplug", 00:23:17.014 "params": { 00:23:17.015 "period_us": 100000, 00:23:17.015 "enable": false 00:23:17.015 } 00:23:17.015 }, 00:23:17.015 { 00:23:17.015 "method": "bdev_malloc_create", 00:23:17.015 "params": { 00:23:17.015 "name": "malloc0", 00:23:17.015 "num_blocks": 8192, 00:23:17.015 "block_size": 4096, 00:23:17.015 "physical_block_size": 4096, 00:23:17.015 "uuid": "5f8fef8f-fb29-42be-9385-68dbef1ac2c2", 00:23:17.015 "optimal_io_boundary": 0, 00:23:17.015 "md_size": 0, 00:23:17.015 "dif_type": 0, 00:23:17.015 "dif_is_head_of_md": false, 00:23:17.015 "dif_pi_format": 0 00:23:17.015 } 00:23:17.015 }, 00:23:17.015 { 00:23:17.015 "method": "bdev_wait_for_examine" 00:23:17.015 } 00:23:17.015 ] 00:23:17.015 }, 00:23:17.015 { 00:23:17.015 "subsystem": "nbd", 00:23:17.015 "config": [] 00:23:17.015 }, 00:23:17.015 { 00:23:17.015 "subsystem": "scheduler", 00:23:17.015 "config": [ 00:23:17.015 { 00:23:17.015 "method": "framework_set_scheduler", 00:23:17.015 "params": { 00:23:17.015 "name": "static" 00:23:17.015 } 00:23:17.015 } 00:23:17.015 ] 00:23:17.015 }, 00:23:17.015 { 00:23:17.015 "subsystem": "nvmf", 00:23:17.015 "config": [ 00:23:17.015 { 00:23:17.015 "method": "nvmf_set_config", 00:23:17.015 "params": { 00:23:17.015 "discovery_filter": "match_any", 00:23:17.015 "admin_cmd_passthru": { 00:23:17.015 "identify_ctrlr": false 00:23:17.015 }, 00:23:17.015 "dhchap_digests": [ 00:23:17.015 "sha256", 00:23:17.015 "sha384", 00:23:17.015 "sha512" 00:23:17.015 ], 00:23:17.015 "dhchap_dhgroups": [ 00:23:17.015 "null", 00:23:17.015 "ffdhe2048", 00:23:17.015 "ffdhe3072", 00:23:17.015 "ffdhe4096", 00:23:17.015 "ffdhe6144", 00:23:17.015 "ffdhe8192" 00:23:17.015 ] 00:23:17.015 } 00:23:17.015 }, 00:23:17.015 { 00:23:17.015 "method": "nvmf_set_max_subsystems", 00:23:17.015 "params": { 00:23:17.015 "max_subsystems": 1024 00:23:17.015 } 00:23:17.015 }, 00:23:17.015 { 00:23:17.015 "method": "nvmf_set_crdt", 00:23:17.015 "params": { 00:23:17.015 "crdt1": 0, 00:23:17.015 "crdt2": 0, 00:23:17.015 "crdt3": 0 00:23:17.015 } 00:23:17.015 }, 00:23:17.015 { 00:23:17.015 "method": 
"nvmf_create_transport", 00:23:17.015 "params": { 00:23:17.015 "trtype": "TCP", 00:23:17.015 "max_queue_depth": 128, 00:23:17.015 "max_io_qpairs_per_ctrlr": 127, 00:23:17.015 "in_capsule_data_size": 4096, 00:23:17.015 "max_io_size": 131072, 00:23:17.015 "io_unit_size": 131072, 00:23:17.015 "max_aq_depth": 128, 00:23:17.015 "num_shared_buffers": 511, 00:23:17.015 "buf_cache_size": 4294967295, 00:23:17.015 "dif_insert_or_strip": false, 00:23:17.015 "zcopy": false, 00:23:17.015 "c2h_success": false, 00:23:17.015 "sock_priority": 0, 00:23:17.015 "abort_timeout_sec": 1, 00:23:17.015 "ack_timeout": 0, 00:23:17.015 "data_wr_pool_size": 0 00:23:17.015 } 00:23:17.015 }, 00:23:17.015 { 00:23:17.015 "method": "nvmf_create_subsystem", 00:23:17.015 "params": { 00:23:17.015 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:17.015 "allow_any_host": false, 00:23:17.015 "serial_number": "00000000000000000000", 00:23:17.015 "model_number": "SPDK bdev Controller", 00:23:17.015 "max_namespaces": 32, 00:23:17.015 "min_cntlid": 1, 00:23:17.015 "max_cntlid": 65519, 00:23:17.015 "ana_reporting": false 00:23:17.015 } 00:23:17.015 }, 00:23:17.015 { 00:23:17.015 "method": "nvmf_subsystem_add_host", 00:23:17.015 "params": { 00:23:17.015 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:17.015 "host": "nqn.2016-06.io.spdk:host1", 00:23:17.015 "psk": "key0" 00:23:17.015 } 00:23:17.015 }, 00:23:17.015 { 00:23:17.015 "method": "nvmf_subsystem_add_ns", 00:23:17.015 "params": { 00:23:17.015 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:17.015 "namespace": { 00:23:17.015 "nsid": 1, 00:23:17.015 "bdev_name": "malloc0", 00:23:17.015 "nguid": "5F8FEF8FFB2942BE938568DBEF1AC2C2", 00:23:17.015 "uuid": "5f8fef8f-fb29-42be-9385-68dbef1ac2c2", 00:23:17.015 "no_auto_visible": false 00:23:17.015 } 00:23:17.015 } 00:23:17.015 }, 00:23:17.015 { 00:23:17.015 "method": "nvmf_subsystem_add_listener", 00:23:17.015 "params": { 00:23:17.015 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:17.015 "listen_address": { 00:23:17.015 "trtype": "TCP", 00:23:17.015 "adrfam": "IPv4", 00:23:17.015 "traddr": "10.0.0.2", 00:23:17.015 "trsvcid": "4420" 00:23:17.015 }, 00:23:17.015 "secure_channel": false, 00:23:17.015 "sock_impl": "ssl" 00:23:17.015 } 00:23:17.015 } 00:23:17.015 ] 00:23:17.015 } 00:23:17.015 ] 00:23:17.015 }' 00:23:17.015 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:17.275 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:23:17.275 "subsystems": [ 00:23:17.275 { 00:23:17.275 "subsystem": "keyring", 00:23:17.275 "config": [ 00:23:17.275 { 00:23:17.275 "method": "keyring_file_add_key", 00:23:17.275 "params": { 00:23:17.275 "name": "key0", 00:23:17.275 "path": "/tmp/tmp.CSwafsDekC" 00:23:17.275 } 00:23:17.275 } 00:23:17.275 ] 00:23:17.275 }, 00:23:17.275 { 00:23:17.275 "subsystem": "iobuf", 00:23:17.275 "config": [ 00:23:17.275 { 00:23:17.275 "method": "iobuf_set_options", 00:23:17.275 "params": { 00:23:17.275 "small_pool_count": 8192, 00:23:17.275 "large_pool_count": 1024, 00:23:17.275 "small_bufsize": 8192, 00:23:17.275 "large_bufsize": 135168, 00:23:17.275 "enable_numa": false 00:23:17.275 } 00:23:17.275 } 00:23:17.275 ] 00:23:17.275 }, 00:23:17.275 { 00:23:17.275 "subsystem": "sock", 00:23:17.275 "config": [ 00:23:17.275 { 00:23:17.275 "method": "sock_set_default_impl", 00:23:17.275 "params": { 00:23:17.275 "impl_name": "posix" 00:23:17.275 } 00:23:17.275 }, 00:23:17.275 { 00:23:17.275 
"method": "sock_impl_set_options", 00:23:17.275 "params": { 00:23:17.275 "impl_name": "ssl", 00:23:17.275 "recv_buf_size": 4096, 00:23:17.275 "send_buf_size": 4096, 00:23:17.275 "enable_recv_pipe": true, 00:23:17.275 "enable_quickack": false, 00:23:17.275 "enable_placement_id": 0, 00:23:17.275 "enable_zerocopy_send_server": true, 00:23:17.275 "enable_zerocopy_send_client": false, 00:23:17.275 "zerocopy_threshold": 0, 00:23:17.275 "tls_version": 0, 00:23:17.275 "enable_ktls": false 00:23:17.275 } 00:23:17.275 }, 00:23:17.275 { 00:23:17.275 "method": "sock_impl_set_options", 00:23:17.275 "params": { 00:23:17.275 "impl_name": "posix", 00:23:17.275 "recv_buf_size": 2097152, 00:23:17.275 "send_buf_size": 2097152, 00:23:17.275 "enable_recv_pipe": true, 00:23:17.275 "enable_quickack": false, 00:23:17.275 "enable_placement_id": 0, 00:23:17.275 "enable_zerocopy_send_server": true, 00:23:17.275 "enable_zerocopy_send_client": false, 00:23:17.275 "zerocopy_threshold": 0, 00:23:17.275 "tls_version": 0, 00:23:17.275 "enable_ktls": false 00:23:17.275 } 00:23:17.275 } 00:23:17.275 ] 00:23:17.275 }, 00:23:17.275 { 00:23:17.275 "subsystem": "vmd", 00:23:17.275 "config": [] 00:23:17.275 }, 00:23:17.275 { 00:23:17.275 "subsystem": "accel", 00:23:17.275 "config": [ 00:23:17.275 { 00:23:17.275 "method": "accel_set_options", 00:23:17.275 "params": { 00:23:17.275 "small_cache_size": 128, 00:23:17.275 "large_cache_size": 16, 00:23:17.275 "task_count": 2048, 00:23:17.275 "sequence_count": 2048, 00:23:17.275 "buf_count": 2048 00:23:17.275 } 00:23:17.275 } 00:23:17.275 ] 00:23:17.275 }, 00:23:17.275 { 00:23:17.275 "subsystem": "bdev", 00:23:17.275 "config": [ 00:23:17.275 { 00:23:17.275 "method": "bdev_set_options", 00:23:17.275 "params": { 00:23:17.275 "bdev_io_pool_size": 65535, 00:23:17.275 "bdev_io_cache_size": 256, 00:23:17.275 "bdev_auto_examine": true, 00:23:17.275 "iobuf_small_cache_size": 128, 00:23:17.275 "iobuf_large_cache_size": 16 00:23:17.275 } 00:23:17.275 }, 00:23:17.275 { 00:23:17.275 "method": "bdev_raid_set_options", 00:23:17.275 "params": { 00:23:17.275 "process_window_size_kb": 1024, 00:23:17.275 "process_max_bandwidth_mb_sec": 0 00:23:17.275 } 00:23:17.275 }, 00:23:17.275 { 00:23:17.275 "method": "bdev_iscsi_set_options", 00:23:17.275 "params": { 00:23:17.275 "timeout_sec": 30 00:23:17.275 } 00:23:17.275 }, 00:23:17.275 { 00:23:17.275 "method": "bdev_nvme_set_options", 00:23:17.275 "params": { 00:23:17.275 "action_on_timeout": "none", 00:23:17.275 "timeout_us": 0, 00:23:17.275 "timeout_admin_us": 0, 00:23:17.275 "keep_alive_timeout_ms": 10000, 00:23:17.275 "arbitration_burst": 0, 00:23:17.275 "low_priority_weight": 0, 00:23:17.275 "medium_priority_weight": 0, 00:23:17.275 "high_priority_weight": 0, 00:23:17.275 "nvme_adminq_poll_period_us": 10000, 00:23:17.275 "nvme_ioq_poll_period_us": 0, 00:23:17.275 "io_queue_requests": 512, 00:23:17.275 "delay_cmd_submit": true, 00:23:17.275 "transport_retry_count": 4, 00:23:17.275 "bdev_retry_count": 3, 00:23:17.275 "transport_ack_timeout": 0, 00:23:17.275 "ctrlr_loss_timeout_sec": 0, 00:23:17.275 "reconnect_delay_sec": 0, 00:23:17.275 "fast_io_fail_timeout_sec": 0, 00:23:17.275 "disable_auto_failback": false, 00:23:17.275 "generate_uuids": false, 00:23:17.275 "transport_tos": 0, 00:23:17.275 "nvme_error_stat": false, 00:23:17.275 "rdma_srq_size": 0, 00:23:17.275 "io_path_stat": false, 00:23:17.275 "allow_accel_sequence": false, 00:23:17.275 "rdma_max_cq_size": 0, 00:23:17.275 "rdma_cm_event_timeout_ms": 0, 00:23:17.275 "dhchap_digests": [ 00:23:17.275 
"sha256", 00:23:17.275 "sha384", 00:23:17.275 "sha512" 00:23:17.275 ], 00:23:17.275 "dhchap_dhgroups": [ 00:23:17.275 "null", 00:23:17.275 "ffdhe2048", 00:23:17.275 "ffdhe3072", 00:23:17.275 "ffdhe4096", 00:23:17.275 "ffdhe6144", 00:23:17.275 "ffdhe8192" 00:23:17.275 ], 00:23:17.275 "rdma_umr_per_io": false 00:23:17.275 } 00:23:17.275 }, 00:23:17.275 { 00:23:17.275 "method": "bdev_nvme_attach_controller", 00:23:17.275 "params": { 00:23:17.275 "name": "nvme0", 00:23:17.275 "trtype": "TCP", 00:23:17.275 "adrfam": "IPv4", 00:23:17.275 "traddr": "10.0.0.2", 00:23:17.275 "trsvcid": "4420", 00:23:17.275 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:17.275 "prchk_reftag": false, 00:23:17.275 "prchk_guard": false, 00:23:17.275 "ctrlr_loss_timeout_sec": 0, 00:23:17.275 "reconnect_delay_sec": 0, 00:23:17.275 "fast_io_fail_timeout_sec": 0, 00:23:17.275 "psk": "key0", 00:23:17.275 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:17.275 "hdgst": false, 00:23:17.275 "ddgst": false, 00:23:17.275 "multipath": "multipath" 00:23:17.275 } 00:23:17.275 }, 00:23:17.275 { 00:23:17.275 "method": "bdev_nvme_set_hotplug", 00:23:17.275 "params": { 00:23:17.275 "period_us": 100000, 00:23:17.275 "enable": false 00:23:17.275 } 00:23:17.275 }, 00:23:17.275 { 00:23:17.275 "method": "bdev_enable_histogram", 00:23:17.275 "params": { 00:23:17.275 "name": "nvme0n1", 00:23:17.275 "enable": true 00:23:17.275 } 00:23:17.275 }, 00:23:17.275 { 00:23:17.275 "method": "bdev_wait_for_examine" 00:23:17.275 } 00:23:17.275 ] 00:23:17.275 }, 00:23:17.275 { 00:23:17.275 "subsystem": "nbd", 00:23:17.275 "config": [] 00:23:17.275 } 00:23:17.275 ] 00:23:17.275 }' 00:23:17.275 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 357617 00:23:17.275 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 357617 ']' 00:23:17.275 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 357617 00:23:17.275 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:17.275 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:17.275 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 357617 00:23:17.275 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:17.275 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:17.275 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 357617' 00:23:17.275 killing process with pid 357617 00:23:17.275 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 357617 00:23:17.275 Received shutdown signal, test time was about 1.000000 seconds 00:23:17.275 00:23:17.275 Latency(us) 00:23:17.275 [2024-12-13T04:39:17.290Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:17.275 [2024-12-13T04:39:17.290Z] =================================================================================================================== 00:23:17.275 [2024-12-13T04:39:17.291Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:17.276 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 357617 00:23:17.535 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 357595 00:23:17.535 05:39:17 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 357595 ']' 00:23:17.535 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 357595 00:23:17.535 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:17.535 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:17.535 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 357595 00:23:17.535 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:17.535 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:17.535 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 357595' 00:23:17.535 killing process with pid 357595 00:23:17.535 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 357595 00:23:17.535 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 357595 00:23:17.535 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:23:17.535 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:17.535 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:17.535 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:23:17.535 "subsystems": [ 00:23:17.535 { 00:23:17.535 "subsystem": "keyring", 00:23:17.535 "config": [ 00:23:17.535 { 00:23:17.535 "method": "keyring_file_add_key", 00:23:17.535 "params": { 00:23:17.535 "name": "key0", 00:23:17.535 "path": "/tmp/tmp.CSwafsDekC" 00:23:17.535 } 00:23:17.535 } 00:23:17.535 ] 00:23:17.535 }, 00:23:17.535 { 00:23:17.535 "subsystem": "iobuf", 00:23:17.536 "config": [ 00:23:17.536 { 00:23:17.536 "method": "iobuf_set_options", 00:23:17.536 "params": { 00:23:17.536 "small_pool_count": 8192, 00:23:17.536 "large_pool_count": 1024, 00:23:17.536 "small_bufsize": 8192, 00:23:17.536 "large_bufsize": 135168, 00:23:17.536 "enable_numa": false 00:23:17.536 } 00:23:17.536 } 00:23:17.536 ] 00:23:17.536 }, 00:23:17.536 { 00:23:17.536 "subsystem": "sock", 00:23:17.536 "config": [ 00:23:17.536 { 00:23:17.536 "method": "sock_set_default_impl", 00:23:17.536 "params": { 00:23:17.536 "impl_name": "posix" 00:23:17.536 } 00:23:17.536 }, 00:23:17.536 { 00:23:17.536 "method": "sock_impl_set_options", 00:23:17.536 "params": { 00:23:17.536 "impl_name": "ssl", 00:23:17.536 "recv_buf_size": 4096, 00:23:17.536 "send_buf_size": 4096, 00:23:17.536 "enable_recv_pipe": true, 00:23:17.536 "enable_quickack": false, 00:23:17.536 "enable_placement_id": 0, 00:23:17.536 "enable_zerocopy_send_server": true, 00:23:17.536 "enable_zerocopy_send_client": false, 00:23:17.536 "zerocopy_threshold": 0, 00:23:17.536 "tls_version": 0, 00:23:17.536 "enable_ktls": false 00:23:17.536 } 00:23:17.536 }, 00:23:17.536 { 00:23:17.536 "method": "sock_impl_set_options", 00:23:17.536 "params": { 00:23:17.536 "impl_name": "posix", 00:23:17.536 "recv_buf_size": 2097152, 00:23:17.536 "send_buf_size": 2097152, 00:23:17.536 "enable_recv_pipe": true, 00:23:17.536 "enable_quickack": false, 00:23:17.536 "enable_placement_id": 0, 00:23:17.536 "enable_zerocopy_send_server": true, 00:23:17.536 "enable_zerocopy_send_client": false, 00:23:17.536 
"zerocopy_threshold": 0, 00:23:17.536 "tls_version": 0, 00:23:17.536 "enable_ktls": false 00:23:17.536 } 00:23:17.536 } 00:23:17.536 ] 00:23:17.536 }, 00:23:17.536 { 00:23:17.536 "subsystem": "vmd", 00:23:17.536 "config": [] 00:23:17.536 }, 00:23:17.536 { 00:23:17.536 "subsystem": "accel", 00:23:17.536 "config": [ 00:23:17.536 { 00:23:17.536 "method": "accel_set_options", 00:23:17.536 "params": { 00:23:17.536 "small_cache_size": 128, 00:23:17.536 "large_cache_size": 16, 00:23:17.536 "task_count": 2048, 00:23:17.536 "sequence_count": 2048, 00:23:17.536 "buf_count": 2048 00:23:17.536 } 00:23:17.536 } 00:23:17.536 ] 00:23:17.536 }, 00:23:17.536 { 00:23:17.536 "subsystem": "bdev", 00:23:17.536 "config": [ 00:23:17.536 { 00:23:17.536 "method": "bdev_set_options", 00:23:17.536 "params": { 00:23:17.536 "bdev_io_pool_size": 65535, 00:23:17.536 "bdev_io_cache_size": 256, 00:23:17.536 "bdev_auto_examine": true, 00:23:17.536 "iobuf_small_cache_size": 128, 00:23:17.536 "iobuf_large_cache_size": 16 00:23:17.536 } 00:23:17.536 }, 00:23:17.536 { 00:23:17.536 "method": "bdev_raid_set_options", 00:23:17.536 "params": { 00:23:17.536 "process_window_size_kb": 1024, 00:23:17.536 "process_max_bandwidth_mb_sec": 0 00:23:17.536 } 00:23:17.536 }, 00:23:17.536 { 00:23:17.536 "method": "bdev_iscsi_set_options", 00:23:17.536 "params": { 00:23:17.536 "timeout_sec": 30 00:23:17.536 } 00:23:17.536 }, 00:23:17.536 { 00:23:17.536 "method": "bdev_nvme_set_options", 00:23:17.536 "params": { 00:23:17.536 "action_on_timeout": "none", 00:23:17.536 "timeout_us": 0, 00:23:17.536 "timeout_admin_us": 0, 00:23:17.536 "keep_alive_timeout_ms": 10000, 00:23:17.536 "arbitration_burst": 0, 00:23:17.536 "low_priority_weight": 0, 00:23:17.536 "medium_priority_weight": 0, 00:23:17.536 "high_priority_weight": 0, 00:23:17.536 "nvme_adminq_poll_period_us": 10000, 00:23:17.536 "nvme_ioq_poll_period_us": 0, 00:23:17.536 "io_queue_requests": 0, 00:23:17.536 "delay_cmd_submit": true, 00:23:17.536 "transport_retry_count": 4, 00:23:17.536 "bdev_retry_count": 3, 00:23:17.536 "transport_ack_timeout": 0, 00:23:17.536 "ctrlr_loss_timeout_sec": 0, 00:23:17.536 "reconnect_delay_sec": 0, 00:23:17.536 "fast_io_fail_timeout_sec": 0, 00:23:17.536 "disable_auto_failback": false, 00:23:17.536 "generate_uuids": false, 00:23:17.536 "transport_tos": 0, 00:23:17.536 "nvme_error_stat": false, 00:23:17.536 "rdma_srq_size": 0, 00:23:17.536 "io_path_stat": false, 00:23:17.536 "allow_accel_sequence": false, 00:23:17.536 "rdma_max_cq_size": 0, 00:23:17.536 "rdma_cm_event_timeout_ms": 0, 00:23:17.536 "dhchap_digests": [ 00:23:17.536 "sha256", 00:23:17.536 "sha384", 00:23:17.536 "sha512" 00:23:17.536 ], 00:23:17.536 "dhchap_dhgroups": [ 00:23:17.536 "null", 00:23:17.536 "ffdhe2048", 00:23:17.536 "ffdhe3072", 00:23:17.536 "ffdhe4096", 00:23:17.536 "ffdhe6144", 00:23:17.536 "ffdhe8192" 00:23:17.536 ], 00:23:17.536 "rdma_umr_per_io": false 00:23:17.536 } 00:23:17.536 }, 00:23:17.536 { 00:23:17.536 "method": "bdev_nvme_set_hotplug", 00:23:17.536 "params": { 00:23:17.536 "period_us": 100000, 00:23:17.536 "enable": false 00:23:17.536 } 00:23:17.536 }, 00:23:17.536 { 00:23:17.536 "method": "bdev_malloc_create", 00:23:17.536 "params": { 00:23:17.536 "name": "malloc0", 00:23:17.536 "num_blocks": 8192, 00:23:17.536 "block_size": 4096, 00:23:17.536 "physical_block_size": 4096, 00:23:17.536 "uuid": "5f8fef8f-fb29-42be-9385-68dbef1ac2c2", 00:23:17.536 "optimal_io_boundary": 0, 00:23:17.536 "md_size": 0, 00:23:17.536 "dif_type": 0, 00:23:17.536 "dif_is_head_of_md": false, 00:23:17.536 
"dif_pi_format": 0 00:23:17.536 } 00:23:17.536 }, 00:23:17.536 { 00:23:17.536 "method": "bdev_wait_for_examine" 00:23:17.536 } 00:23:17.536 ] 00:23:17.536 }, 00:23:17.536 { 00:23:17.536 "subsystem": "nbd", 00:23:17.536 "config": [] 00:23:17.536 }, 00:23:17.536 { 00:23:17.536 "subsystem": "scheduler", 00:23:17.536 "config": [ 00:23:17.536 { 00:23:17.536 "method": "framework_set_scheduler", 00:23:17.536 "params": { 00:23:17.536 "name": "static" 00:23:17.536 } 00:23:17.536 } 00:23:17.536 ] 00:23:17.536 }, 00:23:17.536 { 00:23:17.536 "subsystem": "nvmf", 00:23:17.536 "config": [ 00:23:17.536 { 00:23:17.536 "method": "nvmf_set_config", 00:23:17.536 "params": { 00:23:17.536 "discovery_filter": "match_any", 00:23:17.536 "admin_cmd_passthru": { 00:23:17.536 "identify_ctrlr": false 00:23:17.536 }, 00:23:17.536 "dhchap_digests": [ 00:23:17.536 "sha256", 00:23:17.536 "sha384", 00:23:17.536 "sha512" 00:23:17.536 ], 00:23:17.536 "dhchap_dhgroups": [ 00:23:17.536 "null", 00:23:17.536 "ffdhe2048", 00:23:17.536 "ffdhe3072", 00:23:17.536 "ffdhe4096", 00:23:17.536 "ffdhe6144", 00:23:17.536 "ffdhe8192" 00:23:17.536 ] 00:23:17.536 } 00:23:17.536 }, 00:23:17.536 { 00:23:17.536 "method": "nvmf_set_max_subsystems", 00:23:17.536 "params": { 00:23:17.536 "max_subsystems": 1024 00:23:17.536 } 00:23:17.536 }, 00:23:17.536 { 00:23:17.536 "method": "nvmf_set_crdt", 00:23:17.536 "params": { 00:23:17.536 "crdt1": 0, 00:23:17.536 "crdt2": 0, 00:23:17.536 "crdt3": 0 00:23:17.536 } 00:23:17.536 }, 00:23:17.536 { 00:23:17.536 "method": "nvmf_create_transport", 00:23:17.536 "params": { 00:23:17.536 "trtype": "TCP", 00:23:17.536 "max_queue_depth": 128, 00:23:17.536 "max_io_qpairs_per_ctrlr": 127, 00:23:17.536 "in_capsule_data_size": 4096, 00:23:17.536 "max_io_size": 131072, 00:23:17.536 "io_unit_size": 131072, 00:23:17.536 "max_aq_depth": 128, 00:23:17.536 "num_shared_buffers": 511, 00:23:17.536 "buf_cache_size": 4294967295, 00:23:17.536 "dif_insert_or_strip": false, 00:23:17.536 "zcopy": false, 00:23:17.536 "c2h_success": false, 00:23:17.536 "sock_priority": 0, 00:23:17.536 "abort_timeout_sec": 1, 00:23:17.536 "ack_timeout": 0, 00:23:17.536 "data_wr_pool_size": 0 00:23:17.536 } 00:23:17.536 }, 00:23:17.536 { 00:23:17.536 "method": "nvmf_create_subsystem", 00:23:17.536 "params": { 00:23:17.536 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:17.536 "allow_any_host": false, 00:23:17.536 "serial_number": "00000000000000000000", 00:23:17.536 "model_number": "SPDK bdev Controller", 00:23:17.536 "max_namespaces": 32, 00:23:17.536 "min_cntlid": 1, 00:23:17.536 "max_cntlid": 65519, 00:23:17.536 "ana_reporting": false 00:23:17.536 } 00:23:17.536 }, 00:23:17.536 { 00:23:17.536 "method": "nvmf_subsystem_add_host", 00:23:17.536 "params": { 00:23:17.536 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:17.536 "host": "nqn.2016-06.io.spdk:host1", 00:23:17.536 "psk": "key0" 00:23:17.536 } 00:23:17.536 }, 00:23:17.536 { 00:23:17.536 "method": "nvmf_subsystem_add_ns", 00:23:17.536 "params": { 00:23:17.536 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:17.536 "namespace": { 00:23:17.536 "nsid": 1, 00:23:17.536 "bdev_name": "malloc0", 00:23:17.536 "nguid": "5F8FEF8FFB2942BE938568DBEF1AC2C2", 00:23:17.536 "uuid": "5f8fef8f-fb29-42be-9385-68dbef1ac2c2", 00:23:17.536 "no_auto_visible": false 00:23:17.536 } 00:23:17.536 } 00:23:17.536 }, 00:23:17.536 { 00:23:17.536 "method": "nvmf_subsystem_add_listener", 00:23:17.536 "params": { 00:23:17.536 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:17.536 "listen_address": { 00:23:17.536 "trtype": "TCP", 00:23:17.536 "adrfam": 
"IPv4", 00:23:17.536 "traddr": "10.0.0.2", 00:23:17.536 "trsvcid": "4420" 00:23:17.536 }, 00:23:17.536 "secure_channel": false, 00:23:17.536 "sock_impl": "ssl" 00:23:17.536 } 00:23:17.536 } 00:23:17.536 ] 00:23:17.536 } 00:23:17.536 ] 00:23:17.536 }' 00:23:17.536 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:17.537 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=358082 00:23:17.537 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:23:17.537 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 358082 00:23:17.537 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 358082 ']' 00:23:17.537 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:17.537 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:17.537 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:17.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:17.537 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:17.537 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:17.796 [2024-12-13 05:39:17.580713] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:23:17.796 [2024-12-13 05:39:17.580759] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:17.796 [2024-12-13 05:39:17.657543] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:17.796 [2024-12-13 05:39:17.678387] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:17.796 [2024-12-13 05:39:17.678424] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:17.796 [2024-12-13 05:39:17.678431] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:17.796 [2024-12-13 05:39:17.678437] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:17.796 [2024-12-13 05:39:17.678442] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:17.796 [2024-12-13 05:39:17.678972] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:23:18.055 [2024-12-13 05:39:17.886319] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:18.055 [2024-12-13 05:39:17.918355] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:18.055 [2024-12-13 05:39:17.918562] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:18.622 05:39:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:18.622 05:39:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:18.622 05:39:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:18.622 05:39:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:18.622 05:39:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:18.622 05:39:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:18.622 05:39:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=358319 00:23:18.622 05:39:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 358319 /var/tmp/bdevperf.sock 00:23:18.622 05:39:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 358319 ']' 00:23:18.622 05:39:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:18.622 05:39:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:23:18.622 05:39:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:18.622 05:39:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:18.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
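The bdevperf process being waited on was given the same treatment: its saved configuration (echoed next) is fed back on /dev/fd/63, so the replacement process re-adds key0 and re-attaches the TLS controller from JSON alone, before its RPC socket is touched. One way to reproduce that hand-off is bash process substitution; a sketch with an illustrative config path:

# Sketch: start bdevperf in wait mode (-z) with a JSON config delivered on an fd.
SPDK_DIR=/path/to/spdk                       # assumed checkout location
"$SPDK_DIR/build/examples/bdevperf" -m 2 -z -r /var/tmp/bdevperf.sock \
    -q 128 -o 4k -w verify -t 1 -c <(cat /tmp/bperf.json)   # expands to /dev/fd/NN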
00:23:18.623 05:39:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:23:18.623 "subsystems": [ 00:23:18.623 { 00:23:18.623 "subsystem": "keyring", 00:23:18.623 "config": [ 00:23:18.623 { 00:23:18.623 "method": "keyring_file_add_key", 00:23:18.623 "params": { 00:23:18.623 "name": "key0", 00:23:18.623 "path": "/tmp/tmp.CSwafsDekC" 00:23:18.623 } 00:23:18.623 } 00:23:18.623 ] 00:23:18.623 }, 00:23:18.623 { 00:23:18.623 "subsystem": "iobuf", 00:23:18.623 "config": [ 00:23:18.623 { 00:23:18.623 "method": "iobuf_set_options", 00:23:18.623 "params": { 00:23:18.623 "small_pool_count": 8192, 00:23:18.623 "large_pool_count": 1024, 00:23:18.623 "small_bufsize": 8192, 00:23:18.623 "large_bufsize": 135168, 00:23:18.623 "enable_numa": false 00:23:18.623 } 00:23:18.623 } 00:23:18.623 ] 00:23:18.623 }, 00:23:18.623 { 00:23:18.623 "subsystem": "sock", 00:23:18.623 "config": [ 00:23:18.623 { 00:23:18.623 "method": "sock_set_default_impl", 00:23:18.623 "params": { 00:23:18.623 "impl_name": "posix" 00:23:18.623 } 00:23:18.623 }, 00:23:18.623 { 00:23:18.623 "method": "sock_impl_set_options", 00:23:18.623 "params": { 00:23:18.623 "impl_name": "ssl", 00:23:18.623 "recv_buf_size": 4096, 00:23:18.623 "send_buf_size": 4096, 00:23:18.623 "enable_recv_pipe": true, 00:23:18.623 "enable_quickack": false, 00:23:18.623 "enable_placement_id": 0, 00:23:18.623 "enable_zerocopy_send_server": true, 00:23:18.623 "enable_zerocopy_send_client": false, 00:23:18.623 "zerocopy_threshold": 0, 00:23:18.623 "tls_version": 0, 00:23:18.623 "enable_ktls": false 00:23:18.623 } 00:23:18.623 }, 00:23:18.623 { 00:23:18.623 "method": "sock_impl_set_options", 00:23:18.623 "params": { 00:23:18.623 "impl_name": "posix", 00:23:18.623 "recv_buf_size": 2097152, 00:23:18.623 "send_buf_size": 2097152, 00:23:18.623 "enable_recv_pipe": true, 00:23:18.623 "enable_quickack": false, 00:23:18.623 "enable_placement_id": 0, 00:23:18.623 "enable_zerocopy_send_server": true, 00:23:18.623 "enable_zerocopy_send_client": false, 00:23:18.623 "zerocopy_threshold": 0, 00:23:18.623 "tls_version": 0, 00:23:18.623 "enable_ktls": false 00:23:18.623 } 00:23:18.623 } 00:23:18.623 ] 00:23:18.623 }, 00:23:18.623 { 00:23:18.623 "subsystem": "vmd", 00:23:18.623 "config": [] 00:23:18.623 }, 00:23:18.623 { 00:23:18.623 "subsystem": "accel", 00:23:18.623 "config": [ 00:23:18.623 { 00:23:18.623 "method": "accel_set_options", 00:23:18.623 "params": { 00:23:18.623 "small_cache_size": 128, 00:23:18.623 "large_cache_size": 16, 00:23:18.623 "task_count": 2048, 00:23:18.623 "sequence_count": 2048, 00:23:18.623 "buf_count": 2048 00:23:18.623 } 00:23:18.623 } 00:23:18.623 ] 00:23:18.623 }, 00:23:18.623 { 00:23:18.623 "subsystem": "bdev", 00:23:18.623 "config": [ 00:23:18.623 { 00:23:18.623 "method": "bdev_set_options", 00:23:18.623 "params": { 00:23:18.623 "bdev_io_pool_size": 65535, 00:23:18.623 "bdev_io_cache_size": 256, 00:23:18.623 "bdev_auto_examine": true, 00:23:18.623 "iobuf_small_cache_size": 128, 00:23:18.623 "iobuf_large_cache_size": 16 00:23:18.623 } 00:23:18.623 }, 00:23:18.623 { 00:23:18.623 "method": "bdev_raid_set_options", 00:23:18.623 "params": { 00:23:18.623 "process_window_size_kb": 1024, 00:23:18.623 "process_max_bandwidth_mb_sec": 0 00:23:18.623 } 00:23:18.623 }, 00:23:18.623 { 00:23:18.623 "method": "bdev_iscsi_set_options", 00:23:18.623 "params": { 00:23:18.623 "timeout_sec": 30 00:23:18.623 } 00:23:18.623 }, 00:23:18.623 { 00:23:18.623 "method": "bdev_nvme_set_options", 00:23:18.623 "params": { 00:23:18.623 "action_on_timeout": "none", 
00:23:18.623 "timeout_us": 0, 00:23:18.623 "timeout_admin_us": 0, 00:23:18.623 "keep_alive_timeout_ms": 10000, 00:23:18.623 "arbitration_burst": 0, 00:23:18.623 "low_priority_weight": 0, 00:23:18.623 "medium_priority_weight": 0, 00:23:18.623 "high_priority_weight": 0, 00:23:18.623 "nvme_adminq_poll_period_us": 10000, 00:23:18.623 "nvme_ioq_poll_period_us": 0, 00:23:18.623 "io_queue_requests": 512, 00:23:18.623 "delay_cmd_submit": true, 00:23:18.623 "transport_retry_count": 4, 00:23:18.623 "bdev_retry_count": 3, 00:23:18.623 "transport_ack_timeout": 0, 00:23:18.623 "ctrlr_loss_timeout_sec": 0, 00:23:18.623 "reconnect_delay_sec": 0, 00:23:18.623 "fast_io_fail_timeout_sec": 0, 00:23:18.623 "disable_auto_failback": false, 00:23:18.623 "generate_uuids": false, 00:23:18.623 "transport_tos": 0, 00:23:18.623 "nvme_error_stat": false, 00:23:18.623 "rdma_srq_size": 0, 00:23:18.623 "io_path_stat": false, 00:23:18.623 "allow_accel_sequence": false, 00:23:18.623 "rdma_max_cq_size": 0, 00:23:18.623 "rdma_cm_event_timeout_ms": 0, 00:23:18.623 "dhchap_digests": [ 00:23:18.623 "sha256", 00:23:18.623 "sha384", 00:23:18.623 "sha512" 00:23:18.623 ], 00:23:18.623 "dhchap_dhgroups": [ 00:23:18.623 "null", 00:23:18.623 "ffdhe2048", 00:23:18.623 "ffdhe3072", 00:23:18.623 "ffdhe4096", 00:23:18.623 "ffdhe6144", 00:23:18.623 "ffdhe8192" 00:23:18.623 ], 00:23:18.623 "rdma_umr_per_io": false 00:23:18.623 } 00:23:18.623 }, 00:23:18.623 { 00:23:18.623 "method": "bdev_nvme_attach_controller", 00:23:18.623 "params": { 00:23:18.623 "name": "nvme0", 00:23:18.623 "trtype": "TCP", 00:23:18.623 "adrfam": "IPv4", 00:23:18.623 "traddr": "10.0.0.2", 00:23:18.623 "trsvcid": "4420", 00:23:18.623 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:18.623 "prchk_reftag": false, 00:23:18.623 "prchk_guard": false, 00:23:18.623 "ctrlr_loss_timeout_sec": 0, 00:23:18.623 "reconnect_delay_sec": 0, 00:23:18.623 "fast_io_fail_timeout_sec": 0, 00:23:18.623 "psk": "key0", 00:23:18.623 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:18.623 "hdgst": false, 00:23:18.623 "ddgst": false, 00:23:18.623 "multipath": "multipath" 00:23:18.623 } 00:23:18.623 }, 00:23:18.623 { 00:23:18.623 "method": "bdev_nvme_set_hotplug", 00:23:18.623 "params": { 00:23:18.623 "period_us": 100000, 00:23:18.623 "enable": false 00:23:18.623 } 00:23:18.623 }, 00:23:18.623 { 00:23:18.623 "method": "bdev_enable_histogram", 00:23:18.623 "params": { 00:23:18.623 "name": "nvme0n1", 00:23:18.623 "enable": true 00:23:18.623 } 00:23:18.623 }, 00:23:18.623 { 00:23:18.623 "method": "bdev_wait_for_examine" 00:23:18.623 } 00:23:18.623 ] 00:23:18.623 }, 00:23:18.623 { 00:23:18.623 "subsystem": "nbd", 00:23:18.623 "config": [] 00:23:18.623 } 00:23:18.623 ] 00:23:18.623 }' 00:23:18.623 05:39:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:18.623 05:39:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:18.623 [2024-12-13 05:39:18.498624] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:23:18.623 [2024-12-13 05:39:18.498671] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid358319 ] 00:23:18.623 [2024-12-13 05:39:18.569762] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:18.623 [2024-12-13 05:39:18.591488] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:23:18.882 [2024-12-13 05:39:18.739863] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:19.449 05:39:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:19.449 05:39:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:19.450 05:39:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:19.450 05:39:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:23:19.708 05:39:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:19.708 05:39:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:19.708 Running I/O for 1 seconds... 00:23:20.645 5110.00 IOPS, 19.96 MiB/s 00:23:20.645 Latency(us) 00:23:20.645 [2024-12-13T04:39:20.660Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:20.645 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:20.645 Verification LBA range: start 0x0 length 0x2000 00:23:20.645 nvme0n1 : 1.02 5158.82 20.15 0.00 0.00 24622.70 5024.43 43940.33 00:23:20.645 [2024-12-13T04:39:20.660Z] =================================================================================================================== 00:23:20.645 [2024-12-13T04:39:20.660Z] Total : 5158.82 20.15 0.00 0.00 24622.70 5024.43 43940.33 00:23:20.645 { 00:23:20.645 "results": [ 00:23:20.645 { 00:23:20.645 "job": "nvme0n1", 00:23:20.645 "core_mask": "0x2", 00:23:20.645 "workload": "verify", 00:23:20.645 "status": "finished", 00:23:20.645 "verify_range": { 00:23:20.645 "start": 0, 00:23:20.645 "length": 8192 00:23:20.645 }, 00:23:20.645 "queue_depth": 128, 00:23:20.645 "io_size": 4096, 00:23:20.645 "runtime": 1.015349, 00:23:20.645 "iops": 5158.817313061814, 00:23:20.645 "mibps": 20.151630129147712, 00:23:20.645 "io_failed": 0, 00:23:20.645 "io_timeout": 0, 00:23:20.645 "avg_latency_us": 24622.69822796778, 00:23:20.645 "min_latency_us": 5024.426666666666, 00:23:20.645 "max_latency_us": 43940.32761904762 00:23:20.645 } 00:23:20.645 ], 00:23:20.645 "core_count": 1 00:23:20.645 } 00:23:20.904 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:23:20.904 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:23:20.904 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:23:20.904 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:23:20.904 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:23:20.904 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = 
--pid ']' 00:23:20.904 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:23:20.904 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:23:20.904 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:23:20.904 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:23:20.904 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:23:20.904 nvmf_trace.0 00:23:20.904 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:23:20.904 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 358319 00:23:20.904 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 358319 ']' 00:23:20.904 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 358319 00:23:20.904 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:20.904 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:20.904 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 358319 00:23:20.904 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:20.904 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:20.904 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 358319' 00:23:20.904 killing process with pid 358319 00:23:20.904 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 358319 00:23:20.904 Received shutdown signal, test time was about 1.000000 seconds 00:23:20.904 00:23:20.904 Latency(us) 00:23:20.904 [2024-12-13T04:39:20.919Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:20.904 [2024-12-13T04:39:20.919Z] =================================================================================================================== 00:23:20.904 [2024-12-13T04:39:20.919Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:20.904 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 358319 00:23:21.163 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:23:21.163 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:21.163 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:23:21.163 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:21.163 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:23:21.163 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:21.163 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:21.163 rmmod nvme_tcp 00:23:21.163 rmmod nvme_fabrics 00:23:21.163 rmmod nvme_keyring 00:23:21.163 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:21.163 05:39:21 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:23:21.163 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:23:21.163 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 358082 ']' 00:23:21.163 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 358082 00:23:21.163 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 358082 ']' 00:23:21.163 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 358082 00:23:21.163 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:21.163 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:21.163 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 358082 00:23:21.163 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:21.163 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:21.163 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 358082' 00:23:21.163 killing process with pid 358082 00:23:21.163 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 358082 00:23:21.163 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 358082 00:23:21.422 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:21.422 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:21.422 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:21.423 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:23:21.423 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:23:21.423 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:21.423 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:23:21.423 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:21.423 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:21.423 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:21.423 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:21.423 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:23.327 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:23.327 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.sOkPLUf3g9 /tmp/tmp.JVMOXBRpTI /tmp/tmp.CSwafsDekC 00:23:23.327 00:23:23.327 real 1m18.936s 00:23:23.327 user 2m1.583s 00:23:23.327 sys 0m29.499s 00:23:23.327 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:23.327 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:23.327 ************************************ 00:23:23.327 END TEST nvmf_tls 00:23:23.327 
************************************ 00:23:23.587 05:39:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:23.587 05:39:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:23.587 05:39:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:23.587 05:39:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:23.587 ************************************ 00:23:23.587 START TEST nvmf_fips 00:23:23.587 ************************************ 00:23:23.587 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:23.587 * Looking for test storage... 00:23:23.587 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:23:23.587 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:23.587 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lcov --version 00:23:23.587 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:23.587 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:23.587 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:23.587 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:23.587 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:23.587 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:23:23.587 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:23:23.587 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:23:23.587 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:23:23.587 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:23:23.587 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:23:23.587 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:23:23.587 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:23.587 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:23:23.587 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:23:23.587 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:23.587 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:23.587 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:23:23.587 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:23:23.587 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:23.587 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:23:23.587 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:23:23.587 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:23:23.587 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:23:23.587 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:23.587 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:23:23.587 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:23:23.587 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:23.587 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:23.587 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:23:23.587 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:23.587 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:23.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:23.587 --rc genhtml_branch_coverage=1 00:23:23.587 --rc genhtml_function_coverage=1 00:23:23.587 --rc genhtml_legend=1 00:23:23.587 --rc geninfo_all_blocks=1 00:23:23.587 --rc geninfo_unexecuted_blocks=1 00:23:23.587 00:23:23.587 ' 00:23:23.587 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:23.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:23.587 --rc genhtml_branch_coverage=1 00:23:23.587 --rc genhtml_function_coverage=1 00:23:23.587 --rc genhtml_legend=1 00:23:23.587 --rc geninfo_all_blocks=1 00:23:23.587 --rc geninfo_unexecuted_blocks=1 00:23:23.587 00:23:23.587 ' 00:23:23.587 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:23.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:23.587 --rc genhtml_branch_coverage=1 00:23:23.587 --rc genhtml_function_coverage=1 00:23:23.587 --rc genhtml_legend=1 00:23:23.587 --rc geninfo_all_blocks=1 00:23:23.587 --rc geninfo_unexecuted_blocks=1 00:23:23.587 00:23:23.587 ' 00:23:23.587 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:23.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:23.587 --rc genhtml_branch_coverage=1 00:23:23.587 --rc genhtml_function_coverage=1 00:23:23.587 --rc genhtml_legend=1 00:23:23.587 --rc geninfo_all_blocks=1 00:23:23.587 --rc geninfo_unexecuted_blocks=1 00:23:23.587 00:23:23.587 ' 00:23:23.587 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:23.587 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:23:23.587 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:23:23.587 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:23.587 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:23.587 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:23.587 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:23.587 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:23.587 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:23.587 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:23.587 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:23.587 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:23.587 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:23:23.587 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:23:23.587 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:23.587 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:23.587 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:23.587 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:23.587 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:23.587 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:23:23.587 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:23.588 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:23.588 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:23.588 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:23.588 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:23.588 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:23.588 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:23:23.588 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:23.588 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:23:23.588 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:23.588 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:23.588 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:23.588 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:23.588 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:23.588 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:23.588 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:23.588 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:23.588 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:23.588 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:23.588 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:23.588 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:23:23.588 05:39:23 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:23:23.588 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:23:23.588 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:23:23.847 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:23:23.847 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:23:23.847 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:23.847 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:23.847 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:23:23.847 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:23:23.847 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:23:23.847 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:23:23.847 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:23:23.847 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:23:23.847 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:23:23.847 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:23.847 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:23:23.847 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:23:23.847 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:23.847 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:23.847 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:23:23.847 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:23:23.847 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:23.847 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:23:23.847 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:23:23.847 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:23:23.847 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:23:23.847 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:23.847 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:23:23.847 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:23:23.847 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:23.847 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:23.847 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:23:23.847 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:23.847 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:23:23.847 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:23:23.847 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:23.847 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:23:23.847 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:23:23.847 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:23:23.847 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:23:23.847 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:23.847 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:23:23.847 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:23:23.847 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:23.847 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:23:23.847 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:23:23.847 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:23:23.847 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:23:23.847 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:23:23.847 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:23:23.847 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:23:23.847 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:23:23.847 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:23:23.847 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:23:23.847 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:23:23.847 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:23:23.847 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:23:23.847 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:23:23.847 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:23:23.847 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:23:23.847 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:23:23.847 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:23:23.847 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:23:23.847 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:23:23.847 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:23:23.847 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:23:23.847 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:23:23.847 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:23:23.847 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:23:23.847 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:23.847 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:23:23.847 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:23.847 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # type -P openssl 00:23:23.847 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:23.847 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:23:23.847 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:23:23.847 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:23:23.847 Error setting digest 00:23:23.847 4082E0325E7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:23:23.847 4082E0325E7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:23:23.847 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:23:23.847 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:23.847 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:23.848 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:23.848 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:23:23.848 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:23.848 
05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:23.848 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:23.848 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:23.848 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:23.848 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:23.848 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:23.848 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:23.848 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:23.848 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:23.848 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:23:23.848 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:30.417 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:30.417 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:23:30.417 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:30.417 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:30.417 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:30.417 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:30.417 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:30.417 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:23:30.417 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:30.417 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:23:30.417 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:23:30.417 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # x722=() 00:23:30.417 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:23:30.417 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:23:30.417 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:23:30.417 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:30.417 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:30.417 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:30.417 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:30.417 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:30.417 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:30.417 05:39:29 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:30.417 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:30.417 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:30.417 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:30.417 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:30.417 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:30.417 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:30.417 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:30.417 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:30.417 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:30.417 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:30.417 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:30.417 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:30.417 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:30.417 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:30.417 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:30.417 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:30.417 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:30.417 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:30.417 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:30.417 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:30.417 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:30.417 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:30.417 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:30.417 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:30.417 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:30.417 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:30.417 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:30.417 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:30.417 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:30.417 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:30.417 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:30.417 05:39:29 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:30.417 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:30.417 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:30.417 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:30.418 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:30.418 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:30.418 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:30.418 Found net devices under 0000:af:00.0: cvl_0_0 00:23:30.418 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:30.418 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:30.418 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:30.418 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:30.418 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:30.418 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:30.418 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:30.418 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:30.418 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:30.418 Found net devices under 0000:af:00.1: cvl_0_1 00:23:30.418 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:30.418 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:30.418 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:23:30.418 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:30.418 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:30.418 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:30.418 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:30.418 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:30.418 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:30.418 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:30.418 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:30.418 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:30.418 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:30.418 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:30.418 05:39:29 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:30.418 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:30.418 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:30.418 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:30.418 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:30.418 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:30.418 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:30.418 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:30.418 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:30.418 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:30.418 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:30.418 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:30.418 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:30.418 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:30.418 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:30.418 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:30.418 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.361 ms 00:23:30.418 00:23:30.418 --- 10.0.0.2 ping statistics --- 00:23:30.418 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:30.418 rtt min/avg/max/mdev = 0.361/0.361/0.361/0.000 ms 00:23:30.418 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:30.418 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:30.418 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:23:30.418 00:23:30.418 --- 10.0.0.1 ping statistics --- 00:23:30.418 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:30.418 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:23:30.418 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:30.418 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:23:30.418 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:30.418 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:30.418 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:30.418 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:30.418 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:30.418 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:30.418 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:30.418 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:23:30.418 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:30.418 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:30.418 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:30.418 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=362262 00:23:30.418 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 362262 00:23:30.418 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:30.418 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 362262 ']' 00:23:30.418 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:30.418 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:30.418 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:30.418 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:30.418 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:30.418 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:30.418 [2024-12-13 05:39:29.778763] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
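For reference, the target/initiator plumbing that the two ping checks above verify was set up by the nvmf/common.sh nvmf_tcp_init steps traced a few entries earlier. A condensed, hand-written sketch of that setup follows; the cvl_0_0/cvl_0_1 interface names and the 10.0.0.0/24 addresses are specific to this runner, and the iptables rule is shown without the SPDK_NVMF comment the run attaches:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port

With that in place the target listens on 10.0.0.2:4420 inside cvl_0_0_ns_spdk while the host connects from the default namespace over cvl_0_1, which is why the trace pings once in each direction before starting nvmf_tgt.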
00:23:30.418 [2024-12-13 05:39:29.778811] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:30.418 [2024-12-13 05:39:29.853581] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:30.418 [2024-12-13 05:39:29.874107] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:30.418 [2024-12-13 05:39:29.874142] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:30.418 [2024-12-13 05:39:29.874149] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:30.418 [2024-12-13 05:39:29.874155] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:30.418 [2024-12-13 05:39:29.874160] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:30.418 [2024-12-13 05:39:29.874635] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:23:30.418 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:30.418 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:23:30.418 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:30.418 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:30.418 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:30.418 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:30.418 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:23:30.418 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:23:30.418 05:39:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:23:30.418 05:39:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.yVv 00:23:30.418 05:39:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:23:30.418 05:39:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.yVv 00:23:30.418 05:39:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.yVv 00:23:30.418 05:39:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.yVv 00:23:30.418 05:39:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:30.418 [2024-12-13 05:39:30.188686] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:30.418 [2024-12-13 05:39:30.204690] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:30.418 [2024-12-13 05:39:30.204884] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:30.418 malloc0 00:23:30.418 05:39:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:30.418 05:39:30 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:30.418 05:39:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=362294 00:23:30.418 05:39:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 362294 /var/tmp/bdevperf.sock 00:23:30.418 05:39:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 362294 ']' 00:23:30.418 05:39:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:30.418 05:39:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:30.418 05:39:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:30.419 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:30.419 05:39:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:30.419 05:39:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:30.419 [2024-12-13 05:39:30.332386] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:23:30.419 [2024-12-13 05:39:30.332434] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid362294 ] 00:23:30.419 [2024-12-13 05:39:30.407345] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:30.419 [2024-12-13 05:39:30.429240] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:23:30.677 05:39:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:30.677 05:39:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:23:30.677 05:39:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.yVv 00:23:30.936 05:39:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:30.936 [2024-12-13 05:39:30.908540] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:31.194 TLSTESTn1 00:23:31.194 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:31.194 Running I/O for 10 seconds... 
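For reference, the TLS session this 10-second run exercises was wired up in the fips.sh entries just above. A condensed, hand-written equivalent is sketched below; paths are relative to the spdk checkout, and the redirect on the echo line is inferred, since bash xtrace does not record redirections:

    key_path=$(mktemp -t spdk-psk.XXX)    # /tmp/spdk-psk.yVv in this run
    echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > "$key_path"
    chmod 0600 "$key_path"
    scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 "$key_path"
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        -q nqn.2016-06.io.spdk:host1 --psk key0
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The target was configured with the same key file beforehand (setup_nvmf_tgt_conf /tmp/spdk-psk.yVv in the trace above), and the 'TLS support is considered experimental' notice is logged at the moment the controller attaches over that PSK.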
00:23:33.510 5414.00 IOPS, 21.15 MiB/s [2024-12-13T04:39:34.461Z] 5508.50 IOPS, 21.52 MiB/s [2024-12-13T04:39:35.395Z] 5245.33 IOPS, 20.49 MiB/s [2024-12-13T04:39:36.330Z] 5214.50 IOPS, 20.37 MiB/s [2024-12-13T04:39:37.266Z] 5276.40 IOPS, 20.61 MiB/s [2024-12-13T04:39:38.202Z] 5275.83 IOPS, 20.61 MiB/s [2024-12-13T04:39:39.138Z] 5310.86 IOPS, 20.75 MiB/s [2024-12-13T04:39:40.514Z] 5290.50 IOPS, 20.67 MiB/s [2024-12-13T04:39:41.448Z] 5311.89 IOPS, 20.75 MiB/s [2024-12-13T04:39:41.448Z] 5302.50 IOPS, 20.71 MiB/s 00:23:41.433 Latency(us) 00:23:41.433 [2024-12-13T04:39:41.448Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:41.433 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:41.433 Verification LBA range: start 0x0 length 0x2000 00:23:41.433 TLSTESTn1 : 10.02 5301.95 20.71 0.00 0.00 24098.72 4774.77 33454.57 00:23:41.433 [2024-12-13T04:39:41.448Z] =================================================================================================================== 00:23:41.433 [2024-12-13T04:39:41.448Z] Total : 5301.95 20.71 0.00 0.00 24098.72 4774.77 33454.57 00:23:41.433 { 00:23:41.433 "results": [ 00:23:41.433 { 00:23:41.433 "job": "TLSTESTn1", 00:23:41.433 "core_mask": "0x4", 00:23:41.433 "workload": "verify", 00:23:41.433 "status": "finished", 00:23:41.433 "verify_range": { 00:23:41.433 "start": 0, 00:23:41.433 "length": 8192 00:23:41.433 }, 00:23:41.433 "queue_depth": 128, 00:23:41.433 "io_size": 4096, 00:23:41.433 "runtime": 10.024622, 00:23:41.433 "iops": 5301.945549667608, 00:23:41.433 "mibps": 20.710724803389095, 00:23:41.433 "io_failed": 0, 00:23:41.433 "io_timeout": 0, 00:23:41.433 "avg_latency_us": 24098.71836609775, 00:23:41.433 "min_latency_us": 4774.765714285714, 00:23:41.433 "max_latency_us": 33454.56761904762 00:23:41.433 } 00:23:41.433 ], 00:23:41.433 "core_count": 1 00:23:41.433 } 00:23:41.433 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:23:41.433 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:23:41.433 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:23:41.433 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:23:41.433 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:23:41.433 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:23:41.433 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:23:41.433 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:23:41.433 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:23:41.433 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:23:41.433 nvmf_trace.0 00:23:41.434 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:23:41.434 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 362294 00:23:41.434 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 362294 ']' 00:23:41.434 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@958 -- # kill -0 362294 00:23:41.434 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:23:41.434 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:41.434 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 362294 00:23:41.434 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:41.434 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:41.434 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 362294' 00:23:41.434 killing process with pid 362294 00:23:41.434 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 362294 00:23:41.434 Received shutdown signal, test time was about 10.000000 seconds 00:23:41.434 00:23:41.434 Latency(us) 00:23:41.434 [2024-12-13T04:39:41.449Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:41.434 [2024-12-13T04:39:41.449Z] =================================================================================================================== 00:23:41.434 [2024-12-13T04:39:41.449Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:41.434 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 362294 00:23:41.434 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:23:41.434 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:41.434 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:23:41.434 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:41.434 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:23:41.434 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:41.434 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:41.693 rmmod nvme_tcp 00:23:41.693 rmmod nvme_fabrics 00:23:41.693 rmmod nvme_keyring 00:23:41.693 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:41.693 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:23:41.693 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:23:41.693 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 362262 ']' 00:23:41.693 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 362262 00:23:41.693 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 362262 ']' 00:23:41.693 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 362262 00:23:41.693 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:23:41.693 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:41.693 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 362262 00:23:41.693 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:41.693 05:39:41 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:41.693 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 362262' 00:23:41.693 killing process with pid 362262 00:23:41.693 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 362262 00:23:41.693 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 362262 00:23:41.952 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:41.952 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:41.952 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:41.952 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:23:41.952 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:23:41.952 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:41.952 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:23:41.952 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:41.952 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:41.952 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:41.952 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:41.952 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:43.860 05:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:43.860 05:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.yVv 00:23:43.860 00:23:43.860 real 0m20.406s 00:23:43.860 user 0m21.576s 00:23:43.860 sys 0m9.274s 00:23:43.860 05:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:43.860 05:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:43.860 ************************************ 00:23:43.860 END TEST nvmf_fips 00:23:43.860 ************************************ 00:23:43.860 05:39:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:23:43.860 05:39:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:43.860 05:39:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:43.860 05:39:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:43.860 ************************************ 00:23:43.860 START TEST nvmf_control_msg_list 00:23:43.860 ************************************ 00:23:43.860 05:39:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:23:44.121 * Looking for test storage... 
00:23:44.121 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:44.121 05:39:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:44.121 05:39:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lcov --version 00:23:44.121 05:39:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:44.121 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:44.121 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:44.121 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:44.121 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:44.121 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:23:44.121 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:23:44.121 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:23:44.121 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:23:44.121 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:23:44.121 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:23:44.121 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:23:44.121 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:44.121 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:23:44.121 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:23:44.121 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:44.121 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:44.121 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:23:44.121 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:23:44.121 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:44.121 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:23:44.121 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:23:44.121 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:23:44.121 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:23:44.121 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:44.121 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:23:44.121 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:23:44.121 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:44.121 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:44.121 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:23:44.121 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:44.121 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:44.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:44.121 --rc genhtml_branch_coverage=1 00:23:44.121 --rc genhtml_function_coverage=1 00:23:44.121 --rc genhtml_legend=1 00:23:44.121 --rc geninfo_all_blocks=1 00:23:44.121 --rc geninfo_unexecuted_blocks=1 00:23:44.121 00:23:44.121 ' 00:23:44.122 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:44.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:44.122 --rc genhtml_branch_coverage=1 00:23:44.122 --rc genhtml_function_coverage=1 00:23:44.122 --rc genhtml_legend=1 00:23:44.122 --rc geninfo_all_blocks=1 00:23:44.122 --rc geninfo_unexecuted_blocks=1 00:23:44.122 00:23:44.122 ' 00:23:44.122 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:44.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:44.122 --rc genhtml_branch_coverage=1 00:23:44.122 --rc genhtml_function_coverage=1 00:23:44.122 --rc genhtml_legend=1 00:23:44.122 --rc geninfo_all_blocks=1 00:23:44.122 --rc geninfo_unexecuted_blocks=1 00:23:44.122 00:23:44.122 ' 00:23:44.122 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:44.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:44.122 --rc genhtml_branch_coverage=1 00:23:44.122 --rc genhtml_function_coverage=1 00:23:44.122 --rc genhtml_legend=1 00:23:44.122 --rc geninfo_all_blocks=1 00:23:44.122 --rc geninfo_unexecuted_blocks=1 00:23:44.122 00:23:44.122 ' 00:23:44.122 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:44.122 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:23:44.122 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:44.122 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:44.122 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:44.122 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:44.122 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:44.122 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:44.122 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:44.122 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:44.122 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:44.122 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:44.122 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:23:44.122 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:23:44.122 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:44.122 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:44.122 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:44.122 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:44.122 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:44.122 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:23:44.122 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:44.122 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:44.122 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:44.122 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:44.122 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:44.122 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:44.122 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:23:44.122 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:44.122 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:23:44.122 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:44.122 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:44.122 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:44.122 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:44.122 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:44.122 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:44.122 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:44.122 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:44.122 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:44.122 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:44.122 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:23:44.122 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:44.122 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:44.122 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:44.122 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:44.122 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:44.122 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:44.122 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:44.122 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:44.122 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:44.122 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:44.122 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:23:44.122 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:50.693 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:50.693 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:23:50.693 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:50.693 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:50.693 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:50.693 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:50.693 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:50.693 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:23:50.693 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:50.693 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:23:50.693 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:23:50.694 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:23:50.694 05:39:49 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:23:50.694 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:23:50.694 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:23:50.694 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:50.694 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:50.694 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:50.694 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:50.694 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:50.694 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:50.694 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:50.694 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:50.694 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:50.694 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:50.694 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:50.694 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:50.694 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:50.694 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:50.694 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:50.694 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:50.694 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:50.694 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:50.694 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:50.694 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:50.694 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:50.694 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:50.694 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:50.694 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:50.694 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:50.694 05:39:49 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:50.694 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:50.694 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:50.694 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:50.694 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:50.694 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:50.694 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:50.694 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:50.694 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:50.694 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:50.694 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:50.694 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:50.694 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:50.694 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:50.694 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:50.694 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:50.694 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:50.694 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:50.694 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:50.694 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:50.694 Found net devices under 0000:af:00.0: cvl_0_0 00:23:50.694 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:50.694 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:50.694 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:50.694 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:50.694 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:50.694 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:50.694 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:50.694 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:50.694 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:50.694 Found net devices under 0000:af:00.1: cvl_0_1 00:23:50.694 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:50.694 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:50.694 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:23:50.694 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:50.694 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:50.694 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:50.694 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:50.694 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:50.694 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:50.694 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:50.694 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:50.694 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:50.694 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:50.694 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:50.694 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:50.694 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:50.694 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:50.694 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:50.694 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:50.694 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:50.694 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:50.694 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:50.694 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:50.694 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:50.694 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:50.694 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:50.694 05:39:49 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:50.694 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:50.694 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:50.694 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:50.694 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.334 ms 00:23:50.694 00:23:50.694 --- 10.0.0.2 ping statistics --- 00:23:50.694 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:50.694 rtt min/avg/max/mdev = 0.334/0.334/0.334/0.000 ms 00:23:50.694 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:50.694 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:50.694 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.184 ms 00:23:50.694 00:23:50.694 --- 10.0.0.1 ping statistics --- 00:23:50.694 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:50.694 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:23:50.694 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:50.694 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:23:50.694 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:50.694 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:50.694 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:50.694 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:50.694 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:50.694 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:50.694 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:50.695 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:23:50.695 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:50.695 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:50.695 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:50.695 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=367544 00:23:50.695 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:50.695 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 367544 00:23:50.695 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 367544 ']' 00:23:50.695 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 
-- # local rpc_addr=/var/tmp/spdk.sock 00:23:50.695 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:50.695 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:50.695 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:50.695 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:50.695 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:50.695 [2024-12-13 05:39:49.969022] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:23:50.695 [2024-12-13 05:39:49.969064] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:50.695 [2024-12-13 05:39:50.048343] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:50.695 [2024-12-13 05:39:50.070306] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:50.695 [2024-12-13 05:39:50.070344] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:50.695 [2024-12-13 05:39:50.070352] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:50.695 [2024-12-13 05:39:50.070358] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:50.695 [2024-12-13 05:39:50.070363] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
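From here the control_msg_list test follows the bring-up recipe the trace has been walking through: one port of the NIC pair (cvl_0_0) is moved into a private network namespace to play the target, its peer (cvl_0_1) stays in the root namespace as the initiator, connectivity is proven with a ping in each direction, nvmf_tgt is started inside the namespace, and the subsystem is provisioned over the RPC socket. A condensed sketch assembled from the commands visible in the trace — rpc_cmd is the harness wrapper, and calling scripts/rpc.py directly (paths relative to the spdk checkout) is assumed to be equivalent:

    #!/usr/bin/env bash
    # Target-side namespace plumbing, as traced above.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator IP
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target IP
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # Open TCP/4420 for the initiator, tagged so teardown can find the rule.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

    # Prove both directions before starting the target.
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

    # Start the target inside the namespace, then provision it.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &

    rpc=./scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o --in-capsule-data-size 768 --control-msg-num 1
    $rpc nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a
    $rpc bdev_malloc_create -b Malloc0 32 512
    $rpc nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

The --control-msg-num 1 cap appears to be the point of this test: with a single control message buffer, one of the three concurrent spdk_nvme_perf initiators ends up starved, which would explain why one latency table below reports roughly 41 ms averages at 25 IOPS while the other two complete about 6600 IOPS at around 150 us.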
00:23:50.695 [2024-12-13 05:39:50.070868] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:23:50.695 05:39:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:50.695 05:39:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:23:50.695 05:39:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:50.695 05:39:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:50.695 05:39:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:50.695 05:39:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:50.695 05:39:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:23:50.695 05:39:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:23:50.695 05:39:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:23:50.695 05:39:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:50.695 05:39:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:50.695 [2024-12-13 05:39:50.214242] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:50.695 05:39:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:50.695 05:39:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:23:50.695 05:39:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:50.695 05:39:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:50.695 05:39:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:50.695 05:39:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:23:50.695 05:39:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:50.695 05:39:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:50.695 Malloc0 00:23:50.695 05:39:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:50.695 05:39:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:23:50.695 05:39:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:50.695 05:39:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:50.695 05:39:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:50.695 05:39:50 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:50.695 05:39:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:50.695 05:39:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:50.695 [2024-12-13 05:39:50.254616] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:50.695 05:39:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:50.695 05:39:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=367575 00:23:50.695 05:39:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:50.695 05:39:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=367577 00:23:50.695 05:39:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:50.695 05:39:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=367578 00:23:50.695 05:39:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:50.695 05:39:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 367575 00:23:50.695 [2024-12-13 05:39:50.343121] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:23:50.695 [2024-12-13 05:39:50.353232] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:23:50.695 [2024-12-13 05:39:50.353380] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:23:51.632 Initializing NVMe Controllers 00:23:51.632 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:23:51.632 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:23:51.632 Initialization complete. Launching workers. 
00:23:51.632 ======================================================== 00:23:51.632 Latency(us) 00:23:51.632 Device Information : IOPS MiB/s Average min max 00:23:51.632 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 25.00 0.10 40968.80 40601.77 41906.23 00:23:51.632 ======================================================== 00:23:51.632 Total : 25.00 0.10 40968.80 40601.77 41906.23 00:23:51.632 00:23:51.632 Initializing NVMe Controllers 00:23:51.632 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:23:51.632 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:23:51.632 Initialization complete. Launching workers. 00:23:51.632 ======================================================== 00:23:51.632 Latency(us) 00:23:51.632 Device Information : IOPS MiB/s Average min max 00:23:51.632 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 6609.99 25.82 150.94 132.78 347.74 00:23:51.632 ======================================================== 00:23:51.632 Total : 6609.99 25.82 150.94 132.78 347.74 00:23:51.632 00:23:51.632 Initializing NVMe Controllers 00:23:51.633 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:23:51.633 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:23:51.633 Initialization complete. Launching workers. 00:23:51.633 ======================================================== 00:23:51.633 Latency(us) 00:23:51.633 Device Information : IOPS MiB/s Average min max 00:23:51.633 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 6639.00 25.93 150.28 119.56 407.94 00:23:51.633 ======================================================== 00:23:51.633 Total : 6639.00 25.93 150.28 119.56 407.94 00:23:51.633 00:23:51.633 05:39:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 367577 00:23:51.633 05:39:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 367578 00:23:51.633 05:39:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:23:51.633 05:39:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:23:51.633 05:39:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:51.633 05:39:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:23:51.633 05:39:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:51.633 05:39:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:23:51.633 05:39:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:51.633 05:39:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:51.633 rmmod nvme_tcp 00:23:51.633 rmmod nvme_fabrics 00:23:51.633 rmmod nvme_keyring 00:23:51.633 05:39:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:51.633 05:39:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:23:51.633 05:39:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:23:51.633 05:39:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # 
'[' -n 367544 ']' 00:23:51.633 05:39:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 367544 00:23:51.633 05:39:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 367544 ']' 00:23:51.633 05:39:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 367544 00:23:51.633 05:39:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:23:51.633 05:39:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:51.633 05:39:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 367544 00:23:51.894 05:39:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:51.894 05:39:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:51.894 05:39:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 367544' 00:23:51.894 killing process with pid 367544 00:23:51.894 05:39:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 367544 00:23:51.894 05:39:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 367544 00:23:51.894 05:39:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:51.894 05:39:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:51.894 05:39:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:51.894 05:39:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:23:51.894 05:39:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:23:51.894 05:39:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:23:51.894 05:39:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:51.894 05:39:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:51.894 05:39:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:51.894 05:39:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:51.894 05:39:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:51.894 05:39:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:54.430 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:54.430 00:23:54.430 real 0m10.027s 00:23:54.430 user 0m6.647s 00:23:54.430 sys 0m5.421s 00:23:54.430 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:54.430 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:54.430 ************************************ 00:23:54.430 END TEST nvmf_control_msg_list 00:23:54.430 ************************************ 00:23:54.430 
05:39:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:23:54.430 05:39:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:54.430 05:39:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:54.430 05:39:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:54.430 ************************************ 00:23:54.430 START TEST nvmf_wait_for_buf 00:23:54.430 ************************************ 00:23:54.430 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:23:54.430 * Looking for test storage... 00:23:54.430 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:54.430 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:54.430 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lcov --version 00:23:54.430 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:54.430 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:54.430 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:54.430 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:54.430 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:54.430 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:23:54.430 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:23:54.430 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:23:54.430 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:23:54.430 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:23:54.430 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:23:54.430 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:23:54.430 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:54.430 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:23:54.430 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:23:54.430 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:54.430 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:54.430 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:23:54.430 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:23:54.430 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:54.430 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:23:54.430 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:23:54.430 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:23:54.430 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:23:54.430 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:54.430 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:23:54.430 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:23:54.430 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:54.430 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:54.430 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:23:54.430 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:54.430 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:54.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:54.430 --rc genhtml_branch_coverage=1 00:23:54.430 --rc genhtml_function_coverage=1 00:23:54.430 --rc genhtml_legend=1 00:23:54.430 --rc geninfo_all_blocks=1 00:23:54.430 --rc geninfo_unexecuted_blocks=1 00:23:54.430 00:23:54.430 ' 00:23:54.430 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:54.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:54.430 --rc genhtml_branch_coverage=1 00:23:54.430 --rc genhtml_function_coverage=1 00:23:54.430 --rc genhtml_legend=1 00:23:54.430 --rc geninfo_all_blocks=1 00:23:54.430 --rc geninfo_unexecuted_blocks=1 00:23:54.430 00:23:54.430 ' 00:23:54.430 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:54.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:54.430 --rc genhtml_branch_coverage=1 00:23:54.430 --rc genhtml_function_coverage=1 00:23:54.430 --rc genhtml_legend=1 00:23:54.430 --rc geninfo_all_blocks=1 00:23:54.431 --rc geninfo_unexecuted_blocks=1 00:23:54.431 00:23:54.431 ' 00:23:54.431 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:54.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:54.431 --rc genhtml_branch_coverage=1 00:23:54.431 --rc genhtml_function_coverage=1 00:23:54.431 --rc genhtml_legend=1 00:23:54.431 --rc geninfo_all_blocks=1 00:23:54.431 --rc geninfo_unexecuted_blocks=1 00:23:54.431 00:23:54.431 ' 00:23:54.431 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:54.431 05:39:54 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:23:54.431 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:54.431 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:54.431 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:54.431 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:54.431 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:54.431 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:54.431 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:54.431 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:54.431 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:54.431 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:54.431 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:23:54.431 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:23:54.431 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:54.431 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:54.431 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:54.431 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:54.431 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:54.431 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:23:54.431 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:54.431 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:54.431 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:54.431 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:54.431 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:54.431 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:54.431 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:23:54.431 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:54.431 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:23:54.431 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:54.431 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:54.431 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:54.431 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:54.431 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:54.431 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:54.431 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:54.431 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:54.431 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:54.431 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:54.431 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:23:54.431 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # 
'[' -z tcp ']' 00:23:54.431 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:54.431 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:54.431 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:54.431 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:54.431 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:54.431 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:54.431 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:54.431 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:54.431 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:54.431 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:23:54.431 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:01.000 05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:01.000 05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:24:01.000 05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:01.000 05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:01.000 05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:01.000 05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:01.000 05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:01.000 05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:24:01.000 05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:01.000 05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:24:01.000 05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:24:01.000 05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:24:01.000 05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:24:01.000 05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:24:01.000 05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:24:01.000 05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:01.000 05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:01.000 05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:01.000 05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:01.000 
05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:01.000 05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:01.000 05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:01.000 05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:01.000 05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:01.000 05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:01.000 05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:01.000 05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:01.000 05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:01.000 05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:01.000 05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:01.000 05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:01.000 05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:01.000 05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:01.000 05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:01.000 05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:01.000 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:01.000 05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:01.000 05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:01.000 05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:01.000 05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:01.000 05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:01.000 05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:01.000 05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:01.000 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:01.000 05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:01.000 05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:01.000 05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:01.000 05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:01.000 05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:24:01.000 05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:01.000 05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:01.000 05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:01.000 05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:01.000 05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:01.000 05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:01.000 05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:01.001 05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:01.001 05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:01.001 05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:01.001 05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:01.001 Found net devices under 0000:af:00.0: cvl_0_0 00:24:01.001 05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:01.001 05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:01.001 05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:01.001 05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:01.001 05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:01.001 05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:01.001 05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:01.001 05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:01.001 05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:01.001 Found net devices under 0000:af:00.1: cvl_0_1 00:24:01.001 05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:01.001 05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:01.001 05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:24:01.001 05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:01.001 05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:01.001 05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:01.001 05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:01.001 05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:01.001 05:39:59 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:01.001 05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:01.001 05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:01.001 05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:01.001 05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:01.001 05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:01.001 05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:01.001 05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:01.001 05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:01.001 05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:01.001 05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:01.001 05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:01.001 05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:01.001 05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:01.001 05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:01.001 05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:01.001 05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:01.001 05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:01.001 05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:01.001 05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:01.001 05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:01.001 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:01.001 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.264 ms 00:24:01.001 00:24:01.001 --- 10.0.0.2 ping statistics --- 00:24:01.001 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:01.001 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms 00:24:01.001 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:01.001 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:01.001 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.072 ms 00:24:01.001 00:24:01.001 --- 10.0.0.1 ping statistics --- 00:24:01.001 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:01.001 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:24:01.001 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:01.001 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:24:01.001 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:01.001 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:01.001 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:01.001 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:01.001 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:01.001 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:01.001 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:01.001 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:24:01.001 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:01.001 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:01.001 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:01.001 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=371249 00:24:01.001 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 371249 00:24:01.001 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:24:01.001 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 371249 ']' 00:24:01.001 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:01.001 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:01.001 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:01.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:01.001 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:01.001 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:01.001 [2024-12-13 05:40:00.109120] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
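[Editor's note] The nvmftestinit trace above is the interesting part of the setup: one port of the E810 pair (cvl_0_0) is moved into a private network namespace, cvl_0_0_ns_spdk, and addressed 10.0.0.2/24 to act as the NVMe/TCP target, while its sibling cvl_0_1 stays in the default namespace as the initiator at 10.0.0.1/24; the two cross-namespace pings then prove the loop works before any SPDK process starts. A minimal sketch of the same sequence, mirroring the commands visible in the trace (nvmf/common.sh runs these itself; the interface names and addresses are this rig's, not fixed defaults):

    ip netns add cvl_0_0_ns_spdk                        # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move one physical port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator IP, default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP listener port toward the initiator side; the comment tag
    # is what lets the later teardown strip the rule again with
    # iptables-save | grep -v SPDK_NVMF | iptables-restore
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF: ...'
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator

From this point on, every target-side command in the trace is wrapped in "ip netns exec cvl_0_0_ns_spdk" (the NVMF_TARGET_NS_CMD prefix that common.sh@293 folds into NVMF_APP).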
00:24:01.001 [2024-12-13 05:40:00.109165] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:01.001 [2024-12-13 05:40:00.187234] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:01.001 [2024-12-13 05:40:00.208805] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:01.001 [2024-12-13 05:40:00.208843] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:01.001 [2024-12-13 05:40:00.208851] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:01.001 [2024-12-13 05:40:00.208857] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:01.001 [2024-12-13 05:40:00.208863] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:01.001 [2024-12-13 05:40:00.209352] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:24:01.001 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:01.001 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:24:01.001 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:01.001 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:01.001 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:01.002 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:01.002 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:24:01.002 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:24:01.002 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:24:01.002 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.002 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:01.002 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.002 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:24:01.002 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.002 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:01.002 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.002 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:24:01.002 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.002 05:40:00 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:01.002 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.002 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:24:01.002 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.002 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:01.002 Malloc0 00:24:01.002 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.002 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:24:01.002 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.002 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:01.002 [2024-12-13 05:40:00.407411] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:01.002 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.002 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:24:01.002 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.002 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:01.002 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.002 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:24:01.002 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.002 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:01.002 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.002 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:01.002 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.002 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:01.002 [2024-12-13 05:40:00.435625] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:01.002 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.002 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:01.002 [2024-12-13 05:40:00.519533] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the 
discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:24:01.937 Initializing NVMe Controllers
00:24:01.937 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0
00:24:01.937 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0
00:24:01.937 Initialization complete. Launching workers.
00:24:01.937 ========================================================
00:24:01.937 Latency(us)
00:24:01.937 Device Information : IOPS MiB/s Average min max
00:24:01.937 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 127.92 15.99 32366.94 7271.98 63844.73
00:24:01.937 ========================================================
00:24:01.937 Total : 127.92 15.99 32366.94 7271.98 63844.73
00:24:01.937
00:24:01.937 05:40:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats
00:24:01.937 05:40:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry'
00:24:01.937 05:40:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:01.937 05:40:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x
00:24:01.937 05:40:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:02.196 05:40:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=2022
00:24:02.196 05:40:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 2022 -eq 0 ]]
00:24:02.196 05:40:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT
00:24:02.196 05:40:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini
00:24:02.196 05:40:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup
00:24:02.196 05:40:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync
00:24:02.196 05:40:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:24:02.196 05:40:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e
00:24:02.196 05:40:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20}
00:24:02.196 05:40:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:24:02.196 rmmod nvme_tcp
00:24:02.196 rmmod nvme_fabrics
00:24:02.196 rmmod nvme_keyring
00:24:02.196 05:40:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:24:02.197 05:40:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e
00:24:02.197 05:40:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0
00:24:02.197 05:40:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 371249 ']'
00:24:02.197 05:40:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 371249
00:24:02.197 05:40:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 371249 ']'
00:24:02.197 05:40:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 371249
00:24:02.197 05:40:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf --
common/autotest_common.sh@959 -- # uname 00:24:02.197 05:40:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:02.197 05:40:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 371249 00:24:02.197 05:40:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:02.197 05:40:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:02.197 05:40:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 371249' 00:24:02.197 killing process with pid 371249 00:24:02.197 05:40:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 371249 00:24:02.197 05:40:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 371249 00:24:02.456 05:40:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:02.456 05:40:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:02.456 05:40:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:02.456 05:40:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:24:02.456 05:40:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:24:02.456 05:40:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:02.456 05:40:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:24:02.456 05:40:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:02.456 05:40:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:02.456 05:40:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:02.456 05:40:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:02.456 05:40:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:04.362 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:04.362 00:24:04.362 real 0m10.342s 00:24:04.362 user 0m3.933s 00:24:04.362 sys 0m4.862s 00:24:04.362 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:04.362 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:04.362 ************************************ 00:24:04.362 END TEST nvmf_wait_for_buf 00:24:04.362 ************************************ 00:24:04.362 05:40:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 1 -eq 1 ']' 00:24:04.362 05:40:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@48 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:24:04.362 05:40:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:04.362 05:40:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:04.362 05:40:04 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:24:04.622 ************************************ 00:24:04.622 START TEST nvmf_fuzz 00:24:04.622 ************************************ 00:24:04.622 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:24:04.622 * Looking for test storage... 00:24:04.622 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:04.622 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:04.622 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1711 -- # lcov --version 00:24:04.622 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:04.622 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:04.622 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:04.622 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:04.622 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:04.622 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:24:04.622 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:24:04.622 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:24:04.622 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:24:04.622 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:24:04.622 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:24:04.622 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:24:04.622 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:04.622 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:24:04.622 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@345 -- # : 1 00:24:04.622 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:04.622 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:04.622 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # decimal 1 00:24:04.622 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=1 00:24:04.622 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:04.622 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 1 00:24:04.622 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:24:04.622 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # decimal 2 00:24:04.622 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=2 00:24:04.622 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:04.622 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 2 00:24:04.622 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:24:04.622 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:04.622 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:04.622 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # return 0 00:24:04.623 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:04.623 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:04.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:04.623 --rc genhtml_branch_coverage=1 00:24:04.623 --rc genhtml_function_coverage=1 00:24:04.623 --rc genhtml_legend=1 00:24:04.623 --rc geninfo_all_blocks=1 00:24:04.623 --rc geninfo_unexecuted_blocks=1 00:24:04.623 00:24:04.623 ' 00:24:04.623 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:04.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:04.623 --rc genhtml_branch_coverage=1 00:24:04.623 --rc genhtml_function_coverage=1 00:24:04.623 --rc genhtml_legend=1 00:24:04.623 --rc geninfo_all_blocks=1 00:24:04.623 --rc geninfo_unexecuted_blocks=1 00:24:04.623 00:24:04.623 ' 00:24:04.623 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:04.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:04.623 --rc genhtml_branch_coverage=1 00:24:04.623 --rc genhtml_function_coverage=1 00:24:04.623 --rc genhtml_legend=1 00:24:04.623 --rc geninfo_all_blocks=1 00:24:04.623 --rc geninfo_unexecuted_blocks=1 00:24:04.623 00:24:04.623 ' 00:24:04.623 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:04.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:04.623 --rc genhtml_branch_coverage=1 00:24:04.623 --rc genhtml_function_coverage=1 00:24:04.623 --rc genhtml_legend=1 00:24:04.623 --rc geninfo_all_blocks=1 00:24:04.623 --rc geninfo_unexecuted_blocks=1 00:24:04.623 00:24:04.623 ' 00:24:04.623 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:04.623 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:24:04.623 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ 
Linux == FreeBSD ]] 00:24:04.623 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:04.623 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:04.623 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:04.623 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:04.623 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:04.623 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:04.623 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:04.623 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:04.623 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:04.623 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:24:04.623 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:24:04.623 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:04.623 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:04.623 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:04.623 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:04.623 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:04.623 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:24:04.623 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:04.623 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:04.623 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:04.623 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:04.623 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:04.623 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:04.623 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:24:04.623 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:04.623 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@51 -- # : 0 00:24:04.623 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:04.623 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:04.623 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:04.623 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:04.623 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:04.623 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:04.623 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:04.623 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:04.623 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:04.623 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:04.623 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:24:04.623 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:04.623 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@474 -- # trap nvmftestfini 
SIGINT SIGTERM EXIT 00:24:04.623 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:04.623 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:04.623 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:04.623 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:04.623 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:04.623 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:04.623 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:04.623 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:04.623 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@309 -- # xtrace_disable 00:24:04.623 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:10.168 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:10.168 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # pci_devs=() 00:24:10.168 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:10.168 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:10.168 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:10.168 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:10.168 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:10.168 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # net_devs=() 00:24:10.168 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:10.168 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # e810=() 00:24:10.168 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # local -ga e810 00:24:10.168 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # x722=() 00:24:10.168 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # local -ga x722 00:24:10.168 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # mlx=() 00:24:10.168 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # local -ga mlx 00:24:10.168 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:10.168 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:10.168 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:10.168 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:10.168 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:10.168 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:10.168 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:10.168 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:10.168 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:10.168 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:10.168 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:10.168 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:10.168 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:10.168 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:10.168 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:10.168 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:10.168 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:10.168 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:10.168 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:10.168 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:10.168 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:10.168 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:10.168 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:10.168 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:10.168 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:10.168 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:10.168 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:10.169 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:10.169 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:10.169 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:10.169 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:10.169 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:10.169 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:10.169 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:10.169 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:10.169 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:10.169 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:10.169 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:10.169 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:10.169 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:10.169 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:10.169 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:10.169 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:10.169 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:10.169 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:10.169 Found net devices under 0000:af:00.0: cvl_0_0 00:24:10.169 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:10.169 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:10.169 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:10.169 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:10.169 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:10.169 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:10.169 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:10.169 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:10.169 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:10.169 Found net devices under 0000:af:00.1: cvl_0_1 00:24:10.169 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:10.169 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:10.169 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # is_hw=yes 00:24:10.169 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:10.169 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:10.169 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:10.169 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:10.169 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:10.169 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:10.169 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:10.169 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:10.169 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:10.169 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:10.169 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:10.169 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:24:10.169 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:10.169 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:10.169 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:10.169 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:10.169 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:10.169 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:10.428 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:10.428 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:10.428 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:10.428 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:10.428 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:10.428 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:10.428 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:10.428 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:10.428 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:10.428 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.312 ms 00:24:10.428 00:24:10.428 --- 10.0.0.2 ping statistics --- 00:24:10.428 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:10.428 rtt min/avg/max/mdev = 0.312/0.312/0.312/0.000 ms 00:24:10.428 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:10.428 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:10.428 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.132 ms 00:24:10.428 00:24:10.428 --- 10.0.0.1 ping statistics --- 00:24:10.428 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:10.428 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:24:10.428 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:10.428 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@450 -- # return 0 00:24:10.428 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:10.428 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:10.428 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:10.428 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:10.428 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:10.428 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:10.428 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:10.687 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=375161 00:24:10.687 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:24:10.687 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:24:10.687 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 375161 00:24:10.687 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@835 -- # '[' -z 375161 ']' 00:24:10.687 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:10.687 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:10.687 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:10.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
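The block above is nvmf_tcp_init from nvmf/common.sh: one port of the dual-port NIC (cvl_0_0) is moved into a fresh network namespace to serve as the target at 10.0.0.2, while its sibling (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, giving a real-NIC TCP path on a single host. A minimal sketch of the same wiring, with interface names and addresses taken from this log:

ip netns add cvl_0_0_ns_spdk                        # namespace for the target side
ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move one port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator IP, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
ping -c 1 10.0.0.2                                  # root ns -> target, as above
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> initiator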
00:24:10.687 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:10.687 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:10.946 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:10.946 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@868 -- # return 0 00:24:10.946 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:10.946 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.946 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:10.946 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.946 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:24:10.946 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.946 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:10.946 Malloc0 00:24:10.946 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.946 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:10.946 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.946 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:10.946 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.946 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:10.946 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.946 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:10.946 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.946 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:10.946 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.946 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:10.946 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.946 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:24:10.946 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:24:43.025 Fuzzing completed. 
Shutting down the fuzz application 00:24:43.025 00:24:43.025 Dumping successful admin opcodes: 00:24:43.025 9, 10, 00:24:43.025 Dumping successful io opcodes: 00:24:43.025 0, 9, 00:24:43.025 NS: 0x2000008eff00 I/O qp, Total commands completed: 1001104, total successful commands: 5862, random_seed: 1346692672 00:24:43.025 NS: 0x2000008eff00 admin qp, Total commands completed: 129392, total successful commands: 29, random_seed: 121988224 00:24:43.025 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:24:43.025 Fuzzing completed. Shutting down the fuzz application 00:24:43.025 00:24:43.025 Dumping successful admin opcodes: 00:24:43.025 00:24:43.025 Dumping successful io opcodes: 00:24:43.025 00:24:43.025 NS: 0x2000008eff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 741875566 00:24:43.025 NS: 0x2000008eff00 admin qp, Total commands completed: 16, total successful commands: 0, random_seed: 741943122 00:24:43.025 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:43.025 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.025 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:43.025 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.025 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:24:43.025 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:24:43.025 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:43.025 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@121 -- # sync 00:24:43.025 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:43.025 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@124 -- # set +e 00:24:43.025 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:43.025 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:43.025 rmmod nvme_tcp 00:24:43.025 rmmod nvme_fabrics 00:24:43.025 rmmod nvme_keyring 00:24:43.025 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:43.025 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@128 -- # set -e 00:24:43.025 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@129 -- # return 0 00:24:43.025 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@517 -- # '[' -n 375161 ']' 00:24:43.025 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@518 -- # killprocess 375161 00:24:43.025 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@954 -- # '[' -z 375161 ']' 00:24:43.025 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@958 -- # kill -0 375161 00:24:43.025 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@959 -- # uname 00:24:43.025 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:43.025 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 375161 00:24:43.025 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:43.025 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:43.025 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 375161' 00:24:43.025 killing process with pid 375161 00:24:43.025 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@973 -- # kill 375161 00:24:43.025 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@978 -- # wait 375161 00:24:43.025 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:43.025 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:43.025 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:43.025 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@297 -- # iptr 00:24:43.025 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # iptables-save 00:24:43.025 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:43.025 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # iptables-restore 00:24:43.025 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:43.025 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:43.025 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:43.025 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:43.025 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:44.931 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:44.931 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:24:44.931 00:24:44.931 real 0m40.414s 00:24:44.931 user 0m53.968s 00:24:44.931 sys 0m15.445s 00:24:44.931 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:44.931 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:44.931 ************************************ 00:24:44.931 END TEST nvmf_fuzz 00:24:44.931 ************************************ 00:24:44.931 05:40:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@49 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:24:44.931 05:40:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:44.931 05:40:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:44.931 05:40:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:44.931 ************************************ 00:24:44.931 START TEST 
nvmf_multiconnection 00:24:44.931 ************************************ 00:24:44.931 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:24:45.191 * Looking for test storage... 00:24:45.191 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:45.191 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:45.191 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1711 -- # lcov --version 00:24:45.191 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:45.191 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:45.191 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:45.191 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:45.191 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:45.191 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # IFS=.-: 00:24:45.191 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # read -ra ver1 00:24:45.191 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # IFS=.-: 00:24:45.191 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # read -ra ver2 00:24:45.191 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@338 -- # local 'op=<' 00:24:45.191 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@340 -- # ver1_l=2 00:24:45.191 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@341 -- # ver2_l=1 00:24:45.191 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:45.191 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@344 -- # case "$op" in 00:24:45.191 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@345 -- # : 1 00:24:45.191 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:45.191 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:45.191 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # decimal 1 00:24:45.191 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=1 00:24:45.191 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:45.191 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 1 00:24:45.191 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # ver1[v]=1 00:24:45.191 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # decimal 2 00:24:45.191 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=2 00:24:45.191 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:45.191 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 2 00:24:45.191 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # ver2[v]=2 00:24:45.191 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:45.191 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:45.191 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # return 0 00:24:45.192 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:45.192 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:45.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:45.192 --rc genhtml_branch_coverage=1 00:24:45.192 --rc genhtml_function_coverage=1 00:24:45.192 --rc genhtml_legend=1 00:24:45.192 --rc geninfo_all_blocks=1 00:24:45.192 --rc geninfo_unexecuted_blocks=1 00:24:45.192 00:24:45.192 ' 00:24:45.192 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:45.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:45.192 --rc genhtml_branch_coverage=1 00:24:45.192 --rc genhtml_function_coverage=1 00:24:45.192 --rc genhtml_legend=1 00:24:45.192 --rc geninfo_all_blocks=1 00:24:45.192 --rc geninfo_unexecuted_blocks=1 00:24:45.192 00:24:45.192 ' 00:24:45.192 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:45.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:45.192 --rc genhtml_branch_coverage=1 00:24:45.192 --rc genhtml_function_coverage=1 00:24:45.192 --rc genhtml_legend=1 00:24:45.192 --rc geninfo_all_blocks=1 00:24:45.192 --rc geninfo_unexecuted_blocks=1 00:24:45.192 00:24:45.192 ' 00:24:45.192 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:45.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:45.192 --rc genhtml_branch_coverage=1 00:24:45.192 --rc genhtml_function_coverage=1 00:24:45.192 --rc genhtml_legend=1 00:24:45.192 --rc geninfo_all_blocks=1 00:24:45.192 --rc geninfo_unexecuted_blocks=1 00:24:45.192 00:24:45.192 ' 00:24:45.192 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:45.192 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:24:45.192 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:45.192 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:45.192 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:45.192 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:45.192 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:45.192 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:45.192 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:45.192 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:45.192 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:45.192 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:45.192 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:24:45.192 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:24:45.192 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:45.192 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:45.192 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:45.192 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:45.192 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:45.192 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@15 -- # shopt -s extglob 00:24:45.192 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:45.192 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:45.192 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:45.192 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[...the same three /opt prefixes repeated from earlier sourcing; duplicates elided...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:45.192 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@3 -- # PATH=[...same PATH re-prefixed with /opt/go/1.21.1/bin; duplicates elided...] 00:24:45.192 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=[...same PATH re-prefixed with /opt/protoc/21.7/bin; duplicates elided...] 00:24:45.192 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:24:45.192 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@6 -- # echo [...exported PATH repeated; elided...] 00:24:45.192 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@51 -- # : 0 00:24:45.192 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:45.192 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:45.192 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:45.192 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:45.192 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:45.192 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:45.192 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:45.192 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:45.192 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:45.192 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:45.192 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:45.192 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:45.192 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:24:45.192 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:24:45.192 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:45.192 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:45.192 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:45.192 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:45.192 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:45.192 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:45.192 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:45.192 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:45.192 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:45.192 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:45.192 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@309 -- # xtrace_disable 00:24:45.192 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:51.782 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:51.782 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # pci_devs=() 00:24:51.782 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:51.782 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:51.782 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:51.782 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:51.782 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:51.782 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # net_devs=() 00:24:51.782 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:51.782 05:40:50 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # e810=() 00:24:51.782 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # local -ga e810 00:24:51.782 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # x722=() 00:24:51.782 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # local -ga x722 00:24:51.782 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # mlx=() 00:24:51.782 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # local -ga mlx 00:24:51.782 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:51.782 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:51.782 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:51.782 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:51.782 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:51.782 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:51.782 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:51.782 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:51.782 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:51.782 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:51.782 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:51.782 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:51.782 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:51.782 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:51.782 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:51.782 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:51.782 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:51.782 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:51.782 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:51.782 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:51.782 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:51.782 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:51.782 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:24:51.782 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:51.782 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:51.782 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:51.783 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:51.783 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:51.783 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:51.783 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:51.783 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:51.783 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:51.783 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:51.783 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:51.783 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:51.783 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:51.783 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:51.783 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:51.783 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:51.783 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:51.783 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:51.783 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:51.783 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:51.783 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:51.783 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:51.783 Found net devices under 0000:af:00.0: cvl_0_0 00:24:51.783 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:51.783 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:51.783 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:51.783 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:51.783 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:51.783 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:51.783 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection 
-- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:51.783 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:51.783 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:51.783 Found net devices under 0000:af:00.1: cvl_0_1 00:24:51.783 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:51.783 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:51.783 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # is_hw=yes 00:24:51.783 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:51.783 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:51.783 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:51.783 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:51.783 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:51.783 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:51.783 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:51.783 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:51.783 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:51.783 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:51.783 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:51.783 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:51.783 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:51.783 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:51.783 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:51.783 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:51.783 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:51.783 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:51.783 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:51.783 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:51.783 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:51.783 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set 
cvl_0_0 up 00:24:51.783 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:51.783 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:51.783 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:51.783 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:51.783 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:51.783 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.346 ms 00:24:51.783 00:24:51.783 --- 10.0.0.2 ping statistics --- 00:24:51.783 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:51.783 rtt min/avg/max/mdev = 0.346/0.346/0.346/0.000 ms 00:24:51.783 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:51.783 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:51.783 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms 00:24:51.783 00:24:51.783 --- 10.0.0.1 ping statistics --- 00:24:51.783 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:51.783 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:24:51.783 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:51.783 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@450 -- # return 0 00:24:51.783 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:51.783 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:51.783 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:51.783 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:51.783 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:51.783 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:51.783 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:51.783 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:24:51.783 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:51.783 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:51.783 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:51.783 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@509 -- # nvmfpid=383704 00:24:51.783 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@510 -- # waitforlisten 383704 00:24:51.783 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:51.783 05:40:50 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@835 -- # '[' -z 383704 ']' 00:24:51.783 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:51.783 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:51.783 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:51.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:51.783 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:51.783 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:51.783 [2024-12-13 05:40:51.028802] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:24:51.783 [2024-12-13 05:40:51.028852] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:51.783 [2024-12-13 05:40:51.108629] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:51.783 [2024-12-13 05:40:51.133617] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:51.784 [2024-12-13 05:40:51.133651] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:51.784 [2024-12-13 05:40:51.133659] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:51.784 [2024-12-13 05:40:51.133665] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:51.784 [2024-12-13 05:40:51.133669] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
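The target here runs as ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0xF, so every tracepoint group is enabled, and with core mask 0xF four reactors are about to start (one per core bit, matching the notices that follow). Per the NOTICE lines above, those tracepoints can be inspected while the run is live; a sketch using only the commands the log itself names (the /tmp destination is illustrative):

spdk_trace -s nvmf -i 0            # snapshot of events at runtime; -i 0 matches the target's shm id
cp /dev/shm/nvmf_trace.0 /tmp/     # keep the trace file for offline analysis/debug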
00:24:51.784 [2024-12-13 05:40:51.134967] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:24:51.784 [2024-12-13 05:40:51.135063] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:24:51.784 [2024-12-13 05:40:51.135093] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:24:51.784 [2024-12-13 05:40:51.135094] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:24:51.784 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:51.784 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@868 -- # return 0 00:24:51.784 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:51.784 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:51.784 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:51.784 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:51.784 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:51.784 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.784 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:51.784 [2024-12-13 05:40:51.268014] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:51.784 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.784 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:24:51.784 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:51.784 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:51.784 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.784 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:51.784 Malloc1 00:24:51.784 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.784 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:24:51.784 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.784 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:51.784 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.784 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:51.784 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.784 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 
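rpc_cmd is the harness's wrapper for driving scripts/rpc.py against the target's /var/tmp/spdk.sock socket. The bdev it just created is a 64 MiB RAM-backed disk with 512-byte blocks (MALLOC_BDEV_SIZE=64 and MALLOC_BLOCK_SIZE=512, set earlier in multiconnection.sh); the result check and the cnode1 listener follow immediately below. As a standalone invocation this would be (a sketch; socket path as used throughout this log):

scripts/rpc.py -s /var/tmp/spdk.sock bdev_malloc_create 64 512 -b Malloc1
# positional args: <total_size_mb> <block_size>; -b names the resulting bdev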
00:24:51.784 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.784 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:51.784 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.784 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:51.784 [2024-12-13 05:40:51.336334] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:51.784 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.784 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:51.784 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:24:51.784 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.784 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:51.784 Malloc2 00:24:51.784 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.784 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:24:51.784 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.784 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:51.784 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.784 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:24:51.784 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.784 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:51.784 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.784 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:24:51.784 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.784 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:51.784 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.784 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:51.784 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:24:51.784 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.784 05:40:51 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:51.784 Malloc3 00:24:51.784 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.784 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:24:51.784 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.784 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:51.784 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.784 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:24:51.784 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.784 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:51.784 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.784 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:24:51.784 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.784 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:51.784 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.784 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:51.784 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:24:51.784 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.784 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:51.784 Malloc4 00:24:51.784 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.784 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:24:51.784 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.784 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:51.784 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.784 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:24:51.784 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.784 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:51.784 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.785 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:24:51.785 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.785 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:51.785 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.785 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:51.785 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:24:51.785 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.785 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:51.785 Malloc5 00:24:51.785 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.785 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:24:51.785 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.785 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:51.785 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.785 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:24:51.785 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.785 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:51.785 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.785 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:24:51.785 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.785 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:51.785 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.785 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:51.785 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:24:51.785 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.785 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:51.785 Malloc6 00:24:51.785 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:24:51.785 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:24:51.785 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.785 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:51.785 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.785 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:24:51.785 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.785 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:51.785 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.785 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:24:51.785 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.785 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:51.785 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.785 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:51.785 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:24:51.785 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.785 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:51.785 Malloc7 00:24:51.785 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.785 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:24:51.785 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.785 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:51.785 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.785 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:24:51.785 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.785 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:51.785 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.785 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 
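[Editor's note] The xtrace records above repeat the same four RPCs for each subsystem. As a reading aid, the loop being traced (lines @21-@25 of target/multiconnection.sh) can be condensed as the following sketch, reconstructed from the trace output rather than taken from the script's source; rpc_cmd is the harness's JSON-RPC helper, and NVMF_SUBSYS is 11 in this run:

    # Reconstructed sketch of the setup phase traced above (not verbatim source).
    # For each subsystem: create a 64 MiB malloc bdev with 512-byte blocks,
    # create the subsystem with serial SPDK$i, attach the bdev as a namespace,
    # and expose a TCP listener on 10.0.0.2:4420.
    for i in $(seq 1 $NVMF_SUBSYS); do
        rpc_cmd bdev_malloc_create 64 512 -b Malloc$i
        rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
        rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
    done

The serial number SPDK$i assigned here is what the host side later greps for to confirm each connection, as the connect phase below shows.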
00:24:51.785 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.785 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:51.785 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.785 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:51.785 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:24:51.785 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.785 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:51.785 Malloc8 00:24:51.785 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.785 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:24:51.785 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.785 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:51.785 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.785 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:24:51.785 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.785 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:51.785 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.785 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:24:51.785 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.785 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:51.785 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.785 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:51.785 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:24:51.785 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.785 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:51.785 Malloc9 00:24:51.785 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.785 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:24:51.785 05:40:51 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.785 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:51.785 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.785 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:24:51.785 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.785 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:51.785 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.785 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:24:51.785 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.785 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:51.785 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.785 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:51.786 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:24:51.786 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.786 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:51.786 Malloc10 00:24:51.786 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.786 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:24:51.786 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.786 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:51.786 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.786 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:24:51.786 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.786 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:51.786 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.786 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:24:51.786 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.786 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@10 -- # set +x 00:24:51.786 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.786 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:51.786 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:24:51.786 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.786 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:51.786 Malloc11 00:24:51.786 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.786 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:24:51.786 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.786 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:51.786 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.786 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:24:51.786 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.786 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:51.786 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.786 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:24:51.786 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.786 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:51.786 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.786 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:24:52.045 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:52.045 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:24:52.983 05:40:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:24:52.983 05:40:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:24:52.983 05:40:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:24:52.983 05:40:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:24:52.983 05:40:52 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:24:55.519 05:40:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:24:55.519 05:40:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:24:55.519 05:40:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK1 00:24:55.519 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:24:55.519 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:24:55.519 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:24:55.519 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:55.519 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:24:56.457 05:40:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:24:56.457 05:40:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:24:56.457 05:40:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:24:56.457 05:40:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:24:56.457 05:40:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:24:58.363 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:24:58.363 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:24:58.364 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK2 00:24:58.364 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:24:58.364 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:24:58.364 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:24:58.364 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:58.364 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:24:59.744 05:40:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:24:59.744 05:40:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:24:59.744 05:40:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 
nvme_devices=0 00:24:59.744 05:40:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:24:59.744 05:40:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:01.649 05:41:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:01.649 05:41:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:01.649 05:41:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK3 00:25:01.649 05:41:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:01.649 05:41:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:01.649 05:41:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:01.649 05:41:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:01.649 05:41:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:25:03.028 05:41:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:25:03.028 05:41:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:03.028 05:41:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:03.028 05:41:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:03.028 05:41:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:04.941 05:41:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:04.941 05:41:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:04.941 05:41:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK4 00:25:04.941 05:41:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:04.941 05:41:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:04.941 05:41:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:04.941 05:41:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:04.941 05:41:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:25:06.320 05:41:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:25:06.320 05:41:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # 
local i=0 00:25:06.320 05:41:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:06.320 05:41:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:06.320 05:41:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:08.227 05:41:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:08.227 05:41:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:08.227 05:41:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK5 00:25:08.227 05:41:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:08.227 05:41:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:08.227 05:41:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:08.227 05:41:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:08.227 05:41:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:25:09.605 05:41:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:25:09.605 05:41:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:09.605 05:41:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:09.605 05:41:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:09.605 05:41:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:11.511 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:11.511 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:11.511 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK6 00:25:11.511 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:11.511 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:11.511 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:11.511 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:11.511 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:25:12.890 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@30 -- # waitforserial SPDK7 00:25:12.890 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:12.890 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:12.890 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:12.890 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:14.798 05:41:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:14.798 05:41:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:14.798 05:41:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK7 00:25:14.798 05:41:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:14.798 05:41:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:14.798 05:41:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:14.798 05:41:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:14.798 05:41:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:25:16.176 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:25:16.176 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:16.176 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:16.176 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:16.176 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:18.083 05:41:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:18.083 05:41:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:18.083 05:41:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK8 00:25:18.083 05:41:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:18.083 05:41:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:18.083 05:41:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:18.083 05:41:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:18.083 05:41:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 
--hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:25:19.462 05:41:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:25:19.462 05:41:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:19.462 05:41:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:19.462 05:41:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:19.462 05:41:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:21.370 05:41:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:21.370 05:41:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:21.370 05:41:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK9 00:25:21.370 05:41:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:21.370 05:41:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:21.370 05:41:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:21.370 05:41:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:21.370 05:41:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:25:22.750 05:41:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:25:22.750 05:41:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:22.750 05:41:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:22.750 05:41:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:22.750 05:41:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:24.658 05:41:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:24.658 05:41:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:24.658 05:41:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK10 00:25:24.658 05:41:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:24.658 05:41:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:24.658 05:41:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:24.658 05:41:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:24.658 05:41:24 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:25:26.565 05:41:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:25:26.565 05:41:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:26.565 05:41:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:26.565 05:41:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:26.565 05:41:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:28.471 05:41:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:28.471 05:41:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:28.471 05:41:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK11 00:25:28.471 05:41:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:28.471 05:41:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:28.471 05:41:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:28.471 05:41:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:25:28.471 [global] 00:25:28.471 thread=1 00:25:28.471 invalidate=1 00:25:28.471 rw=read 00:25:28.471 time_based=1 00:25:28.471 runtime=10 00:25:28.471 ioengine=libaio 00:25:28.471 direct=1 00:25:28.471 bs=262144 00:25:28.471 iodepth=64 00:25:28.471 norandommap=1 00:25:28.471 numjobs=1 00:25:28.471 00:25:28.471 [job0] 00:25:28.471 filename=/dev/nvme0n1 00:25:28.471 [job1] 00:25:28.471 filename=/dev/nvme10n1 00:25:28.471 [job2] 00:25:28.471 filename=/dev/nvme1n1 00:25:28.471 [job3] 00:25:28.471 filename=/dev/nvme2n1 00:25:28.471 [job4] 00:25:28.471 filename=/dev/nvme3n1 00:25:28.471 [job5] 00:25:28.471 filename=/dev/nvme4n1 00:25:28.471 [job6] 00:25:28.471 filename=/dev/nvme5n1 00:25:28.471 [job7] 00:25:28.471 filename=/dev/nvme6n1 00:25:28.471 [job8] 00:25:28.471 filename=/dev/nvme7n1 00:25:28.471 [job9] 00:25:28.471 filename=/dev/nvme8n1 00:25:28.471 [job10] 00:25:28.471 filename=/dev/nvme9n1 00:25:28.471 Could not set queue depth (nvme0n1) 00:25:28.471 Could not set queue depth (nvme10n1) 00:25:28.471 Could not set queue depth (nvme1n1) 00:25:28.471 Could not set queue depth (nvme2n1) 00:25:28.471 Could not set queue depth (nvme3n1) 00:25:28.471 Could not set queue depth (nvme4n1) 00:25:28.471 Could not set queue depth (nvme5n1) 00:25:28.471 Could not set queue depth (nvme6n1) 00:25:28.471 Could not set queue depth (nvme7n1) 00:25:28.471 Could not set queue depth (nvme8n1) 00:25:28.471 Could not set queue depth (nvme9n1) 00:25:28.730 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:28.731 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 
256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:28.731 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:28.731 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:28.731 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:28.731 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:28.731 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:28.731 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:28.731 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:28.731 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:28.731 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:28.731 fio-3.35 00:25:28.731 Starting 11 threads 00:25:40.945 00:25:40.945 job0: (groupid=0, jobs=1): err= 0: pid=390017: Fri Dec 13 05:41:39 2024 00:25:40.945 read: IOPS=277, BW=69.4MiB/s (72.8MB/s)(698MiB/10054msec) 00:25:40.945 slat (usec): min=11, max=154858, avg=3062.31, stdev=11239.37 00:25:40.945 clat (msec): min=17, max=763, avg=227.03, stdev=87.23 00:25:40.945 lat (msec): min=18, max=763, avg=230.09, stdev=88.05 00:25:40.945 clat percentiles (msec): 00:25:40.945 | 1.00th=[ 71], 5.00th=[ 120], 10.00th=[ 140], 20.00th=[ 165], 00:25:40.945 | 30.00th=[ 180], 40.00th=[ 192], 50.00th=[ 207], 60.00th=[ 239], 00:25:40.945 | 70.00th=[ 266], 80.00th=[ 292], 90.00th=[ 326], 95.00th=[ 342], 00:25:40.945 | 99.00th=[ 659], 99.50th=[ 701], 99.90th=[ 751], 99.95th=[ 751], 00:25:40.945 | 99.99th=[ 768] 00:25:40.945 bw ( KiB/s): min=42496, max=100864, per=10.35%, avg=69888.00, stdev=16015.65, samples=20 00:25:40.945 iops : min= 166, max= 394, avg=273.00, stdev=62.56, samples=20 00:25:40.945 lat (msec) : 20=0.04%, 50=0.72%, 100=1.83%, 250=61.58%, 500=34.59% 00:25:40.945 lat (msec) : 750=1.00%, 1000=0.25% 00:25:40.945 cpu : usr=0.11%, sys=1.22%, ctx=401, majf=0, minf=4097 00:25:40.945 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.1%, >=64=97.7% 00:25:40.945 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:40.945 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:40.945 issued rwts: total=2793,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:40.945 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:40.945 job1: (groupid=0, jobs=1): err= 0: pid=390020: Fri Dec 13 05:41:39 2024 00:25:40.945 read: IOPS=334, BW=83.7MiB/s (87.8MB/s)(842MiB/10055msec) 00:25:40.945 slat (usec): min=14, max=105399, avg=2972.67, stdev=9863.42 00:25:40.945 clat (msec): min=16, max=366, avg=187.86, stdev=52.99 00:25:40.945 lat (msec): min=17, max=374, avg=190.83, stdev=53.73 00:25:40.945 clat percentiles (msec): 00:25:40.945 | 1.00th=[ 69], 5.00th=[ 91], 10.00th=[ 121], 20.00th=[ 150], 00:25:40.945 | 30.00th=[ 161], 40.00th=[ 174], 50.00th=[ 188], 60.00th=[ 203], 00:25:40.945 | 70.00th=[ 215], 80.00th=[ 230], 90.00th=[ 253], 95.00th=[ 275], 00:25:40.945 | 99.00th=[ 317], 99.50th=[ 330], 99.90th=[ 347], 99.95th=[ 347], 00:25:40.945 | 99.99th=[ 368] 00:25:40.945 bw ( KiB/s): min=54784, max=148992, 
per=12.53%, avg=84582.40, stdev=20307.89, samples=20 00:25:40.945 iops : min= 214, max= 582, avg=330.40, stdev=79.33, samples=20 00:25:40.945 lat (msec) : 20=0.21%, 50=0.33%, 100=6.89%, 250=82.01%, 500=10.57% 00:25:40.945 cpu : usr=0.15%, sys=1.38%, ctx=456, majf=0, minf=4097 00:25:40.945 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:25:40.945 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:40.945 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:40.945 issued rwts: total=3368,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:40.945 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:40.945 job2: (groupid=0, jobs=1): err= 0: pid=390021: Fri Dec 13 05:41:39 2024 00:25:40.945 read: IOPS=478, BW=120MiB/s (125MB/s)(1210MiB/10122msec) 00:25:40.945 slat (usec): min=15, max=414444, avg=1246.65, stdev=10960.43 00:25:40.945 clat (usec): min=597, max=1010.9k, avg=132432.08, stdev=213638.50 00:25:40.945 lat (usec): min=624, max=1125.6k, avg=133678.73, stdev=215702.83 00:25:40.945 clat percentiles (usec): 00:25:40.945 | 1.00th=[ 1827], 5.00th=[ 3851], 10.00th=[ 9110], 00:25:40.945 | 20.00th=[ 19792], 30.00th=[ 36963], 40.00th=[ 62653], 00:25:40.945 | 50.00th=[ 68682], 60.00th=[ 73925], 70.00th=[ 81265], 00:25:40.946 | 80.00th=[ 113771], 90.00th=[ 346031], 95.00th=[ 759170], 00:25:40.946 | 99.00th=[ 935330], 99.50th=[ 977273], 99.90th=[1010828], 00:25:40.946 | 99.95th=[1010828], 99.99th=[1010828] 00:25:40.946 bw ( KiB/s): min=14848, max=248320, per=18.11%, avg=122265.60, stdev=80782.43, samples=20 00:25:40.946 iops : min= 58, max= 970, avg=477.60, stdev=315.56, samples=20 00:25:40.946 lat (usec) : 750=0.25%, 1000=0.23% 00:25:40.946 lat (msec) : 2=1.49%, 4=3.08%, 10=6.20%, 20=9.07%, 50=13.72% 00:25:40.946 lat (msec) : 100=39.21%, 250=15.19%, 500=2.69%, 750=3.45%, 1000=5.29% 00:25:40.946 lat (msec) : 2000=0.14% 00:25:40.946 cpu : usr=0.27%, sys=1.88%, ctx=2318, majf=0, minf=4097 00:25:40.946 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:25:40.946 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:40.946 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:40.946 issued rwts: total=4840,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:40.946 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:40.946 job3: (groupid=0, jobs=1): err= 0: pid=390022: Fri Dec 13 05:41:39 2024 00:25:40.946 read: IOPS=104, BW=26.2MiB/s (27.5MB/s)(265MiB/10119msec) 00:25:40.946 slat (usec): min=21, max=274598, avg=9196.62, stdev=31332.20 00:25:40.946 clat (msec): min=92, max=918, avg=600.37, stdev=137.42 00:25:40.946 lat (msec): min=92, max=928, avg=609.56, stdev=140.02 00:25:40.946 clat percentiles (msec): 00:25:40.946 | 1.00th=[ 220], 5.00th=[ 330], 10.00th=[ 435], 20.00th=[ 498], 00:25:40.946 | 30.00th=[ 531], 40.00th=[ 575], 50.00th=[ 617], 60.00th=[ 651], 00:25:40.946 | 70.00th=[ 676], 80.00th=[ 709], 90.00th=[ 776], 95.00th=[ 793], 00:25:40.946 | 99.00th=[ 852], 99.50th=[ 894], 99.90th=[ 894], 99.95th=[ 919], 00:25:40.946 | 99.99th=[ 919] 00:25:40.946 bw ( KiB/s): min=14336, max=38400, per=3.78%, avg=25550.95, stdev=6466.20, samples=20 00:25:40.946 iops : min= 56, max= 150, avg=99.80, stdev=25.26, samples=20 00:25:40.946 lat (msec) : 100=0.66%, 250=0.85%, 500=21.58%, 750=63.62%, 1000=13.29% 00:25:40.946 cpu : usr=0.02%, sys=0.54%, ctx=129, majf=0, minf=4097 00:25:40.946 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.5%, 32=3.0%, >=64=94.1% 
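[Editor's note] Before this fio output, the trace showed the host connecting to each subsystem and then polling until the namespace appeared. A minimal sketch of that connect/wait flow, condensed from the xtrace records above (the exact control flow of waitforserial in common/autotest_common.sh may differ, and HOSTNQN/HOSTID stand in for the uuid-based values shown in the trace):

    # Reconstructed sketch of the connect phase (lines @28-@30 of multiconnection.sh).
    for i in $(seq 1 $NVMF_SUBSYS); do
        nvme connect --hostnqn="$HOSTNQN" --hostid="$HOSTID" \
            -t tcp -n nqn.2016-06.io.spdk:cnode$i -a 10.0.0.2 -s 4420
        # waitforserial SPDK$i: poll lsblk until one block device reports
        # the serial assigned at subsystem creation, retrying up to 16 times.
        n=0
        while (( n++ <= 15 )); do
            sleep 2
            if [[ $(lsblk -l -o NAME,SERIAL | grep -c SPDK$i) -eq 1 ]]; then
                break
            fi
        done
    done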
00:25:40.946 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:40.946 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:40.946 issued rwts: total=1061,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:40.946 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:40.946 job4: (groupid=0, jobs=1): err= 0: pid=390023: Fri Dec 13 05:41:39 2024 00:25:40.946 read: IOPS=170, BW=42.6MiB/s (44.6MB/s)(429MiB/10082msec) 00:25:40.946 slat (usec): min=10, max=217783, avg=2663.31, stdev=15184.72 00:25:40.946 clat (usec): min=1108, max=1029.1k, avg=372850.14, stdev=276152.03 00:25:40.946 lat (usec): min=1167, max=1029.2k, avg=375513.45, stdev=277889.46 00:25:40.946 clat percentiles (usec): 00:25:40.946 | 1.00th=[ 1647], 5.00th=[ 3064], 10.00th=[ 9372], 00:25:40.946 | 20.00th=[ 58459], 30.00th=[ 179307], 40.00th=[ 242222], 00:25:40.946 | 50.00th=[ 350225], 60.00th=[ 442500], 70.00th=[ 541066], 00:25:40.946 | 80.00th=[ 658506], 90.00th=[ 767558], 95.00th=[ 826278], 00:25:40.946 | 99.00th=[ 910164], 99.50th=[ 952108], 99.90th=[1027605], 00:25:40.946 | 99.95th=[1027605], 99.99th=[1027605] 00:25:40.946 bw ( KiB/s): min=13824, max=86528, per=6.26%, avg=42296.45, stdev=18454.15, samples=20 00:25:40.946 iops : min= 54, max= 338, avg=165.20, stdev=72.07, samples=20 00:25:40.946 lat (msec) : 2=2.91%, 4=5.19%, 10=2.33%, 20=4.43%, 50=3.26% 00:25:40.946 lat (msec) : 100=5.42%, 250=17.66%, 500=25.64%, 750=22.26%, 1000=10.72% 00:25:40.946 lat (msec) : 2000=0.17% 00:25:40.946 cpu : usr=0.06%, sys=0.74%, ctx=743, majf=0, minf=4097 00:25:40.946 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=0.9%, 32=1.9%, >=64=96.3% 00:25:40.946 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:40.946 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:40.946 issued rwts: total=1716,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:40.946 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:40.946 job5: (groupid=0, jobs=1): err= 0: pid=390024: Fri Dec 13 05:41:39 2024 00:25:40.946 read: IOPS=121, BW=30.3MiB/s (31.8MB/s)(307MiB/10118msec) 00:25:40.946 slat (usec): min=21, max=208430, avg=8166.94, stdev=24800.52 00:25:40.946 clat (msec): min=90, max=801, avg=518.96, stdev=141.85 00:25:40.946 lat (msec): min=91, max=908, avg=527.13, stdev=144.17 00:25:40.946 clat percentiles (msec): 00:25:40.946 | 1.00th=[ 148], 5.00th=[ 255], 10.00th=[ 347], 20.00th=[ 397], 00:25:40.946 | 30.00th=[ 435], 40.00th=[ 481], 50.00th=[ 527], 60.00th=[ 567], 00:25:40.946 | 70.00th=[ 617], 80.00th=[ 642], 90.00th=[ 701], 95.00th=[ 726], 00:25:40.946 | 99.00th=[ 768], 99.50th=[ 785], 99.90th=[ 802], 99.95th=[ 802], 00:25:40.946 | 99.99th=[ 802] 00:25:40.946 bw ( KiB/s): min=19968, max=43008, per=4.41%, avg=29772.80, stdev=6655.53, samples=20 00:25:40.946 iops : min= 78, max= 168, avg=116.30, stdev=26.00, samples=20 00:25:40.946 lat (msec) : 100=0.73%, 250=3.26%, 500=40.75%, 750=52.24%, 1000=3.02% 00:25:40.946 cpu : usr=0.04%, sys=0.65%, ctx=167, majf=0, minf=4097 00:25:40.946 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.7%, 16=1.3%, 32=2.6%, >=64=94.9% 00:25:40.946 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:40.946 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:40.946 issued rwts: total=1227,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:40.946 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:40.946 job6: (groupid=0, jobs=1): err= 0: pid=390025: Fri Dec 13 05:41:39 2024 
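[Editor's note] The per-job IOPS and bandwidth figures are mutually consistent given the 256 KiB block size set in the job file (bs=262144). Taking job0 above as a worked check:

    # Cross-check: bandwidth divided by block size should match reported IOPS.
    # job0 reports BW=69.4MiB/s at bs=256KiB:
    echo "scale=1; 69.4 * 1024 / 256" | bc    # -> 277.6, matching IOPS=277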
00:25:40.946 read: IOPS=377, BW=94.3MiB/s (98.9MB/s)(951MiB/10082msec) 00:25:40.946 slat (usec): min=21, max=144151, avg=2564.37, stdev=9315.85 00:25:40.946 clat (usec): min=1955, max=582006, avg=166942.09, stdev=106176.17 00:25:40.946 lat (msec): min=2, max=582, avg=169.51, stdev=107.73 00:25:40.946 clat percentiles (msec): 00:25:40.946 | 1.00th=[ 8], 5.00th=[ 32], 10.00th=[ 71], 20.00th=[ 79], 00:25:40.946 | 30.00th=[ 94], 40.00th=[ 121], 50.00th=[ 136], 60.00th=[ 167], 00:25:40.946 | 70.00th=[ 211], 80.00th=[ 245], 90.00th=[ 296], 95.00th=[ 393], 00:25:40.946 | 99.00th=[ 493], 99.50th=[ 502], 99.90th=[ 558], 99.95th=[ 584], 00:25:40.946 | 99.99th=[ 584] 00:25:40.946 bw ( KiB/s): min=36864, max=220160, per=14.18%, avg=95718.40, stdev=53703.25, samples=20 00:25:40.946 iops : min= 144, max= 860, avg=373.90, stdev=209.78, samples=20 00:25:40.946 lat (msec) : 2=0.03%, 4=0.05%, 10=1.05%, 20=0.95%, 50=4.29% 00:25:40.946 lat (msec) : 100=27.29%, 250=47.41%, 500=18.30%, 750=0.63% 00:25:40.946 cpu : usr=0.18%, sys=1.75%, ctx=837, majf=0, minf=4097 00:25:40.946 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.3% 00:25:40.946 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:40.946 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:40.946 issued rwts: total=3803,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:40.946 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:40.946 job7: (groupid=0, jobs=1): err= 0: pid=390026: Fri Dec 13 05:41:39 2024 00:25:40.946 read: IOPS=421, BW=105MiB/s (110MB/s)(1059MiB/10053msec) 00:25:40.946 slat (usec): min=12, max=133008, avg=1882.73, stdev=8179.40 00:25:40.946 clat (usec): min=735, max=1008.7k, avg=149914.71, stdev=129838.85 00:25:40.946 lat (usec): min=760, max=1008.7k, avg=151797.43, stdev=130858.18 00:25:40.946 clat percentiles (usec): 00:25:40.946 | 1.00th=[ 1012], 5.00th=[ 1844], 10.00th=[ 3294], 00:25:40.946 | 20.00th=[ 37487], 30.00th=[ 70779], 40.00th=[ 107480], 00:25:40.946 | 50.00th=[ 143655], 60.00th=[ 164627], 70.00th=[ 189793], 00:25:40.946 | 80.00th=[ 231736], 90.00th=[ 278922], 95.00th=[ 333448], 00:25:40.946 | 99.00th=[ 641729], 99.50th=[ 910164], 99.90th=[1010828], 00:25:40.946 | 99.95th=[1010828], 99.99th=[1010828] 00:25:40.946 bw ( KiB/s): min=44032, max=271360, per=15.81%, avg=106777.60, stdev=57993.67, samples=20 00:25:40.946 iops : min= 172, max= 1060, avg=417.10, stdev=226.54, samples=20 00:25:40.946 lat (usec) : 750=0.02%, 1000=0.71% 00:25:40.946 lat (msec) : 2=5.27%, 4=4.44%, 10=2.83%, 20=3.80%, 50=4.30% 00:25:40.946 lat (msec) : 100=17.17%, 250=45.30%, 500=14.45%, 750=1.02%, 1000=0.54% 00:25:40.946 lat (msec) : 2000=0.14% 00:25:40.946 cpu : usr=0.17%, sys=1.49%, ctx=1291, majf=0, minf=4097 00:25:40.946 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:25:40.946 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:40.946 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:40.946 issued rwts: total=4234,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:40.946 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:40.946 job8: (groupid=0, jobs=1): err= 0: pid=390033: Fri Dec 13 05:41:39 2024 00:25:40.946 read: IOPS=119, BW=29.8MiB/s (31.3MB/s)(302MiB/10118msec) 00:25:40.946 slat (usec): min=20, max=319152, avg=8187.38, stdev=27899.76 00:25:40.946 clat (msec): min=75, max=856, avg=527.71, stdev=167.62 00:25:40.946 lat (msec): min=121, max=970, avg=535.89, stdev=170.59 
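[Editor's note] Stripped of the log timestamps, the job file that fio-wrapper echoed at the start of this read pass is an ordinary fio ini. A cleaned-up rendering of the same content (nothing here is new; the elided sections follow the filename list printed above):

    [global]
    thread=1
    invalidate=1
    rw=read
    time_based=1
    runtime=10
    ioengine=libaio
    direct=1
    bs=262144
    iodepth=64
    norandommap=1
    numjobs=1

    [job0]
    filename=/dev/nvme0n1

    [job1]
    filename=/dev/nvme10n1

    # ... [job2] through [job10] continue with /dev/nvme1n1 through
    # /dev/nvme9n1, exactly as listed in the log above.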
00:25:40.946 clat percentiles (msec): 00:25:40.946 | 1.00th=[ 124], 5.00th=[ 234], 10.00th=[ 321], 20.00th=[ 380], 00:25:40.946 | 30.00th=[ 439], 40.00th=[ 489], 50.00th=[ 542], 60.00th=[ 609], 00:25:40.946 | 70.00th=[ 642], 80.00th=[ 684], 90.00th=[ 735], 95.00th=[ 760], 00:25:40.946 | 99.00th=[ 818], 99.50th=[ 818], 99.90th=[ 860], 99.95th=[ 860], 00:25:40.946 | 99.99th=[ 860] 00:25:40.946 bw ( KiB/s): min=15872, max=47198, per=4.33%, avg=29265.50, stdev=8850.49, samples=20 00:25:40.946 iops : min= 62, max= 184, avg=114.30, stdev=34.53, samples=20 00:25:40.946 lat (msec) : 100=0.08%, 250=7.13%, 500=35.46%, 750=50.95%, 1000=6.38% 00:25:40.946 cpu : usr=0.03%, sys=0.61%, ctx=164, majf=0, minf=4097 00:25:40.947 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.7%, 16=1.3%, 32=2.7%, >=64=94.8% 00:25:40.947 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:40.947 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:40.947 issued rwts: total=1207,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:40.947 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:40.947 job9: (groupid=0, jobs=1): err= 0: pid=390041: Fri Dec 13 05:41:39 2024 00:25:40.947 read: IOPS=116, BW=29.1MiB/s (30.5MB/s)(295MiB/10119msec) 00:25:40.947 slat (usec): min=15, max=499535, avg=8448.48, stdev=32883.69 00:25:40.947 clat (msec): min=14, max=1233, avg=540.20, stdev=195.93 00:25:40.947 lat (msec): min=14, max=1233, avg=548.65, stdev=199.24 00:25:40.947 clat percentiles (msec): 00:25:40.947 | 1.00th=[ 42], 5.00th=[ 161], 10.00th=[ 271], 20.00th=[ 384], 00:25:40.947 | 30.00th=[ 468], 40.00th=[ 518], 50.00th=[ 567], 60.00th=[ 617], 00:25:40.947 | 70.00th=[ 651], 80.00th=[ 709], 90.00th=[ 776], 95.00th=[ 810], 00:25:40.947 | 99.00th=[ 869], 99.50th=[ 902], 99.90th=[ 1234], 99.95th=[ 1234], 00:25:40.947 | 99.99th=[ 1234] 00:25:40.947 bw ( KiB/s): min=14848, max=48640, per=4.23%, avg=28569.60, stdev=8300.09, samples=20 00:25:40.947 iops : min= 58, max= 190, avg=111.60, stdev=32.42, samples=20 00:25:40.947 lat (msec) : 20=0.25%, 50=1.19%, 100=2.12%, 250=5.51%, 500=29.77% 00:25:40.947 lat (msec) : 750=49.36%, 1000=11.54%, 2000=0.25% 00:25:40.947 cpu : usr=0.08%, sys=0.54%, ctx=163, majf=0, minf=3722 00:25:40.947 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.7%, 16=1.4%, 32=2.7%, >=64=94.7% 00:25:40.947 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:40.947 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:40.947 issued rwts: total=1179,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:40.947 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:40.947 job10: (groupid=0, jobs=1): err= 0: pid=390049: Fri Dec 13 05:41:39 2024 00:25:40.947 read: IOPS=125, BW=31.4MiB/s (32.9MB/s)(318MiB/10119msec) 00:25:40.947 slat (usec): min=22, max=232156, avg=7874.54, stdev=23618.00 00:25:40.947 clat (msec): min=13, max=852, avg=501.46, stdev=160.96 00:25:40.947 lat (msec): min=14, max=874, avg=509.33, stdev=163.78 00:25:40.947 clat percentiles (msec): 00:25:40.947 | 1.00th=[ 30], 5.00th=[ 213], 10.00th=[ 321], 20.00th=[ 376], 00:25:40.947 | 30.00th=[ 422], 40.00th=[ 464], 50.00th=[ 506], 60.00th=[ 542], 00:25:40.947 | 70.00th=[ 600], 80.00th=[ 642], 90.00th=[ 709], 95.00th=[ 743], 00:25:40.947 | 99.00th=[ 810], 99.50th=[ 818], 99.90th=[ 852], 99.95th=[ 852], 00:25:40.947 | 99.99th=[ 852] 00:25:40.947 bw ( KiB/s): min=17920, max=44544, per=4.58%, avg=30899.20, stdev=8030.88, samples=20 00:25:40.947 iops : min= 70, max= 174, avg=120.70, stdev=31.37, 
samples=20 00:25:40.947 lat (msec) : 20=0.47%, 50=1.57%, 250=5.43%, 500=41.89%, 750=46.38% 00:25:40.947 lat (msec) : 1000=4.25% 00:25:40.947 cpu : usr=0.05%, sys=0.63%, ctx=178, majf=0, minf=4097 00:25:40.947 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.3%, 32=2.5%, >=64=95.0% 00:25:40.947 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:40.947 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:40.947 issued rwts: total=1270,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:40.947 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:40.947 00:25:40.947 Run status group 0 (all jobs): 00:25:40.947 READ: bw=659MiB/s (691MB/s), 26.2MiB/s-120MiB/s (27.5MB/s-125MB/s), io=6675MiB (6999MB), run=10053-10122msec 00:25:40.947 00:25:40.947 Disk stats (read/write): 00:25:40.947 nvme0n1: ios=5430/0, merge=0/0, ticks=1240280/0, in_queue=1240280, util=97.29% 00:25:40.947 nvme10n1: ios=6554/0, merge=0/0, ticks=1236294/0, in_queue=1236294, util=97.46% 00:25:40.947 nvme1n1: ios=9468/0, merge=0/0, ticks=1236748/0, in_queue=1236748, util=97.74% 00:25:40.947 nvme2n1: ios=1995/0, merge=0/0, ticks=1221926/0, in_queue=1221926, util=97.86% 00:25:40.947 nvme3n1: ios=3277/0, merge=0/0, ticks=1219697/0, in_queue=1219697, util=97.92% 00:25:40.947 nvme4n1: ios=2333/0, merge=0/0, ticks=1224982/0, in_queue=1224982, util=98.27% 00:25:40.947 nvme5n1: ios=7442/0, merge=0/0, ticks=1204660/0, in_queue=1204660, util=98.43% 00:25:40.947 nvme6n1: ios=8288/0, merge=0/0, ticks=1239847/0, in_queue=1239847, util=98.56% 00:25:40.947 nvme7n1: ios=2266/0, merge=0/0, ticks=1221580/0, in_queue=1221580, util=98.94% 00:25:40.947 nvme8n1: ios=2230/0, merge=0/0, ticks=1224856/0, in_queue=1224856, util=99.14% 00:25:40.947 nvme9n1: ios=2389/0, merge=0/0, ticks=1219306/0, in_queue=1219306, util=99.28% 00:25:40.947 05:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:25:40.947 [global] 00:25:40.947 thread=1 00:25:40.947 invalidate=1 00:25:40.947 rw=randwrite 00:25:40.947 time_based=1 00:25:40.947 runtime=10 00:25:40.947 ioengine=libaio 00:25:40.947 direct=1 00:25:40.947 bs=262144 00:25:40.947 iodepth=64 00:25:40.947 norandommap=1 00:25:40.947 numjobs=1 00:25:40.947 00:25:40.947 [job0] 00:25:40.947 filename=/dev/nvme0n1 00:25:40.947 [job1] 00:25:40.947 filename=/dev/nvme10n1 00:25:40.947 [job2] 00:25:40.947 filename=/dev/nvme1n1 00:25:40.947 [job3] 00:25:40.947 filename=/dev/nvme2n1 00:25:40.947 [job4] 00:25:40.947 filename=/dev/nvme3n1 00:25:40.947 [job5] 00:25:40.947 filename=/dev/nvme4n1 00:25:40.947 [job6] 00:25:40.947 filename=/dev/nvme5n1 00:25:40.947 [job7] 00:25:40.947 filename=/dev/nvme6n1 00:25:40.947 [job8] 00:25:40.947 filename=/dev/nvme7n1 00:25:40.947 [job9] 00:25:40.947 filename=/dev/nvme8n1 00:25:40.947 [job10] 00:25:40.947 filename=/dev/nvme9n1 00:25:40.947 Could not set queue depth (nvme0n1) 00:25:40.947 Could not set queue depth (nvme10n1) 00:25:40.947 Could not set queue depth (nvme1n1) 00:25:40.947 Could not set queue depth (nvme2n1) 00:25:40.947 Could not set queue depth (nvme3n1) 00:25:40.947 Could not set queue depth (nvme4n1) 00:25:40.947 Could not set queue depth (nvme5n1) 00:25:40.947 Could not set queue depth (nvme6n1) 00:25:40.947 Could not set queue depth (nvme7n1) 00:25:40.947 Could not set queue depth (nvme8n1) 00:25:40.947 Could not set queue depth (nvme9n1) 00:25:40.947 job0: (g=0): rw=randwrite, 
bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:40.947 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:40.947 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:40.947 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:40.947 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:40.947 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:40.947 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:40.947 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:40.947 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:40.947 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:40.947 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:40.947 fio-3.35 00:25:40.947 Starting 11 threads 00:25:50.927 00:25:50.927 job0: (groupid=0, jobs=1): err= 0: pid=391293: Fri Dec 13 05:41:50 2024 00:25:50.927 write: IOPS=374, BW=93.6MiB/s (98.1MB/s)(950MiB/10144msec); 0 zone resets 00:25:50.927 slat (usec): min=26, max=118205, avg=1885.56, stdev=6551.90 00:25:50.927 clat (usec): min=681, max=668116, avg=168963.23, stdev=155784.13 00:25:50.927 lat (usec): min=722, max=668179, avg=170848.79, stdev=157742.16 00:25:50.927 clat percentiles (usec): 00:25:50.927 | 1.00th=[ 1565], 5.00th=[ 5145], 10.00th=[ 9372], 20.00th=[ 20055], 00:25:50.927 | 30.00th=[ 38536], 40.00th=[ 80217], 50.00th=[152044], 60.00th=[168821], 00:25:50.927 | 70.00th=[208667], 80.00th=[295699], 90.00th=[429917], 95.00th=[471860], 00:25:50.927 | 99.00th=[583009], 99.50th=[599786], 99.90th=[641729], 99.95th=[641729], 00:25:50.927 | 99.99th=[666895] 00:25:50.927 bw ( KiB/s): min=28672, max=397082, per=9.28%, avg=95655.70, stdev=87433.76, samples=20 00:25:50.927 iops : min= 112, max= 1551, avg=373.65, stdev=341.52, samples=20 00:25:50.927 lat (usec) : 750=0.03%, 1000=0.21% 00:25:50.927 lat (msec) : 2=1.03%, 4=2.21%, 10=7.03%, 20=9.45%, 50=15.59% 00:25:50.927 lat (msec) : 100=5.98%, 250=32.62%, 500=22.56%, 750=3.29% 00:25:50.927 cpu : usr=0.83%, sys=1.19%, ctx=2551, majf=0, minf=1 00:25:50.927 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.3% 00:25:50.927 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:50.927 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:50.927 issued rwts: total=0,3798,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:50.927 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:50.927 job1: (groupid=0, jobs=1): err= 0: pid=391294: Fri Dec 13 05:41:50 2024 00:25:50.927 write: IOPS=445, BW=111MiB/s (117MB/s)(1132MiB/10155msec); 0 zone resets 00:25:50.927 slat (usec): min=24, max=50892, avg=1604.57, stdev=4738.71 00:25:50.927 clat (usec): min=830, max=696449, avg=141821.75, stdev=129951.22 00:25:50.927 lat (usec): min=861, max=696506, avg=143426.33, stdev=131218.98 00:25:50.927 clat percentiles (msec): 00:25:50.927 | 1.00th=[ 4], 
5.00th=[ 11], 10.00th=[ 22], 20.00th=[ 44], 00:25:50.927 | 30.00th=[ 52], 40.00th=[ 72], 50.00th=[ 97], 60.00th=[ 116], 00:25:50.927 | 70.00th=[ 174], 80.00th=[ 251], 90.00th=[ 326], 95.00th=[ 414], 00:25:50.927 | 99.00th=[ 542], 99.50th=[ 567], 99.90th=[ 676], 99.95th=[ 676], 00:25:50.927 | 99.99th=[ 701] 00:25:50.927 bw ( KiB/s): min=22016, max=308736, per=11.09%, avg=114335.55, stdev=77321.94, samples=20 00:25:50.927 iops : min= 86, max= 1206, avg=446.60, stdev=302.06, samples=20 00:25:50.927 lat (usec) : 1000=0.15% 00:25:50.927 lat (msec) : 2=0.33%, 4=0.71%, 10=3.36%, 20=5.12%, 50=17.77% 00:25:50.927 lat (msec) : 100=26.92%, 250=25.48%, 500=17.35%, 750=2.80% 00:25:50.927 cpu : usr=1.03%, sys=1.27%, ctx=2307, majf=0, minf=1 00:25:50.927 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:25:50.927 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:50.927 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:50.927 issued rwts: total=0,4529,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:50.927 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:50.927 job2: (groupid=0, jobs=1): err= 0: pid=391306: Fri Dec 13 05:41:50 2024 00:25:50.927 write: IOPS=510, BW=128MiB/s (134MB/s)(1294MiB/10146msec); 0 zone resets 00:25:50.927 slat (usec): min=26, max=137248, avg=1622.86, stdev=5580.54 00:25:50.927 clat (usec): min=894, max=633336, avg=123751.59, stdev=120351.08 00:25:50.927 lat (usec): min=940, max=633397, avg=125374.44, stdev=121934.49 00:25:50.927 clat percentiles (msec): 00:25:50.927 | 1.00th=[ 4], 5.00th=[ 21], 10.00th=[ 35], 20.00th=[ 40], 00:25:50.927 | 30.00th=[ 66], 40.00th=[ 70], 50.00th=[ 75], 60.00th=[ 84], 00:25:50.927 | 70.00th=[ 123], 80.00th=[ 184], 90.00th=[ 305], 95.00th=[ 422], 00:25:50.927 | 99.00th=[ 575], 99.50th=[ 600], 99.90th=[ 634], 99.95th=[ 634], 00:25:50.927 | 99.99th=[ 634] 00:25:50.927 bw ( KiB/s): min=24576, max=398336, per=12.70%, avg=130892.80, stdev=102265.62, samples=20 00:25:50.927 iops : min= 96, max= 1556, avg=511.30, stdev=399.48, samples=20 00:25:50.927 lat (usec) : 1000=0.06% 00:25:50.927 lat (msec) : 2=0.19%, 4=0.97%, 10=1.41%, 20=2.24%, 50=20.55% 00:25:50.927 lat (msec) : 100=40.39%, 250=21.85%, 500=10.45%, 750=1.89% 00:25:50.927 cpu : usr=1.24%, sys=1.47%, ctx=2053, majf=0, minf=1 00:25:50.927 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:25:50.927 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:50.927 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:50.927 issued rwts: total=0,5177,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:50.927 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:50.927 job3: (groupid=0, jobs=1): err= 0: pid=391307: Fri Dec 13 05:41:50 2024 00:25:50.927 write: IOPS=574, BW=144MiB/s (151MB/s)(1454MiB/10126msec); 0 zone resets 00:25:50.927 slat (usec): min=13, max=24286, avg=1443.14, stdev=3392.62 00:25:50.927 clat (usec): min=1324, max=579770, avg=109974.30, stdev=75594.75 00:25:50.927 lat (usec): min=1387, max=579814, avg=111417.44, stdev=76257.83 00:25:50.927 clat percentiles (msec): 00:25:50.927 | 1.00th=[ 8], 5.00th=[ 32], 10.00th=[ 39], 20.00th=[ 61], 00:25:50.927 | 30.00th=[ 70], 40.00th=[ 75], 50.00th=[ 83], 60.00th=[ 97], 00:25:50.928 | 70.00th=[ 114], 80.00th=[ 176], 90.00th=[ 220], 95.00th=[ 266], 00:25:50.928 | 99.00th=[ 338], 99.50th=[ 368], 99.90th=[ 550], 99.95th=[ 567], 00:25:50.928 | 99.99th=[ 584] 00:25:50.928 bw ( KiB/s): min=55808, 
max=311808, per=14.28%, avg=147231.75, stdev=75874.30, samples=20 00:25:50.928 iops : min= 218, max= 1218, avg=575.10, stdev=296.41, samples=20 00:25:50.928 lat (msec) : 2=0.05%, 4=0.17%, 10=1.63%, 20=1.94%, 50=13.83% 00:25:50.928 lat (msec) : 100=43.72%, 250=32.09%, 500=6.33%, 750=0.22% 00:25:50.928 cpu : usr=1.19%, sys=1.84%, ctx=2239, majf=0, minf=1 00:25:50.928 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:25:50.928 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:50.928 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:50.928 issued rwts: total=0,5814,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:50.928 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:50.928 job4: (groupid=0, jobs=1): err= 0: pid=391308: Fri Dec 13 05:41:50 2024 00:25:50.928 write: IOPS=287, BW=71.8MiB/s (75.3MB/s)(733MiB/10208msec); 0 zone resets 00:25:50.928 slat (usec): min=23, max=111914, avg=2983.52, stdev=7769.78 00:25:50.928 clat (msec): min=3, max=599, avg=219.65, stdev=154.76 00:25:50.928 lat (msec): min=4, max=599, avg=222.63, stdev=156.93 00:25:50.928 clat percentiles (msec): 00:25:50.928 | 1.00th=[ 18], 5.00th=[ 24], 10.00th=[ 33], 20.00th=[ 53], 00:25:50.928 | 30.00th=[ 103], 40.00th=[ 163], 50.00th=[ 178], 60.00th=[ 249], 00:25:50.928 | 70.00th=[ 313], 80.00th=[ 384], 90.00th=[ 443], 95.00th=[ 498], 00:25:50.928 | 99.00th=[ 558], 99.50th=[ 575], 99.90th=[ 592], 99.95th=[ 592], 00:25:50.928 | 99.99th=[ 600] 00:25:50.928 bw ( KiB/s): min=26624, max=263168, per=7.13%, avg=73451.30, stdev=65272.61, samples=20 00:25:50.928 iops : min= 104, max= 1028, avg=286.90, stdev=254.98, samples=20 00:25:50.928 lat (msec) : 4=0.03%, 10=0.44%, 20=1.94%, 50=16.02%, 100=11.39% 00:25:50.928 lat (msec) : 250=30.58%, 500=34.78%, 750=4.81% 00:25:50.928 cpu : usr=0.72%, sys=0.93%, ctx=1293, majf=0, minf=1 00:25:50.928 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 00:25:50.928 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:50.928 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:50.928 issued rwts: total=0,2933,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:50.928 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:50.928 job5: (groupid=0, jobs=1): err= 0: pid=391309: Fri Dec 13 05:41:50 2024 00:25:50.928 write: IOPS=333, BW=83.3MiB/s (87.4MB/s)(849MiB/10190msec); 0 zone resets 00:25:50.928 slat (usec): min=22, max=82954, avg=1794.55, stdev=6307.98 00:25:50.928 clat (usec): min=1209, max=575672, avg=190081.11, stdev=148137.48 00:25:50.928 lat (usec): min=1539, max=575731, avg=191875.66, stdev=150121.36 00:25:50.928 clat percentiles (msec): 00:25:50.928 | 1.00th=[ 4], 5.00th=[ 9], 10.00th=[ 14], 20.00th=[ 51], 00:25:50.928 | 30.00th=[ 93], 40.00th=[ 120], 50.00th=[ 148], 60.00th=[ 199], 00:25:50.928 | 70.00th=[ 264], 80.00th=[ 330], 90.00th=[ 422], 95.00th=[ 468], 00:25:50.928 | 99.00th=[ 523], 99.50th=[ 542], 99.90th=[ 567], 99.95th=[ 575], 00:25:50.928 | 99.99th=[ 575] 00:25:50.928 bw ( KiB/s): min=28672, max=252416, per=8.28%, avg=85335.40, stdev=54803.03, samples=20 00:25:50.928 iops : min= 112, max= 986, avg=333.30, stdev=214.06, samples=20 00:25:50.928 lat (msec) : 2=0.12%, 4=1.41%, 10=5.45%, 20=6.48%, 50=6.48% 00:25:50.928 lat (msec) : 100=13.22%, 250=34.21%, 500=29.58%, 750=3.06% 00:25:50.928 cpu : usr=0.75%, sys=1.22%, ctx=2332, majf=0, minf=1 00:25:50.928 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.1% 
00:25:50.928 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:50.928 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:50.928 issued rwts: total=0,3397,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:50.928 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:50.928 job6: (groupid=0, jobs=1): err= 0: pid=391310: Fri Dec 13 05:41:50 2024 00:25:50.928 write: IOPS=240, BW=60.0MiB/s (62.9MB/s)(609MiB/10143msec); 0 zone resets 00:25:50.928 slat (usec): min=33, max=107262, avg=3690.61, stdev=8603.76 00:25:50.928 clat (msec): min=6, max=641, avg=262.68, stdev=135.19 00:25:50.928 lat (msec): min=6, max=641, avg=266.37, stdev=137.07 00:25:50.928 clat percentiles (msec): 00:25:50.928 | 1.00th=[ 16], 5.00th=[ 120], 10.00th=[ 142], 20.00th=[ 155], 00:25:50.928 | 30.00th=[ 165], 40.00th=[ 176], 50.00th=[ 213], 60.00th=[ 264], 00:25:50.928 | 70.00th=[ 326], 80.00th=[ 414], 90.00th=[ 460], 95.00th=[ 506], 00:25:50.928 | 99.00th=[ 617], 99.50th=[ 625], 99.90th=[ 642], 99.95th=[ 642], 00:25:50.928 | 99.99th=[ 642] 00:25:50.928 bw ( KiB/s): min=24576, max=112128, per=5.89%, avg=60728.30, stdev=27765.87, samples=20 00:25:50.928 iops : min= 96, max= 438, avg=237.20, stdev=108.47, samples=20 00:25:50.928 lat (msec) : 10=0.21%, 20=1.44%, 50=1.19%, 100=1.27%, 250=53.88% 00:25:50.928 lat (msec) : 500=36.88%, 750=5.13% 00:25:50.928 cpu : usr=0.70%, sys=0.83%, ctx=885, majf=0, minf=1 00:25:50.928 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.7%, 32=1.3%, >=64=97.4% 00:25:50.928 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:50.928 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:50.928 issued rwts: total=0,2435,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:50.928 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:50.928 job7: (groupid=0, jobs=1): err= 0: pid=391311: Fri Dec 13 05:41:50 2024 00:25:50.928 write: IOPS=262, BW=65.7MiB/s (68.9MB/s)(663MiB/10084msec); 0 zone resets 00:25:50.928 slat (usec): min=22, max=62820, avg=2934.38, stdev=7754.31 00:25:50.928 clat (usec): min=726, max=597080, avg=240323.01, stdev=142815.21 00:25:50.928 lat (usec): min=768, max=597134, avg=243257.39, stdev=144862.49 00:25:50.928 clat percentiles (usec): 00:25:50.928 | 1.00th=[ 1762], 5.00th=[ 15926], 10.00th=[ 50070], 20.00th=[103285], 00:25:50.928 | 30.00th=[135267], 40.00th=[185598], 50.00th=[240124], 60.00th=[283116], 00:25:50.928 | 70.00th=[312476], 80.00th=[379585], 90.00th=[442500], 95.00th=[484443], 00:25:50.928 | 99.00th=[534774], 99.50th=[557843], 99.90th=[591397], 99.95th=[591397], 00:25:50.928 | 99.99th=[599786] 00:25:50.928 bw ( KiB/s): min=28672, max=130308, per=6.43%, avg=66291.40, stdev=32387.62, samples=20 00:25:50.928 iops : min= 112, max= 509, avg=258.95, stdev=126.51, samples=20 00:25:50.928 lat (usec) : 750=0.11%, 1000=0.23% 00:25:50.928 lat (msec) : 2=0.98%, 4=2.60%, 10=0.45%, 20=0.90%, 50=4.64% 00:25:50.928 lat (msec) : 100=8.82%, 250=34.20%, 500=42.99%, 750=4.07% 00:25:50.928 cpu : usr=0.57%, sys=1.00%, ctx=1350, majf=0, minf=2 00:25:50.928 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.6% 00:25:50.928 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:50.928 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:50.928 issued rwts: total=0,2652,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:50.928 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:50.928 job8: (groupid=0, jobs=1): err= 0: 
pid=391314: Fri Dec 13 05:41:50 2024 00:25:50.928 write: IOPS=273, BW=68.3MiB/s (71.6MB/s)(696MiB/10190msec); 0 zone resets 00:25:50.928 slat (usec): min=28, max=116195, avg=2982.91, stdev=7785.11 00:25:50.928 clat (usec): min=1207, max=573230, avg=231150.13, stdev=148526.67 00:25:50.928 lat (usec): min=1261, max=573304, avg=234133.04, stdev=150467.71 00:25:50.928 clat percentiles (msec): 00:25:50.928 | 1.00th=[ 4], 5.00th=[ 21], 10.00th=[ 54], 20.00th=[ 100], 00:25:50.928 | 30.00th=[ 121], 40.00th=[ 150], 50.00th=[ 197], 60.00th=[ 271], 00:25:50.928 | 70.00th=[ 321], 80.00th=[ 397], 90.00th=[ 443], 95.00th=[ 498], 00:25:50.928 | 99.00th=[ 542], 99.50th=[ 550], 99.90th=[ 567], 99.95th=[ 567], 00:25:50.928 | 99.99th=[ 575] 00:25:50.928 bw ( KiB/s): min=30720, max=148480, per=6.75%, avg=69610.05, stdev=37569.62, samples=20 00:25:50.928 iops : min= 120, max= 580, avg=271.90, stdev=146.77, samples=20 00:25:50.928 lat (msec) : 2=0.11%, 4=1.22%, 10=2.87%, 20=0.79%, 50=4.60% 00:25:50.928 lat (msec) : 100=12.25%, 250=35.64%, 500=37.66%, 750=4.85% 00:25:50.928 cpu : usr=0.60%, sys=0.87%, ctx=1244, majf=0, minf=1 00:25:50.928 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.1%, >=64=97.7% 00:25:50.928 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:50.928 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:50.928 issued rwts: total=0,2783,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:50.928 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:50.928 job9: (groupid=0, jobs=1): err= 0: pid=391316: Fri Dec 13 05:41:50 2024 00:25:50.928 write: IOPS=224, BW=56.1MiB/s (58.8MB/s)(571MiB/10174msec); 0 zone resets 00:25:50.928 slat (usec): min=27, max=79149, avg=4285.91, stdev=8850.21 00:25:50.928 clat (msec): min=18, max=609, avg=280.65, stdev=132.28 00:25:50.928 lat (msec): min=18, max=609, avg=284.94, stdev=134.03 00:25:50.928 clat percentiles (msec): 00:25:50.928 | 1.00th=[ 116], 5.00th=[ 140], 10.00th=[ 150], 20.00th=[ 159], 00:25:50.928 | 30.00th=[ 167], 40.00th=[ 190], 50.00th=[ 245], 60.00th=[ 300], 00:25:50.928 | 70.00th=[ 363], 80.00th=[ 418], 90.00th=[ 468], 95.00th=[ 542], 00:25:50.928 | 99.00th=[ 575], 99.50th=[ 592], 99.90th=[ 609], 99.95th=[ 609], 00:25:50.928 | 99.99th=[ 609] 00:25:50.928 bw ( KiB/s): min=26624, max=98304, per=5.52%, avg=56857.60, stdev=25598.64, samples=20 00:25:50.928 iops : min= 104, max= 384, avg=222.10, stdev=99.99, samples=20 00:25:50.928 lat (msec) : 20=0.04%, 50=0.18%, 100=0.53%, 250=50.04%, 500=41.59% 00:25:50.928 lat (msec) : 750=7.62% 00:25:50.928 cpu : usr=0.64%, sys=0.68%, ctx=595, majf=0, minf=1 00:25:50.928 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=1.4%, >=64=97.2% 00:25:50.928 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:50.928 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:50.928 issued rwts: total=0,2284,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:50.928 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:50.928 job10: (groupid=0, jobs=1): err= 0: pid=391322: Fri Dec 13 05:41:50 2024 00:25:50.928 write: IOPS=520, BW=130MiB/s (136MB/s)(1324MiB/10181msec); 0 zone resets 00:25:50.928 slat (usec): min=21, max=63095, avg=1430.99, stdev=4160.18 00:25:50.928 clat (usec): min=1405, max=443026, avg=121471.40, stdev=98569.64 00:25:50.928 lat (usec): min=1459, max=443067, avg=122902.38, stdev=99550.97 00:25:50.928 clat percentiles (msec): 00:25:50.928 | 1.00th=[ 7], 5.00th=[ 11], 10.00th=[ 20], 20.00th=[ 39], 
00:25:50.928 | 30.00th=[ 43], 40.00th=[ 77], 50.00th=[ 101], 60.00th=[ 111], 00:25:50.928 | 70.00th=[ 163], 80.00th=[ 203], 90.00th=[ 284], 95.00th=[ 338], 00:25:50.928 | 99.00th=[ 380], 99.50th=[ 409], 99.90th=[ 435], 99.95th=[ 439], 00:25:50.928 | 99.99th=[ 443] 00:25:50.928 bw ( KiB/s): min=47104, max=340992, per=13.00%, avg=134003.15, stdev=79421.07, samples=20 00:25:50.928 iops : min= 184, max= 1332, avg=523.40, stdev=310.24, samples=20 00:25:50.928 lat (msec) : 2=0.02%, 4=0.23%, 10=4.27%, 20=6.10%, 50=22.82% 00:25:50.928 lat (msec) : 100=17.12%, 250=37.36%, 500=12.08% 00:25:50.928 cpu : usr=0.97%, sys=1.66%, ctx=2588, majf=0, minf=1 00:25:50.929 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:25:50.929 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:50.929 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:50.929 issued rwts: total=0,5297,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:50.929 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:50.929 00:25:50.929 Run status group 0 (all jobs): 00:25:50.929 WRITE: bw=1007MiB/s (1055MB/s), 56.1MiB/s-144MiB/s (58.8MB/s-151MB/s), io=10.0GiB (10.8GB), run=10084-10208msec 00:25:50.929 00:25:50.929 Disk stats (read/write): 00:25:50.929 nvme0n1: ios=49/7539, merge=0/0, ticks=36/1236040, in_queue=1236076, util=94.50% 00:25:50.929 nvme10n1: ios=49/9022, merge=0/0, ticks=137/1229689, in_queue=1229826, util=95.45% 00:25:50.929 nvme1n1: ios=42/10295, merge=0/0, ticks=1494/1229680, in_queue=1231174, util=100.00% 00:25:50.929 nvme2n1: ios=0/11603, merge=0/0, ticks=0/1233406, in_queue=1233406, util=95.64% 00:25:50.929 nvme3n1: ios=0/5744, merge=0/0, ticks=0/1224460, in_queue=1224460, util=95.84% 00:25:50.929 nvme4n1: ios=0/6718, merge=0/0, ticks=0/1236466, in_queue=1236466, util=96.64% 00:25:50.929 nvme5n1: ios=41/4817, merge=0/0, ticks=1430/1228169, in_queue=1229599, util=99.88% 00:25:50.929 nvme6n1: ios=0/4951, merge=0/0, ticks=0/1197740, in_queue=1197740, util=97.25% 00:25:50.929 nvme7n1: ios=35/5489, merge=0/0, ticks=1187/1225544, in_queue=1226731, util=100.00% 00:25:50.929 nvme8n1: ios=0/4511, merge=0/0, ticks=0/1226054, in_queue=1226054, util=98.74% 00:25:50.929 nvme9n1: ios=33/10531, merge=0/0, ticks=1168/1223834, in_queue=1225002, util=99.88%
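The fio report above summarizes eleven libaio jobs (job0-job10) doing 256KiB random writes at iodepth=64 against the eleven namespaces attached by this test (nvme0n1-nvme10n1 in the disk stats). The job file generated by multiconnection.sh is not echoed into the log; the bash sketch below reproduces a workload of the same shape, where the --direct, --thread, and 10-second runtime flags are assumptions inferred from the "Starting 11 threads" banner and the run=10084-10208msec line rather than the script's verbatim options, and the job-to-device mapping is illustrative only.

# global options come before the first --name; each --name/--filename pair defines one job
fio --rw=randwrite --bs=256k --ioengine=libaio --iodepth=64 \
    --direct=1 --thread --time_based --runtime=10 \
    $(for i in $(seq 0 10); do printf -- '--name=job%s --filename=/dev/nvme%sn1 ' "$i" "$i"; done)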
05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:25:50.929 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:25:50.929 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:50.929 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:25:50.929 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:25:50.929 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:25:50.929 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:25:50.929 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:25:50.929 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK1 00:25:51.188 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:25:51.188 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK1 00:25:51.188 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:25:51.188 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:51.188 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.188 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:51.188 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.188 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:51.188 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:25:51.447 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:25:51.447 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:25:51.447 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:25:51.447 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:25:51.447 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK2 00:25:51.447 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:25:51.447 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK2 00:25:51.447 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:25:51.447 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:25:51.447 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.447 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:51.447 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.447 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:51.447 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:25:51.706 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:25:51.706 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:25:51.706 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:25:51.706 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:25:51.706 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK3 00:25:51.706 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o
NAME,SERIAL 00:25:51.706 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK3 00:25:51.706 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:25:51.706 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:25:51.706 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.706 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:51.706 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.706 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:51.706 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:25:52.274 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:25:52.274 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:25:52.274 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:25:52.274 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:25:52.274 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK4 00:25:52.274 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:25:52.274 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK4 00:25:52.274 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:25:52.274 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:25:52.274 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.274 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:52.274 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.274 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:52.274 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:25:52.533 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:25:52.533 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:25:52.533 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:25:52.533 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:25:52.533 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK5 00:25:52.533 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o 
NAME,SERIAL 00:25:52.533 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK5 00:25:52.533 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:25:52.533 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:25:52.533 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.533 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:52.533 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.533 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:52.533 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:25:52.793 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:25:52.793 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:25:52.793 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:25:52.793 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:25:52.793 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK6 00:25:52.793 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK6 00:25:52.793 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:25:52.793 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:25:52.793 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:25:52.793 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.793 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:52.793 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.793 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:52.793 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:25:53.052 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:25:53.052 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:25:53.052 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:25:53.052 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:25:53.052 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK7 00:25:53.052 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w 
SPDK7 00:25:53.052 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:25:53.052 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:25:53.052 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:25:53.052 05:41:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.052 05:41:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:53.052 05:41:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.052 05:41:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:53.052 05:41:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:25:53.312 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:25:53.312 05:41:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:25:53.312 05:41:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:25:53.312 05:41:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:25:53.312 05:41:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK8 00:25:53.312 05:41:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:25:53.312 05:41:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK8 00:25:53.312 05:41:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:25:53.312 05:41:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:25:53.312 05:41:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.312 05:41:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:53.312 05:41:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.312 05:41:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:53.312 05:41:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:25:53.312 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:25:53.312 05:41:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:25:53.312 05:41:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:25:53.312 05:41:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:25:53.312 05:41:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK9 00:25:53.312 05:41:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o 
NAME,SERIAL 00:25:53.312 05:41:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK9 00:25:53.572 05:41:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:25:53.572 05:41:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:25:53.572 05:41:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.572 05:41:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:53.572 05:41:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.572 05:41:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:53.572 05:41:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:25:53.572 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:25:53.572 05:41:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:25:53.572 05:41:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:25:53.572 05:41:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:25:53.572 05:41:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK10 00:25:53.572 05:41:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:25:53.572 05:41:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK10 00:25:53.572 05:41:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:25:53.572 05:41:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:25:53.572 05:41:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.572 05:41:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:53.572 05:41:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.572 05:41:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:53.572 05:41:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:25:53.831 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:25:53.831 05:41:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:25:53.831 05:41:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:25:53.831 05:41:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:25:53.831 05:41:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK11 00:25:53.831 05:41:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # 
lsblk -l -o NAME,SERIAL 00:25:53.831 05:41:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK11 00:25:53.831 05:41:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:25:53.831 05:41:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:25:53.831 05:41:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.831 05:41:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:53.831 05:41:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.831 05:41:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:25:53.831 05:41:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:25:53.831 05:41:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:25:53.831 05:41:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:53.831 05:41:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@121 -- # sync 00:25:53.831 05:41:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:53.832 05:41:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@124 -- # set +e 00:25:53.832 05:41:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:53.832 05:41:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:53.832 rmmod nvme_tcp 00:25:53.832 rmmod nvme_fabrics 00:25:53.832 rmmod nvme_keyring 00:25:53.832 05:41:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:53.832 05:41:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@128 -- # set -e 00:25:53.832 05:41:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@129 -- # return 0 00:25:53.832 05:41:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@517 -- # '[' -n 383704 ']' 00:25:53.832 05:41:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@518 -- # killprocess 383704 00:25:53.832 05:41:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@954 -- # '[' -z 383704 ']' 00:25:53.832 05:41:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@958 -- # kill -0 383704 00:25:53.832 05:41:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@959 -- # uname 00:25:53.832 05:41:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:53.832 05:41:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 383704 00:25:53.832 05:41:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:53.832 05:41:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:53.832 05:41:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@972 -- # echo 'killing process with pid 383704' 
00:25:53.832 killing process with pid 383704 05:41:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@973 -- # kill 383704 00:25:53.832 05:41:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@978 -- # wait 383704 00:25:54.401 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:54.401 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:54.401 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:54.401 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@297 -- # iptr 00:25:54.401 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # iptables-save 00:25:54.401 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:54.401 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # iptables-restore 00:25:54.401 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:54.401 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:54.401 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:54.401 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:54.401 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:56.306 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:56.306 00:25:56.306 real 1m11.344s 00:25:56.306 user 4m19.356s 00:25:56.306 sys 0m16.522s 00:25:56.306 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:56.306 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:56.306 ************************************ 00:25:56.306 END TEST nvmf_multiconnection 00:25:56.306 ************************************
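Condensed, the teardown traced through this section is one disconnect/poll/delete pass per subsystem. The bash sketch below restates what multiconnection.sh lines 37-40 and the waitforserial_disconnect helper do, as seen in the xtrace records; the 20-attempt/1-second polling bounds are assumptions, since the helper's real limits are not visible in this log.

for i in $(seq 1 "$NVMF_SUBSYS"); do                  # NVMF_SUBSYS=11 in this run
    nvme disconnect -n "nqn.2016-06.io.spdk:cnode$i"
    # waitforserial_disconnect: poll until no block device reports serial SPDK$i
    for try in $(seq 1 20); do
        lsblk -l -o NAME,SERIAL | grep -q -w "SPDK$i" || break
        sleep 1
    done
    rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"   # SPDK JSON-RPC wrapper
done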
00:25:56.307 05:41:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@50 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:25:56.307 05:41:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:56.307 05:41:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:56.307 05:41:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:56.307 ************************************ 00:25:56.307 START TEST nvmf_initiator_timeout 00:25:56.307 ************************************ 00:25:56.307 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:25:56.567 * Looking for test storage... 00:25:56.567 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:56.567 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1711 -- # lcov --version 00:25:56.567 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:56.567 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:56.567 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:56.567 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:56.567 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:56.567 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:25:56.567 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:25:56.567 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:25:56.567 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:25:56.567 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:25:56.567 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:25:56.567 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:25:56.567 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:56.567 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@344 -- # case "$op" in 00:25:56.567 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@345 -- # : 1 00:25:56.567 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:56.567 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ?
ver1_l : ver2_l) )) 00:25:56.567 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # decimal 1 00:25:56.567 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=1 00:25:56.567 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:56.567 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 1 00:25:56.567 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:25:56.567 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # decimal 2 00:25:56.567 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=2 00:25:56.567 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:56.567 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 2 00:25:56.567 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:25:56.567 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:56.567 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:56.567 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # return 0 00:25:56.567 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:56.567 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:56.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:56.567 --rc genhtml_branch_coverage=1 00:25:56.567 --rc genhtml_function_coverage=1 00:25:56.567 --rc genhtml_legend=1 00:25:56.567 --rc geninfo_all_blocks=1 00:25:56.567 --rc geninfo_unexecuted_blocks=1 00:25:56.567 00:25:56.567 ' 00:25:56.567 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:56.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:56.567 --rc genhtml_branch_coverage=1 00:25:56.567 --rc genhtml_function_coverage=1 00:25:56.567 --rc genhtml_legend=1 00:25:56.567 --rc geninfo_all_blocks=1 00:25:56.567 --rc geninfo_unexecuted_blocks=1 00:25:56.567 00:25:56.567 ' 00:25:56.567 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:56.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:56.567 --rc genhtml_branch_coverage=1 00:25:56.567 --rc genhtml_function_coverage=1 00:25:56.567 --rc genhtml_legend=1 00:25:56.567 --rc geninfo_all_blocks=1 00:25:56.567 --rc geninfo_unexecuted_blocks=1 00:25:56.567 00:25:56.567 ' 00:25:56.567 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:56.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:56.567 --rc genhtml_branch_coverage=1 00:25:56.567 --rc genhtml_function_coverage=1 00:25:56.567 --rc genhtml_legend=1 00:25:56.567 --rc geninfo_all_blocks=1 00:25:56.567 --rc geninfo_unexecuted_blocks=1 00:25:56.567 00:25:56.567 '
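The lt 1.15 2 trace above is scripts/common.sh comparing the installed lcov version to 2 field by field, splitting both version strings on '.', '-' and ':'. Reduced to its core, the traced logic behaves like this sketch (a paraphrase, not the verbatim SPDK helper; missing fields count as 0):

lt() { cmp_versions "$1" '<' "$2"; }
cmp_versions() {
    local -a ver1 ver2
    local op=$2 v
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$3"
    # walk the longer of the two field lists
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        if (( ${ver1[v]:-0} > ${ver2[v]:-0} )); then [[ $op == '>' || $op == '>=' ]]; return; fi
        if (( ${ver1[v]:-0} < ${ver2[v]:-0} )); then [[ $op == '<' || $op == '<=' ]]; return; fi
    done
    [[ $op == '==' || $op == '<=' || $op == '>=' ]]   # every field compared equal
}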
00:25:56.567 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:56.567 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:25:56.567 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:56.567 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:56.567 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:56.567 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:56.567 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:56.567 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:56.567 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:56.567 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:56.567 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:56.567 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:56.567 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:25:56.567 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:25:56.567 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:56.567 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:56.567 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:56.567 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:56.567 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:56.567 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:25:56.567 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:56.567 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:56.568 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:56.568 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@2 -- #
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:56.568 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:56.568 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:56.568 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:25:56.568 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:56.568 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # : 0 00:25:56.568 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:56.568 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:56.568 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:56.568 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:56.568 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:56.568 05:41:56 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:56.568 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:56.568 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:56.568 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:56.568 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:56.568 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:56.568 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:56.568 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:25:56.568 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:56.568 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:56.568 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:56.568 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:56.568 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:56.568 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:56.568 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:56.568 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:56.568 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:56.568 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:56.568 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@309 -- # xtrace_disable 00:25:56.568 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:03.139 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:03.139 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # pci_devs=() 00:26:03.139 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:03.139 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:03.139 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:03.139 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:03.139 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:03.139 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # net_devs=() 00:26:03.139 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:03.139 05:42:02 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # e810=() 00:26:03.139 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # local -ga e810 00:26:03.139 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # x722=() 00:26:03.139 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # local -ga x722 00:26:03.139 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # mlx=() 00:26:03.139 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # local -ga mlx 00:26:03.139 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:03.139 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:03.139 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:03.139 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:03.139 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:03.139 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:03.139 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:03.139 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:03.140 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:03.140 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:03.140 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:03.140 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:03.140 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:03.140 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:03.140 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:03.140 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:03.140 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:03.140 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:03.140 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:03.140 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:26:03.140 Found 0000:af:00.0 (0x8086 - 0x159b) 00:26:03.140 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:03.140 05:42:02 
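
The array appends above bucket every candidate NIC by its PCI vendor:device pair, pulled from a pci_bus_cache map built earlier in nvmf/common.sh; 0x8086:0x159b, matched for both ports in this run, is an Intel E810 port driven by ice. A self-contained sketch of the same classification, rebuilt from sysfs rather than the harness's own cache:

    # Sketch: bucket NICs by "vendor:device", as the e810/x722/mlx arrays do.
    declare -A pci_bus_cache
    for dev in /sys/bus/pci/devices/*; do
        key="$(cat "$dev/vendor"):$(cat "$dev/device")"   # e.g. 0x8086:0x159b
        pci_bus_cache["$key"]+="${dev##*/} "
    done
    e810=(${pci_bus_cache["0x8086:0x159b"]:-})
    echo "E810 ports: ${e810[*]:-none}"
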
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:03.140 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:03.140 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:03.140 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:03.140 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:03.140 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:26:03.140 Found 0000:af:00.1 (0x8086 - 0x159b) 00:26:03.140 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:03.140 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:03.140 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:03.140 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:03.140 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:03.140 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:03.140 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:03.140 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:03.140 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:03.140 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:03.140 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:03.140 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:03.140 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:03.140 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:03.140 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:03.140 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:26:03.140 Found net devices under 0000:af:00.0: cvl_0_0 00:26:03.140 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:03.140 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:03.140 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:03.140 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:03.140 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:03.140 05:42:02 
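
Each matched port is then resolved to its kernel interface by globbing the device's net/ directory, exactly as the pci_net_devs lines above show; the ##*/ expansion strips the sysfs prefix, leaving names like cvl_0_0 and cvl_0_1 (the E810 ports as renamed on this rig). Standalone:

    # Sketch: PCI address -> network interface name(s), via sysfs.
    pci=0000:af:00.0
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    pci_net_devs=("${pci_net_devs[@]##*/}")   # keep only the interface names
    echo "net devices under $pci: ${pci_net_devs[*]}"
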
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:03.140 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:03.140 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:03.140 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:26:03.140 Found net devices under 0000:af:00.1: cvl_0_1 00:26:03.140 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:03.140 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:03.140 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # is_hw=yes 00:26:03.140 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:03.140 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:03.140 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:03.140 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:03.140 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:03.140 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:03.140 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:03.140 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:03.140 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:03.140 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:03.140 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:03.140 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:03.140 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:03.140 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:03.140 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:03.140 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:03.140 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:03.140 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:03.140 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:03.140 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:03.140 05:42:02 
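
nvmf_tcp_init turns the two physical ports into a point-to-point test bed: cvl_0_0 moves into a fresh namespace (cvl_0_0_ns_spdk) as the target at 10.0.0.2/24, while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1/24; as the next trace lines show, both links then come up, an iptables rule admits NVMe/TCP on port 4420, and a ping in each direction proves the path. Condensed, with this run's interface names (the real iptables call also tags the rule with an SPDK_NVMF comment for later cleanup):

    # Sketch of the topology built here: target NIC isolated in a namespace.
    ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2    # initiator -> target sanity check
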
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:03.140 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:03.140 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:03.140 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:03.140 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:03.140 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:03.140 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:03.140 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.331 ms 00:26:03.140 00:26:03.140 --- 10.0.0.2 ping statistics --- 00:26:03.140 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:03.140 rtt min/avg/max/mdev = 0.331/0.331/0.331/0.000 ms 00:26:03.140 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:03.140 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:03.140 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:26:03.140 00:26:03.140 --- 10.0.0.1 ping statistics --- 00:26:03.140 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:03.140 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:26:03.140 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:03.140 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # return 0 00:26:03.140 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:03.140 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:03.140 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:03.140 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:03.140 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:03.140 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:03.140 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:03.141 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:26:03.141 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:03.141 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:03.141 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:03.141 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@509 -- # nvmfpid=396762 00:26:03.141 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:03.141 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@510 -- # waitforlisten 396762 00:26:03.141 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@835 -- # '[' -z 396762 ']' 00:26:03.141 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:03.141 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:03.141 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:03.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:03.141 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:03.141 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:03.141 [2024-12-13 05:42:02.565881] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:26:03.141 [2024-12-13 05:42:02.565926] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:03.141 [2024-12-13 05:42:02.640742] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:03.141 [2024-12-13 05:42:02.663713] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:03.141 [2024-12-13 05:42:02.663750] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:03.141 [2024-12-13 05:42:02.663758] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:03.141 [2024-12-13 05:42:02.663764] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:03.141 [2024-12-13 05:42:02.663769] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
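
With the namespace in place, the target binary itself is launched inside it (ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xF), and waitforlisten blocks until the RPC socket answers; the SPDK/DPDK initialization banner above and the four reactor lines just below confirm cores 0-3 came up. A reduced sketch of that launch-and-wait pattern, using rpc.py spdk_get_version as the liveness probe (the harness's waitforlisten does more retries and checks than this):

    # Sketch: start nvmf_tgt in the target namespace, wait for its RPC socket.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    until ./scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version &>/dev/null; do
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt died" >&2; exit 1; }
        sleep 0.5
    done
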
00:26:03.141 [2024-12-13 05:42:02.668468] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:26:03.141 [2024-12-13 05:42:02.668496] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:26:03.141 [2024-12-13 05:42:02.668552] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:26:03.141 [2024-12-13 05:42:02.668554] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:26:03.141 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:03.141 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@868 -- # return 0 00:26:03.141 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:03.141 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:03.141 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:03.141 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:03.141 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:26:03.141 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:03.141 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.141 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:03.141 Malloc0 00:26:03.141 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.141 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:26:03.141 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.141 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:03.141 Delay0 00:26:03.141 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.141 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:03.141 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.141 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:03.141 [2024-12-13 05:42:02.849204] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:03.141 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.141 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:26:03.141 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.141 05:42:02 
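
The rpc_cmd calls above and just below provision the whole target in six steps: a 64 MiB / 512 B-block malloc bdev, a delay bdev wrapping it with 30 us average and tail latencies on both reads and writes, the TCP transport, subsystem cnode1, the Delay0 namespace, and a listener on 10.0.0.2:4420. Spelled out as the direct rpc.py calls that rpc_cmd wraps:

    # Sketch: the provisioning chain, one rpc.py call per rpc_cmd traced here.
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 \
        -r 30 -t 30 -w 30 -n 30        # avg/p99 read+write latency, usec
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDKISFASTANDAWESOME
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420

The initiator side then runs nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420, and waitforserial polls lsblk until a block device with serial SPDKISFASTANDAWESOME appears.
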
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:03.141 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.141 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:03.141 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.141 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:03.141 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.141 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:03.141 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.141 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:03.141 [2024-12-13 05:42:02.878421] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:03.141 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.141 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:26:04.079 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:26:04.079 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1202 -- # local i=0 00:26:04.079 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:04.079 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:04.079 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1209 -- # sleep 2 00:26:06.613 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:06.613 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:06.613 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:26:06.613 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:06.613 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:06.613 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1212 -- # return 0 00:26:06.613 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=397286 00:26:06.613 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 
-t write -r 60 -v 00:26:06.613 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:26:06.613 [global] 00:26:06.613 thread=1 00:26:06.613 invalidate=1 00:26:06.613 rw=write 00:26:06.613 time_based=1 00:26:06.613 runtime=60 00:26:06.613 ioengine=libaio 00:26:06.613 direct=1 00:26:06.613 bs=4096 00:26:06.613 iodepth=1 00:26:06.613 norandommap=0 00:26:06.613 numjobs=1 00:26:06.613 00:26:06.613 verify_dump=1 00:26:06.613 verify_backlog=512 00:26:06.613 verify_state_save=0 00:26:06.613 do_verify=1 00:26:06.613 verify=crc32c-intel 00:26:06.613 [job0] 00:26:06.613 filename=/dev/nvme0n1 00:26:06.613 Could not set queue depth (nvme0n1) 00:26:06.613 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:26:06.613 fio-3.35 00:26:06.613 Starting 1 thread 00:26:09.149 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:26:09.149 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.149 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:09.149 true 00:26:09.149 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.149 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:26:09.149 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.149 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:09.149 true 00:26:09.149 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.149 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:26:09.149 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.149 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:09.149 true 00:26:09.149 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.149 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:26:09.149 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.149 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:09.149 true 00:26:09.149 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.149 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:26:12.439 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:26:12.439 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.439 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
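
This is the test's actual provocation: with the fio job mid-run, bdev_delay_update_latency raises Delay0's latencies to 31,000,000 us (~31 s; p99_write even to 310,000,000 in this run), past the initiator's default 30 s command timeout, holds that for three seconds, and then, in the trace continuing below, restores everything to 30 us so the job can finish. The long latency tail in the fio summary and the job's clean exit are exactly what the test asserts. Condensed, with a uniform 31 s raise:

    # Sketch: push the delay bdev past the 30 s initiator timeout, then restore.
    for lat in avg_read avg_write p99_read p99_write; do
        ./scripts/rpc.py bdev_delay_update_latency Delay0 "$lat" 31000000  # ~31 s
    done
    sleep 3
    for lat in avg_read avg_write p99_read p99_write; do
        ./scripts/rpc.py bdev_delay_update_latency Delay0 "$lat" 30        # 30 us
    done
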
common/autotest_common.sh@10 -- # set +x 00:26:12.439 true 00:26:12.439 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.439 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:26:12.439 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.439 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:12.439 true 00:26:12.439 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.439 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:26:12.439 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.439 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:12.439 true 00:26:12.439 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.439 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:26:12.439 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.439 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:12.439 true 00:26:12.439 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.439 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:26:12.439 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 397286 00:27:08.678 00:27:08.678 job0: (groupid=0, jobs=1): err= 0: pid=397556: Fri Dec 13 05:43:06 2024 00:27:08.678 read: IOPS=42, BW=169KiB/s (173kB/s)(9.91MiB/60023msec) 00:27:08.678 slat (usec): min=7, max=10750, avg=15.05, stdev=213.35 00:27:08.678 clat (usec): min=193, max=41677k, avg=23426.39, stdev=827612.08 00:27:08.678 lat (usec): min=201, max=41677k, avg=23441.43, stdev=827612.25 00:27:08.678 clat percentiles (usec): 00:27:08.678 | 1.00th=[ 204], 5.00th=[ 217], 10.00th=[ 223], 00:27:08.678 | 20.00th=[ 231], 30.00th=[ 235], 40.00th=[ 239], 00:27:08.678 | 50.00th=[ 243], 60.00th=[ 249], 70.00th=[ 255], 00:27:08.678 | 80.00th=[ 265], 90.00th=[ 41157], 95.00th=[ 41157], 00:27:08.678 | 99.00th=[ 41157], 99.50th=[ 41157], 99.90th=[ 42730], 00:27:08.678 | 99.95th=[ 44827], 99.99th=[17112761] 00:27:08.678 write: IOPS=42, BW=171KiB/s (175kB/s)(10.0MiB/60023msec); 0 zone resets 00:27:08.678 slat (usec): min=10, max=26991, avg=22.81, stdev=533.23 00:27:08.678 clat (usec): min=150, max=1401, avg=194.64, stdev=42.26 00:27:08.678 lat (usec): min=162, max=27277, avg=217.45, stdev=536.70 00:27:08.678 clat percentiles (usec): 00:27:08.678 | 1.00th=[ 157], 5.00th=[ 163], 10.00th=[ 165], 20.00th=[ 169], 00:27:08.678 | 30.00th=[ 176], 40.00th=[ 180], 50.00th=[ 184], 60.00th=[ 188], 00:27:08.678 | 70.00th=[ 194], 80.00th=[ 202], 90.00th=[ 245], 95.00th=[ 285], 00:27:08.678 | 99.00th=[ 306], 99.50th=[ 310], 99.90th=[ 326], 99.95th=[ 351], 00:27:08.678 | 99.99th=[ 1401] 
00:27:08.678 bw ( KiB/s): min= 3480, max= 8496, per=100.00%, avg=5120.00, stdev=2283.45, samples=4 00:27:08.678 iops : min= 870, max= 2124, avg=1280.00, stdev=570.86, samples=4 00:27:08.678 lat (usec) : 250=76.69%, 500=14.95%, 750=0.04% 00:27:08.678 lat (msec) : 2=0.04%, 4=0.02%, 50=8.24%, >=2000=0.02% 00:27:08.678 cpu : usr=0.09%, sys=0.15%, ctx=5100, majf=0, minf=1 00:27:08.678 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:08.678 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:08.678 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:08.678 issued rwts: total=2536,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:08.678 latency : target=0, window=0, percentile=100.00%, depth=1 00:27:08.678 00:27:08.678 Run status group 0 (all jobs): 00:27:08.678 READ: bw=169KiB/s (173kB/s), 169KiB/s-169KiB/s (173kB/s-173kB/s), io=9.91MiB (10.4MB), run=60023-60023msec 00:27:08.678 WRITE: bw=171KiB/s (175kB/s), 171KiB/s-171KiB/s (175kB/s-175kB/s), io=10.0MiB (10.5MB), run=60023-60023msec 00:27:08.678 00:27:08.678 Disk stats (read/write): 00:27:08.678 nvme0n1: ios=2586/2560, merge=0/0, ticks=18906/468, in_queue=19374, util=99.80% 00:27:08.678 05:43:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:27:08.678 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:27:08.678 05:43:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:27:08.678 05:43:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1223 -- # local i=0 00:27:08.678 05:43:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:08.678 05:43:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:27:08.678 05:43:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:08.678 05:43:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:27:08.678 05:43:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1235 -- # return 0 00:27:08.678 05:43:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:27:08.678 05:43:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:27:08.678 nvmf hotplug test: fio successful as expected 00:27:08.678 05:43:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:08.678 05:43:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.678 05:43:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:08.678 05:43:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.678 05:43:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:27:08.678 05:43:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT 
SIGTERM EXIT 00:27:08.678 05:43:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:27:08.678 05:43:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:08.678 05:43:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # sync 00:27:08.678 05:43:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:08.678 05:43:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set +e 00:27:08.678 05:43:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:08.678 05:43:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:08.678 rmmod nvme_tcp 00:27:08.678 rmmod nvme_fabrics 00:27:08.678 rmmod nvme_keyring 00:27:08.678 05:43:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:08.678 05:43:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@128 -- # set -e 00:27:08.678 05:43:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@129 -- # return 0 00:27:08.678 05:43:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@517 -- # '[' -n 396762 ']' 00:27:08.678 05:43:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@518 -- # killprocess 396762 00:27:08.678 05:43:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # '[' -z 396762 ']' 00:27:08.678 05:43:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@958 -- # kill -0 396762 00:27:08.678 05:43:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@959 -- # uname 00:27:08.678 05:43:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:08.678 05:43:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 396762 00:27:08.678 05:43:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:08.678 05:43:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:08.678 05:43:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 396762' 00:27:08.678 killing process with pid 396762 00:27:08.678 05:43:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@973 -- # kill 396762 00:27:08.678 05:43:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@978 -- # wait 396762 00:27:08.678 05:43:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:08.678 05:43:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:08.678 05:43:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:08.678 05:43:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # iptr 00:27:08.678 05:43:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # iptables-save 00:27:08.678 05:43:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:08.678 05:43:06 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # iptables-restore 00:27:08.678 05:43:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:08.678 05:43:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:08.678 05:43:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:08.678 05:43:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:08.678 05:43:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:09.247 05:43:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:09.247 00:27:09.247 real 1m12.780s 00:27:09.247 user 4m22.388s 00:27:09.247 sys 0m6.328s 00:27:09.247 05:43:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:09.247 05:43:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:09.247 ************************************ 00:27:09.247 END TEST nvmf_initiator_timeout 00:27:09.247 ************************************ 00:27:09.247 05:43:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:27:09.247 05:43:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:27:09.247 05:43:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:27:09.247 05:43:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:27:09.247 05:43:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:27:15.819 05:43:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:15.819 05:43:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:27:15.819 05:43:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:15.819 05:43:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:15.819 05:43:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:15.819 05:43:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:15.819 05:43:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:15.819 05:43:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:27:15.819 05:43:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:15.819 05:43:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:27:15.819 05:43:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:27:15.819 05:43:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:27:15.819 05:43:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:27:15.819 05:43:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:27:15.819 05:43:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:27:15.819 05:43:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:15.819 05:43:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:15.819 05:43:14 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:15.819 05:43:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:15.819 05:43:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:15.819 05:43:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:15.819 05:43:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:15.819 05:43:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:15.819 05:43:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:15.819 05:43:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:15.819 05:43:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:15.819 05:43:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:15.819 05:43:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:15.819 05:43:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:15.819 05:43:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:15.819 05:43:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:15.819 05:43:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:15.819 05:43:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:15.819 05:43:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:15.819 05:43:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:27:15.819 Found 0000:af:00.0 (0x8086 - 0x159b) 00:27:15.819 05:43:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:15.819 05:43:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:15.819 05:43:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:15.819 05:43:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:15.819 05:43:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:15.819 05:43:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:15.819 05:43:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:27:15.819 Found 0000:af:00.1 (0x8086 - 0x159b) 00:27:15.819 05:43:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:15.819 05:43:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:15.819 05:43:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:15.819 05:43:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:15.819 05:43:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:15.819 05:43:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:15.819 05:43:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:15.819 05:43:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # 
[[ tcp == rdma ]] 00:27:15.819 05:43:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:15.819 05:43:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:15.819 05:43:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:15.819 05:43:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:15.819 05:43:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:15.819 05:43:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:15.819 05:43:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:15.819 05:43:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:27:15.819 Found net devices under 0000:af:00.0: cvl_0_0 00:27:15.819 05:43:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:15.819 05:43:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:15.819 05:43:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:15.819 05:43:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:15.819 05:43:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:15.819 05:43:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:15.819 05:43:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:15.819 05:43:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:15.819 05:43:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:15.819 Found net devices under 0000:af:00.1: cvl_0_1 00:27:15.819 05:43:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:15.819 05:43:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:15.819 05:43:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:15.819 05:43:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:27:15.819 05:43:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:27:15.819 05:43:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:15.820 05:43:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:15.820 05:43:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:27:15.820 ************************************ 00:27:15.820 START TEST nvmf_perf_adq 00:27:15.820 ************************************ 00:27:15.820 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:27:15.820 * Looking for test storage... 
00:27:15.820 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:15.820 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:15.820 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lcov --version 00:27:15.820 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:15.820 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:15.820 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:15.820 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:15.820 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:15.820 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:27:15.820 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:27:15.820 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:27:15.820 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:27:15.820 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:27:15.820 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:27:15.820 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:27:15.820 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:15.820 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:27:15.820 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:27:15.820 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:15.820 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:15.820 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:27:15.820 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:27:15.820 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:15.820 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:27:15.820 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:27:15.820 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:27:15.820 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:27:15.820 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:15.820 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:27:15.820 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:27:15.820 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:15.820 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:15.820 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:27:15.820 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:15.820 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:15.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:15.820 --rc genhtml_branch_coverage=1 00:27:15.820 --rc genhtml_function_coverage=1 00:27:15.820 --rc genhtml_legend=1 00:27:15.820 --rc geninfo_all_blocks=1 00:27:15.820 --rc geninfo_unexecuted_blocks=1 00:27:15.820 00:27:15.820 ' 00:27:15.820 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:15.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:15.820 --rc genhtml_branch_coverage=1 00:27:15.820 --rc genhtml_function_coverage=1 00:27:15.820 --rc genhtml_legend=1 00:27:15.820 --rc geninfo_all_blocks=1 00:27:15.820 --rc geninfo_unexecuted_blocks=1 00:27:15.820 00:27:15.820 ' 00:27:15.820 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:15.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:15.820 --rc genhtml_branch_coverage=1 00:27:15.820 --rc genhtml_function_coverage=1 00:27:15.820 --rc genhtml_legend=1 00:27:15.820 --rc geninfo_all_blocks=1 00:27:15.820 --rc geninfo_unexecuted_blocks=1 00:27:15.820 00:27:15.820 ' 00:27:15.820 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:15.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:15.820 --rc genhtml_branch_coverage=1 00:27:15.820 --rc genhtml_function_coverage=1 00:27:15.820 --rc genhtml_legend=1 00:27:15.820 --rc geninfo_all_blocks=1 00:27:15.820 --rc geninfo_unexecuted_blocks=1 00:27:15.820 00:27:15.820 ' 00:27:15.820 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:15.820 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 
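
The lcov version gate traced above splits dotted version strings on the characters .-: and compares them component by component; lt 1.15 2 is true because 1 < 2 already decides in the first component. A compact sketch of that comparison, assuming numeric components throughout (the real cmp_versions in scripts/common.sh handles more operators):

    # Sketch: component-wise dotted-version "less than".
    lt() {
        local -a v1 v2; local i
        IFS='.-:' read -ra v1 <<< "$1"
        IFS='.-:' read -ra v2 <<< "$2"
        for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1    # equal is not less-than
    }
    lt 1.15 2 && echo "1.15 < 2"
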
00:27:15.820 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:15.820 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:15.820 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:15.820 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:15.820 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:15.820 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:15.820 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:15.820 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:15.820 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:15.820 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:15.820 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:27:15.820 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:27:15.820 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:15.820 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:15.820 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:15.820 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:15.820 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:15.820 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:27:15.820 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:15.820 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:15.820 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:15.820 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:15.820 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:15.820 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:15.820 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:27:15.820 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:15.820 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:27:15.820 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:15.820 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:15.820 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:15.820 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:15.820 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:15.820 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:15.820 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:15.820 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:15.820 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:15.820 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:15.821 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:27:15.821 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:27:15.821 05:43:14 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:21.100 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:21.100 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:27:21.100 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:21.100 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:21.100 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:21.100 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:21.100 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:21.100 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:27:21.100 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:21.100 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:27:21.100 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:27:21.100 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:27:21.100 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:27:21.100 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:27:21.100 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:27:21.100 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:21.100 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:21.100 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:21.100 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:21.100 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:21.100 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:21.100 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:21.100 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:21.100 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:21.100 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:21.100 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:21.100 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:21.100 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:21.100 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:21.100 05:43:20 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:21.100 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:21.100 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:21.100 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:21.100 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:21.100 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:27:21.100 Found 0000:af:00.0 (0x8086 - 0x159b) 00:27:21.100 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:21.100 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:21.100 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:21.100 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:21.100 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:21.100 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:21.100 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:27:21.100 Found 0000:af:00.1 (0x8086 - 0x159b) 00:27:21.100 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:21.100 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:21.100 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:21.100 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:21.100 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:21.100 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:21.100 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:21.100 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:21.100 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:21.100 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:21.100 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:21.100 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:21.100 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:21.100 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:21.100 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:21.100 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:27:21.100 Found net devices under 0000:af:00.0: cvl_0_0 00:27:21.100 05:43:20 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:21.100 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:21.100 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:21.100 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:21.100 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:21.100 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:21.100 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:21.100 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:21.100 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:21.100 Found net devices under 0000:af:00.1: cvl_0_1 00:27:21.100 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:21.100 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:21.100 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:21.100 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:27:21.100 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:27:21.100 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:27:21.100 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:27:21.100 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:27:21.670 05:43:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:27:25.867 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:27:31.145 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:27:31.145 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:31.145 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:31.145 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:31.145 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:31.145 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:31.145 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:31.145 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:31.145 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:31.145 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:31.145 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:27:31.145 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:27:31.145 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:31.145 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:31.145 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:27:31.145 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:31.145 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:31.145 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:31.145 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:31.145 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:31.145 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:27:31.145 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:31.145 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:27:31.145 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:27:31.145 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:27:31.145 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:27:31.145 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:27:31.145 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:27:31.145 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:31.145 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:31.145 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:31.145 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:31.145 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:31.145 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:31.145 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:31.145 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:31.145 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:31.145 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:31.145 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:31.145 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:31.145 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:27:31.145 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:31.145 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:31.145 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:31.145 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:31.145 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:31.145 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:31.145 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:27:31.145 Found 0000:af:00.0 (0x8086 - 0x159b) 00:27:31.145 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:31.145 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:31.145 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:31.145 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:31.145 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:31.145 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:31.145 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:27:31.145 Found 0000:af:00.1 (0x8086 - 0x159b) 00:27:31.145 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:31.145 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:31.145 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:31.145 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:31.145 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:31.145 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:31.145 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:31.145 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:31.145 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:31.145 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:31.145 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:31.145 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:31.145 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:31.145 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:31.145 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:31.145 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 
'Found net devices under 0000:af:00.0: cvl_0_0' 00:27:31.145 Found net devices under 0000:af:00.0: cvl_0_0 00:27:31.145 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:31.145 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:31.145 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:31.145 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:31.145 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:31.145 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:31.145 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:31.145 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:31.145 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:31.145 Found net devices under 0000:af:00.1: cvl_0_1 00:27:31.145 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:31.145 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:31.145 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:27:31.145 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:31.145 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:31.145 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:31.145 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:31.145 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:31.145 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:31.145 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:31.145 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:31.145 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:31.145 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:31.145 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:31.145 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:31.145 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:31.145 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:31.145 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:31.145 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:31.145 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:31.145 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:31.145 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:31.145 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:31.145 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:31.145 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:31.145 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:31.146 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:31.146 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:31.146 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:31.146 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:31.146 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.677 ms 00:27:31.146 00:27:31.146 --- 10.0.0.2 ping statistics --- 00:27:31.146 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:31.146 rtt min/avg/max/mdev = 0.677/0.677/0.677/0.000 ms 00:27:31.146 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:31.146 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:31.146 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.192 ms 00:27:31.146 00:27:31.146 --- 10.0.0.1 ping statistics --- 00:27:31.146 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:31.146 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:27:31.146 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:31.146 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:27:31.146 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:31.146 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:31.146 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:31.146 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:31.146 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:31.146 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:31.146 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:31.146 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:27:31.146 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:31.146 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:31.146 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:31.146 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=415710 00:27:31.146 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 415710 00:27:31.146 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:27:31.146 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 415710 ']' 00:27:31.146 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:31.146 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:31.146 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:31.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:31.146 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:31.146 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:31.146 [2024-12-13 05:43:30.563164] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
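At this point the harness has started nvmf_tgt inside the cvl_0_0_ns_spdk namespace with --wait-for-rpc and is blocking in waitforlisten until the RPC socket answers. A minimal sketch of that launch-and-wait pattern, assuming the default /var/tmp/spdk.sock socket and using rpc_get_methods purely as a liveness probe (paths abbreviated):

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -m 0xF --wait-for-rpc &
    nvmfpid=$!
    # Poll the UNIX-domain RPC socket until the target is up, bailing out
    # early if the process dies first.
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" 2>/dev/null || { echo 'nvmf_tgt exited early' >&2; exit 1; }
        sleep 0.5
    done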
00:27:31.146 [2024-12-13 05:43:30.563204] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:31.146 [2024-12-13 05:43:30.638432] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:31.146 [2024-12-13 05:43:30.661349] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:31.146 [2024-12-13 05:43:30.661384] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:31.146 [2024-12-13 05:43:30.661391] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:31.146 [2024-12-13 05:43:30.661397] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:31.146 [2024-12-13 05:43:30.661402] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:31.146 [2024-12-13 05:43:30.662820] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:27:31.146 [2024-12-13 05:43:30.662926] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:27:31.146 [2024-12-13 05:43:30.663045] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:27:31.146 [2024-12-13 05:43:30.663045] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:27:31.146 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:31.146 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:27:31.146 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:31.146 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:31.146 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:31.146 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:31.146 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:27:31.146 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:27:31.146 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:27:31.146 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.146 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:31.146 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.146 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:27:31.146 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:27:31.146 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.146 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:31.146 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.146 
05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:27:31.146 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.146 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:31.146 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.146 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:27:31.146 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.146 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:31.146 [2024-12-13 05:43:30.883164] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:31.146 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.146 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:31.146 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.146 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:31.146 Malloc1 00:27:31.146 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.146 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:31.146 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.146 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:31.146 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.146 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:31.146 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.146 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:31.146 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.146 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:31.146 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.146 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:31.146 [2024-12-13 05:43:30.943437] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:31.146 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.146 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=415745 00:27:31.146 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:27:31.146 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:33.053 05:43:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:27:33.053 05:43:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.053 05:43:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:33.053 05:43:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.053 05:43:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:27:33.053 "tick_rate": 2100000000, 00:27:33.053 "poll_groups": [ 00:27:33.053 { 00:27:33.053 "name": "nvmf_tgt_poll_group_000", 00:27:33.053 "admin_qpairs": 1, 00:27:33.053 "io_qpairs": 1, 00:27:33.053 "current_admin_qpairs": 1, 00:27:33.053 "current_io_qpairs": 1, 00:27:33.053 "pending_bdev_io": 0, 00:27:33.053 "completed_nvme_io": 19810, 00:27:33.053 "transports": [ 00:27:33.053 { 00:27:33.053 "trtype": "TCP" 00:27:33.053 } 00:27:33.053 ] 00:27:33.053 }, 00:27:33.053 { 00:27:33.053 "name": "nvmf_tgt_poll_group_001", 00:27:33.053 "admin_qpairs": 0, 00:27:33.053 "io_qpairs": 1, 00:27:33.053 "current_admin_qpairs": 0, 00:27:33.053 "current_io_qpairs": 1, 00:27:33.053 "pending_bdev_io": 0, 00:27:33.053 "completed_nvme_io": 19861, 00:27:33.053 "transports": [ 00:27:33.053 { 00:27:33.053 "trtype": "TCP" 00:27:33.053 } 00:27:33.053 ] 00:27:33.053 }, 00:27:33.053 { 00:27:33.053 "name": "nvmf_tgt_poll_group_002", 00:27:33.053 "admin_qpairs": 0, 00:27:33.053 "io_qpairs": 1, 00:27:33.053 "current_admin_qpairs": 0, 00:27:33.053 "current_io_qpairs": 1, 00:27:33.053 "pending_bdev_io": 0, 00:27:33.053 "completed_nvme_io": 20166, 00:27:33.053 "transports": [ 00:27:33.053 { 00:27:33.053 "trtype": "TCP" 00:27:33.053 } 00:27:33.053 ] 00:27:33.053 }, 00:27:33.053 { 00:27:33.053 "name": "nvmf_tgt_poll_group_003", 00:27:33.053 "admin_qpairs": 0, 00:27:33.053 "io_qpairs": 1, 00:27:33.053 "current_admin_qpairs": 0, 00:27:33.053 "current_io_qpairs": 1, 00:27:33.053 "pending_bdev_io": 0, 00:27:33.053 "completed_nvme_io": 19862, 00:27:33.053 "transports": [ 00:27:33.053 { 00:27:33.053 "trtype": "TCP" 00:27:33.053 } 00:27:33.053 ] 00:27:33.053 } 00:27:33.053 ] 00:27:33.053 }' 00:27:33.053 05:43:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:27:33.053 05:43:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:27:33.053 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:27:33.053 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:27:33.053 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 415745 00:27:41.183 Initializing NVMe Controllers 00:27:41.183 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:41.183 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:27:41.183 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:27:41.183 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:27:41.183 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 
7 00:27:41.183 Initialization complete. Launching workers. 00:27:41.183 ======================================================== 00:27:41.183 Latency(us) 00:27:41.183 Device Information : IOPS MiB/s Average min max 00:27:41.183 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10219.90 39.92 6263.56 2080.56 10358.56 00:27:41.183 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10300.90 40.24 6214.46 2287.25 12832.24 00:27:41.183 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10488.70 40.97 6103.19 2344.72 10859.87 00:27:41.183 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10375.80 40.53 6169.85 2153.56 11309.53 00:27:41.183 ======================================================== 00:27:41.183 Total : 41385.29 161.66 6187.20 2080.56 12832.24 00:27:41.183 00:27:41.183 05:43:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:27:41.183 05:43:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:41.183 05:43:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:27:41.183 05:43:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:41.183 05:43:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:27:41.183 05:43:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:41.183 05:43:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:41.183 rmmod nvme_tcp 00:27:41.183 rmmod nvme_fabrics 00:27:41.183 rmmod nvme_keyring 00:27:41.183 05:43:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:41.183 05:43:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:27:41.183 05:43:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:27:41.183 05:43:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 415710 ']' 00:27:41.183 05:43:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 415710 00:27:41.183 05:43:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 415710 ']' 00:27:41.183 05:43:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 415710 00:27:41.183 05:43:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:27:41.183 05:43:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:41.183 05:43:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 415710 00:27:41.183 05:43:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:41.183 05:43:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:41.183 05:43:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 415710' 00:27:41.183 killing process with pid 415710 00:27:41.183 05:43:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 415710 00:27:41.183 05:43:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 415710 00:27:41.452 05:43:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:41.452 05:43:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:41.452 05:43:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:41.452 05:43:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:27:41.452 05:43:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:27:41.452 05:43:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:41.452 05:43:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:27:41.452 05:43:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:41.452 05:43:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:41.452 05:43:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:41.452 05:43:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:41.452 05:43:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:43.426 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:43.426 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:27:43.426 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:27:43.696 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:27:45.242 05:43:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:27:47.346 05:43:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:27:52.620 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:27:52.620 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:52.620 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:52.620 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:52.621 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:52.621 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:52.621 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:52.621 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:52.621 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:52.621 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:52.621 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:52.621 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:27:52.621 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:52.621 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 
mellanox=0x15b3 pci net_dev 00:27:52.621 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:27:52.621 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:52.621 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:52.621 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:52.621 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:52.621 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:52.621 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:27:52.621 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:52.621 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:27:52.621 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:27:52.621 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:27:52.621 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:27:52.621 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:27:52.621 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:27:52.621 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:52.621 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:52.621 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:52.621 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:52.621 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:52.621 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:52.621 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:52.621 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:52.621 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:52.621 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:52.621 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:52.621 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:52.621 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:52.621 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:52.621 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:52.621 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:52.621 05:43:52 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:52.621 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:52.621 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:52.621 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:27:52.621 Found 0000:af:00.0 (0x8086 - 0x159b) 00:27:52.621 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:52.621 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:52.621 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:52.621 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:52.621 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:52.621 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:52.621 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:27:52.621 Found 0000:af:00.1 (0x8086 - 0x159b) 00:27:52.621 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:52.621 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:52.621 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:52.621 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:52.621 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:52.621 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:52.621 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:52.621 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:52.621 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:52.621 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:52.621 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:52.621 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:52.621 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:52.621 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:52.621 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:52.621 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:27:52.621 Found net devices under 0000:af:00.0: cvl_0_0 00:27:52.621 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:52.621 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:52.621 05:43:52 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:52.621 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:52.621 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:52.621 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:52.621 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:52.621 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:52.621 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:52.621 Found net devices under 0000:af:00.1: cvl_0_1 00:27:52.621 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:52.621 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:52.621 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:27:52.621 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:52.621 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:52.621 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:52.621 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:52.621 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:52.621 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:52.621 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:52.621 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:52.621 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:52.621 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:52.621 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:52.621 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:52.621 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:52.621 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:52.621 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:52.621 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:52.621 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:52.621 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:52.621 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:52.621 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:52.621 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:52.621 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:52.621 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:52.621 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:52.621 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:52.621 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:52.621 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:52.621 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.527 ms 00:27:52.621 00:27:52.621 --- 10.0.0.2 ping statistics --- 00:27:52.621 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:52.621 rtt min/avg/max/mdev = 0.527/0.527/0.527/0.000 ms 00:27:52.621 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:52.621 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:52.621 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.205 ms 00:27:52.621 00:27:52.621 --- 10.0.0.1 ping statistics --- 00:27:52.621 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:52.621 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:27:52.621 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:52.621 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:27:52.621 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:52.621 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:52.621 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:52.621 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:52.621 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:52.621 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:52.621 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:52.621 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:27:52.621 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:27:52.621 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:27:52.621 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:27:52.621 net.core.busy_poll = 1 00:27:52.621 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 
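The nvmf_tcp_init sequence above builds the point-to-point test bed: one ice port is moved into a private network namespace as the target side, the other stays in the root namespace as the initiator, an iptables rule opens the NVMe/TCP listen port, and a ping in each direction proves the link. A condensed sketch of the same steps, using the interface names and addresses from the trace:

    # Target port lives in its own namespace; initiator port stays in the root ns.
    NS=cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"                          # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator IP
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    # Open the listen port on the initiator-facing interface, then verify both ways.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec "$NS" ping -c 1 10.0.0.1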
00:27:52.621 net.core.busy_read = 1 00:27:52.621 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:27:52.621 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:27:52.621 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:27:52.621 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:27:52.884 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:27:52.884 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:27:52.884 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:52.884 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:52.884 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:52.884 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=419576 00:27:52.884 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 419576 00:27:52.884 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:27:52.884 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 419576 ']' 00:27:52.884 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:52.884 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:52.884 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:52.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:52.884 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:52.884 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:52.884 [2024-12-13 05:43:52.750177] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:27:52.884 [2024-12-13 05:43:52.750221] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:52.884 [2024-12-13 05:43:52.826060] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:52.884 [2024-12-13 05:43:52.849616] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
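adq_configure_driver, whose commands are traced above, prepares the target port for ADQ: hardware TC offload on, the packet-inspect optimization off, busy polling enabled, then an mqprio qdisc that splits the queues into two traffic classes and a hardware flower filter that pins NVMe/TCP traffic for 10.0.0.2:4420 to TC 1 (the set_xps_rxqs helper then aligns XPS with the receive queues). The same steps as a standalone sketch; the 2@0 2@2 queue layout is what this box tests, not a requirement:

    DEV=cvl_0_0
    NS=cvl_0_0_ns_spdk
    ip netns exec "$NS" ethtool --offload "$DEV" hw-tc-offload on
    ip netns exec "$NS" ethtool --set-priv-flags "$DEV" channel-pkt-inspect-optimize off
    sysctl -w net.core.busy_poll=1
    sysctl -w net.core.busy_read=1
    # TC0 = queues 0-1 (default traffic), TC1 = queues 2-3 (the ADQ set).
    ip netns exec "$NS" tc qdisc add dev "$DEV" root mqprio \
        num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
    ip netns exec "$NS" tc qdisc add dev "$DEV" ingress
    # Steer the NVMe/TCP flow to TC1 in hardware (skip_sw).
    ip netns exec "$NS" tc filter add dev "$DEV" protocol ip parent ffff: prio 1 \
        flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1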
00:27:52.884 [2024-12-13 05:43:52.849655] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:52.884 [2024-12-13 05:43:52.849662] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:52.884 [2024-12-13 05:43:52.849669] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:52.884 [2024-12-13 05:43:52.849674] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:52.884 [2024-12-13 05:43:52.853471] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:27:52.884 [2024-12-13 05:43:52.853495] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:27:52.884 [2024-12-13 05:43:52.853600] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:27:52.884 [2024-12-13 05:43:52.853601] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:27:53.143 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:53.143 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:27:53.143 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:53.143 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:53.143 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:53.143 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:53.143 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:27:53.143 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:27:53.143 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:27:53.143 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.143 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:53.143 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.143 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:27:53.143 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:27:53.143 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.143 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:53.143 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.143 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:27:53.143 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.143 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:53.143 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.143 05:43:53 
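Because nvmf_tgt was started with --wait-for-rpc, adq_configure_nvmf_target can tune the socket implementation before the framework initializes: placement IDs let the target put each connection on the poll group matching its NIC queue, and zero-copy send is switched on as well. A sketch of the same RPCs via scripts/rpc.py (path and socket are assumptions; the trace goes through the rpc_cmd wrapper):

    RPC="./scripts/rpc.py -s /var/tmp/spdk.sock"             # assumed RPC endpoint
    impl=$($RPC sock_get_default_impl | jq -r .impl_name)    # "posix" in this run
    $RPC sock_impl_set_options -i "$impl" \
        --enable-placement-id 1 --enable-zerocopy-send-server
    $RPC framework_start_init                                # finish deferred startup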
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:27:53.143 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.143 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:53.143 [2024-12-13 05:43:53.082373] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:53.143 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.143 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:53.143 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.143 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:53.143 Malloc1 00:27:53.143 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.143 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:53.143 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.143 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:53.143 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.143 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:53.143 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.143 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:53.143 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.143 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:53.143 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.143 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:53.143 [2024-12-13 05:43:53.146780] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:53.143 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.143 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=419811 00:27:53.143 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:27:53.144 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:55.677 05:43:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:27:55.677 05:43:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.677 05:43:55 
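The rest of the bring-up is ordinary NVMe-oF provisioning, with the two ADQ-relevant transport knobs visible in the trace (--io-unit-size 8192, and --sock-priority 1, which the mqprio "map 0 1" sends to TC1): create the transport, back a subsystem with a 64 MiB malloc bdev, listen on the target IP, then aim spdk_nvme_perf at it from four dedicated cores. In sketch form:

    $RPC nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1
    $RPC bdev_malloc_create 64 512 -b Malloc1                # 64 MiB of 512 B blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # 4 initiator cores (0xF0), queue depth 64, 4 KiB random reads for 10 s.
    ./build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' &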
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:55.677 05:43:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.677 05:43:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:27:55.677 "tick_rate": 2100000000, 00:27:55.677 "poll_groups": [ 00:27:55.677 { 00:27:55.677 "name": "nvmf_tgt_poll_group_000", 00:27:55.677 "admin_qpairs": 1, 00:27:55.677 "io_qpairs": 4, 00:27:55.677 "current_admin_qpairs": 1, 00:27:55.677 "current_io_qpairs": 4, 00:27:55.677 "pending_bdev_io": 0, 00:27:55.677 "completed_nvme_io": 40646, 00:27:55.677 "transports": [ 00:27:55.677 { 00:27:55.677 "trtype": "TCP" 00:27:55.677 } 00:27:55.677 ] 00:27:55.677 }, 00:27:55.677 { 00:27:55.677 "name": "nvmf_tgt_poll_group_001", 00:27:55.677 "admin_qpairs": 0, 00:27:55.677 "io_qpairs": 0, 00:27:55.677 "current_admin_qpairs": 0, 00:27:55.677 "current_io_qpairs": 0, 00:27:55.677 "pending_bdev_io": 0, 00:27:55.677 "completed_nvme_io": 0, 00:27:55.677 "transports": [ 00:27:55.677 { 00:27:55.677 "trtype": "TCP" 00:27:55.677 } 00:27:55.677 ] 00:27:55.677 }, 00:27:55.677 { 00:27:55.677 "name": "nvmf_tgt_poll_group_002", 00:27:55.677 "admin_qpairs": 0, 00:27:55.677 "io_qpairs": 0, 00:27:55.677 "current_admin_qpairs": 0, 00:27:55.677 "current_io_qpairs": 0, 00:27:55.677 "pending_bdev_io": 0, 00:27:55.677 "completed_nvme_io": 0, 00:27:55.677 "transports": [ 00:27:55.677 { 00:27:55.677 "trtype": "TCP" 00:27:55.677 } 00:27:55.677 ] 00:27:55.677 }, 00:27:55.677 { 00:27:55.677 "name": "nvmf_tgt_poll_group_003", 00:27:55.677 "admin_qpairs": 0, 00:27:55.677 "io_qpairs": 0, 00:27:55.677 "current_admin_qpairs": 0, 00:27:55.677 "current_io_qpairs": 0, 00:27:55.677 "pending_bdev_io": 0, 00:27:55.677 "completed_nvme_io": 0, 00:27:55.677 "transports": [ 00:27:55.677 { 00:27:55.677 "trtype": "TCP" 00:27:55.677 } 00:27:55.677 ] 00:27:55.677 } 00:27:55.677 ] 00:27:55.677 }' 00:27:55.677 05:43:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:27:55.677 05:43:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:27:55.677 05:43:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=3 00:27:55.677 05:43:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 3 -lt 2 ]] 00:27:55.677 05:43:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 419811 00:28:03.797 Initializing NVMe Controllers 00:28:03.797 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:03.797 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:28:03.797 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:28:03.797 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:28:03.797 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:28:03.797 Initialization complete. Launching workers. 
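The pass criterion is encoded in the stats check above: with placement IDs working, all four I/O qpairs from the perf run should collapse onto the poll group that owns the ADQ queues, so the other groups stay idle. The jq filter emits one line per poll group with zero active I/O qpairs (three of four here, with every qpair on nvmf_tgt_poll_group_000), and the test fails if fewer than two groups are idle:

    # One output line per idle poll group; count them.
    count=$($RPC nvmf_get_stats \
        | jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' \
        | wc -l)
    if [ "$count" -lt 2 ]; then
        echo "ADQ steering failed: I/O qpairs spread across poll groups" >&2
        exit 1
    fi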
00:28:03.797 ======================================================== 00:28:03.797 Latency(us) 00:28:03.797 Device Information : IOPS MiB/s Average min max 00:28:03.797 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 6353.70 24.82 10106.28 1188.31 57523.87 00:28:03.797 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 5796.30 22.64 11040.22 1454.31 56795.35 00:28:03.797 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 5928.10 23.16 10814.35 1463.61 58211.29 00:28:03.797 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 5071.60 19.81 12617.43 1324.36 56731.57 00:28:03.797 ======================================================== 00:28:03.797 Total : 23149.70 90.43 11071.58 1188.31 58211.29 00:28:03.797 00:28:03.797 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:28:03.797 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:03.797 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:28:03.797 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:03.797 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:28:03.797 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:03.797 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:03.797 rmmod nvme_tcp 00:28:03.797 rmmod nvme_fabrics 00:28:03.797 rmmod nvme_keyring 00:28:03.797 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:03.797 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:28:03.797 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:28:03.797 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 419576 ']' 00:28:03.797 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 419576 00:28:03.797 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 419576 ']' 00:28:03.797 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 419576 00:28:03.797 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:28:03.797 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:03.797 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 419576 00:28:03.797 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:03.797 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:03.797 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 419576' 00:28:03.797 killing process with pid 419576 00:28:03.797 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 419576 00:28:03.797 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 419576 00:28:03.797 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:03.797 05:44:03 
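nvmftestfini then unwinds the run: flush, pull the kernel NVMe modules back out (the rmmod lines above show nvme_tcp taking nvme_fabrics and nvme_keyring with it), and kill the target by pid, but only after killprocess has confirmed the pid still names an SPDK reactor, so a recycled pid is never signalled. The core of it, paraphrased:

    sync
    modprobe -v -r nvme-tcp        # dependencies unload with it, per the rmmod lines
    modprobe -v -r nvme-fabrics
    if [ "$(ps --no-headers -o comm= "$nvmfpid")" = reactor_0 ]; then
        echo "killing process with pid $nvmfpid"
        kill "$nvmfpid" && wait "$nvmfpid"
    fi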
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:03.797 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:03.797 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:28:03.797 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:28:03.797 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:03.797 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:28:03.797 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:03.797 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:03.797 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:03.797 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:03.797 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:07.088 05:44:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:07.088 05:44:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:28:07.088 00:28:07.088 real 0m52.183s 00:28:07.088 user 2m44.831s 00:28:07.088 sys 0m10.914s 00:28:07.088 05:44:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:07.088 05:44:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:07.088 ************************************ 00:28:07.088 END TEST nvmf_perf_adq 00:28:07.088 ************************************ 00:28:07.088 05:44:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:28:07.088 05:44:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:07.088 05:44:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:07.088 05:44:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:28:07.088 ************************************ 00:28:07.088 START TEST nvmf_shutdown 00:28:07.088 ************************************ 00:28:07.088 05:44:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:28:07.088 * Looking for test storage... 
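Note how the firewall rule is retired: every rule the harness installs goes in through ipts, which appends an '-m comment --comment SPDK_NVMF:...' tag, and iptr removes them all at once by round-tripping the ruleset through grep, with no need to remember rule numbers or positions. The pair, as common.sh shows them in the trace:

    ipts() {   # add a rule, tagged so teardown can find it
        iptables "$@" -m comment --comment "SPDK_NVMF:$*"
    }
    iptr() {   # drop every tagged rule in one pass
        iptables-save | grep -v SPDK_NVMF | iptables-restore
    }
    ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # setup
    iptr                                                       # teardown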
00:28:07.088 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:07.088 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:07.088 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:28:07.088 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:07.347 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:07.347 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:07.347 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:07.347 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:07.347 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:28:07.347 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:28:07.347 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:28:07.347 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:28:07.347 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:28:07.347 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:28:07.347 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:28:07.347 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:07.347 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:28:07.347 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:28:07.347 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:07.347 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:07.347 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:28:07.347 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:28:07.347 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:07.347 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:28:07.347 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:28:07.347 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:28:07.348 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:28:07.348 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:07.348 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:28:07.348 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:28:07.348 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:07.348 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:07.348 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:28:07.348 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:07.348 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:07.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:07.348 --rc genhtml_branch_coverage=1 00:28:07.348 --rc genhtml_function_coverage=1 00:28:07.348 --rc genhtml_legend=1 00:28:07.348 --rc geninfo_all_blocks=1 00:28:07.348 --rc geninfo_unexecuted_blocks=1 00:28:07.348 00:28:07.348 ' 00:28:07.348 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:07.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:07.348 --rc genhtml_branch_coverage=1 00:28:07.348 --rc genhtml_function_coverage=1 00:28:07.348 --rc genhtml_legend=1 00:28:07.348 --rc geninfo_all_blocks=1 00:28:07.348 --rc geninfo_unexecuted_blocks=1 00:28:07.348 00:28:07.348 ' 00:28:07.348 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:07.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:07.348 --rc genhtml_branch_coverage=1 00:28:07.348 --rc genhtml_function_coverage=1 00:28:07.348 --rc genhtml_legend=1 00:28:07.348 --rc geninfo_all_blocks=1 00:28:07.348 --rc geninfo_unexecuted_blocks=1 00:28:07.348 00:28:07.348 ' 00:28:07.348 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:07.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:07.348 --rc genhtml_branch_coverage=1 00:28:07.348 --rc genhtml_function_coverage=1 00:28:07.348 --rc genhtml_legend=1 00:28:07.348 --rc geninfo_all_blocks=1 00:28:07.348 --rc geninfo_unexecuted_blocks=1 00:28:07.348 00:28:07.348 ' 00:28:07.348 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:07.348 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 
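The lcov probe above leans on cmp_versions from scripts/common.sh: both version strings are split on '.', '-' and ':' into arrays, missing fields default to zero, and the comparison walks field by field until one side wins. A minimal sketch of the same idea (numeric fields assumed, as in the trace):

    version_lt() {   # version_lt 1.15 2  -> true, since 1.15 is older than 2
        local -a v1 v2
        local IFS=.-: i
        read -ra v1 <<< "$1"
        read -ra v2 <<< "$2"
        for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
            ((${v1[i]:-0} < ${v2[i]:-0})) && return 0
            ((${v1[i]:-0} > ${v2[i]:-0})) && return 1
        done
        return 1   # equal versions are not "less than"
    }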
00:28:07.348 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:07.348 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:07.348 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:07.348 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:07.348 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:07.348 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:07.348 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:07.348 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:07.348 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:07.348 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:07.348 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:28:07.348 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:28:07.348 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:07.348 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:07.348 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:07.348 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:07.348 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:07.348 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:28:07.348 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:07.348 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:07.348 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:07.348 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:07.348 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:07.348 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:07.348 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:28:07.348 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:07.348 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:28:07.348 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:07.348 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:07.348 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:07.348 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:07.348 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:07.348 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:07.348 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:07.348 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:07.348 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:07.348 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:07.348 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:28:07.348 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:28:07.348 05:44:07 
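The '[: : integer expression expected' complaint above is benign: build_nvmf_app_args runs a numeric test on a flag that is empty in this environment, '[' reports the bad operand and returns non-zero, and the branch is simply skipped. Guarding the expansion is all it would take to silence it:

    flag=""                         # unset/empty in this environment
    [ "$flag" -eq 1 ]               # error: integer expression expected (status 2)
    [ "${flag:-0}" -eq 1 ]          # defaulting to 0 keeps the test well-formed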
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:28:07.348 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:07.348 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:07.348 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:07.348 ************************************ 00:28:07.348 START TEST nvmf_shutdown_tc1 00:28:07.348 ************************************ 00:28:07.348 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:28:07.348 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:28:07.348 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:28:07.348 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:07.348 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:07.348 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:07.348 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:07.348 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:07.348 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:07.348 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:07.348 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:07.348 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:07.348 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:07.348 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:28:07.348 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:13.918 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:13.918 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:28:13.918 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:13.918 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:13.918 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:13.918 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:13.918 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:13.918 05:44:12 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:28:13.918 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:13.918 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:28:13.918 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:28:13.918 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:28:13.918 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:28:13.918 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:28:13.918 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:28:13.918 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:13.918 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:13.918 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:13.918 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:13.919 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:13.919 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:13.919 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:13.919 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:13.919 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:13.919 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:13.919 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:13.919 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:13.919 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:13.919 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:13.919 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:13.919 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:13.919 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:13.919 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:13.919 05:44:12 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:13.919 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:13.919 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:13.919 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:13.919 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:13.919 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:13.919 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:13.919 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:13.919 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:13.919 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:13.919 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:13.919 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:13.919 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:13.919 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:13.919 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:13.919 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:13.919 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:13.919 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:13.919 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:13.919 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:13.919 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:13.919 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:13.919 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:13.919 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:13.919 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:13.919 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:13.919 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:13.919 Found net devices under 0000:af:00.0: cvl_0_0 00:28:13.919 05:44:12 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:13.919 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:13.919 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:13.919 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:13.919 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:13.919 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:13.919 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:13.919 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:13.919 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:13.919 Found net devices under 0000:af:00.1: cvl_0_1 00:28:13.919 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:13.919 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:13.919 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:28:13.919 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:13.919 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:13.919 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:13.919 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:13.919 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:13.919 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:13.919 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:13.919 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:13.919 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:13.919 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:13.919 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:13.919 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:13.919 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:13.919 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns 
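Device discovery, replayed here for the shutdown suite, works entirely off sysfs: each whitelisted PCI function is mapped to its kernel interfaces through /sys/bus/pci/devices/<bdf>/net/, and only interfaces that are up survive into net_devs. A sketch of the core loop with the E810 addresses from the trace; how the up state is probed is an assumption here (the trace only shows the '[[ up == up ]]' comparison):

    pci_devs=(0000:af:00.0 0000:af:00.1)
    net_devs=()
    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # one entry per interface
        pci_net_devs=("${pci_net_devs[@]##*/}")            # strip paths to names
        for dev in "${pci_net_devs[@]}"; do
            [ "$(cat "/sys/class/net/$dev/operstate")" = up ] || continue
            echo "Found net devices under $pci: $dev"
            net_devs+=("$dev")
        done
    done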
exec "$NVMF_TARGET_NAMESPACE") 00:28:13.919 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:13.919 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:13.919 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:13.919 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:13.919 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:13.919 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:13.919 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:13.919 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:13.919 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:13.919 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:13.919 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:13.919 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:13.919 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:13.919 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.307 ms 00:28:13.919 00:28:13.919 --- 10.0.0.2 ping statistics --- 00:28:13.919 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:13.919 rtt min/avg/max/mdev = 0.307/0.307/0.307/0.000 ms 00:28:13.919 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:13.919 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:13.919 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.084 ms 00:28:13.919 00:28:13.919 --- 10.0.0.1 ping statistics --- 00:28:13.919 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:13.919 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:28:13.919 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:13.919 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:28:13.919 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:13.919 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:13.919 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:13.919 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:13.919 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:13.919 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:13.919 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:13.919 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:28:13.919 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:13.919 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:13.919 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:13.919 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=425142 00:28:13.919 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 425142 00:28:13.919 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:13.919 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 425142 ']' 00:28:13.919 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:13.919 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:13.919 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:13.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
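nvmfappstart launches nvmf_tgt inside the target namespace (core mask 0x1E here, so reactors on cores 1-4) and waitforlisten blocks until the application's RPC socket answers; only then does configuration begin. A condensed sketch, with the retry loop paraphrased (the real helper polls up to ~100 times):

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
    nvmfpid=$!
    echo "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..."
    for _ in $(seq 1 100); do
        ./scripts/rpc.py -t 1 -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
        sleep 0.1
    done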
00:28:13.919 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:13.919 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:13.919 [2024-12-13 05:44:13.155470] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:28:13.919 [2024-12-13 05:44:13.155516] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:13.919 [2024-12-13 05:44:13.214961] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:13.919 [2024-12-13 05:44:13.239549] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:13.919 [2024-12-13 05:44:13.239583] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:13.919 [2024-12-13 05:44:13.239590] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:13.919 [2024-12-13 05:44:13.239595] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:13.919 [2024-12-13 05:44:13.239600] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:13.919 [2024-12-13 05:44:13.240903] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:28:13.919 [2024-12-13 05:44:13.241010] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:28:13.920 [2024-12-13 05:44:13.241117] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:28:13.920 [2024-12-13 05:44:13.241118] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:28:13.920 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:13.920 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:28:13.920 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:13.920 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:13.920 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:13.920 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:13.920 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:13.920 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.920 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:13.920 [2024-12-13 05:44:13.373351] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:13.920 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.920 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:28:13.920 05:44:13 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:28:13.920 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:13.920 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:13.920 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:13.920 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:13.920 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:13.920 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:13.920 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:13.920 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:13.920 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:13.920 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:13.920 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:13.920 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:13.920 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:13.920 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:13.920 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:13.920 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:13.920 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:13.920 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:13.920 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:13.920 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:13.920 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:13.920 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:13.920 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:13.920 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:28:13.920 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.920 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:13.920 Malloc1 
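With the target up, the trace creates the TCP transport (rpc_cmd nvmf_create_transport -t tcp -o -u 8192, where -u 8192 sets the I/O unit size) and then loops over num_subsystems, cat-ing one block of RPCs per subsystem into rpcs.txt; the single rpc_cmd at shutdown.sh@36 replays the whole file, and the Malloc1 through Malloc10 names echoed around this point are the bdevs it creates. The block bodies themselves are not echoed in this trace; a hypothetical block for subsystem 1, following the Malloc$i/cnode$i naming visible here (the malloc size and block size are assumptions), might be:

    # One rpcs.txt block per subsystem (illustrative only):
    bdev_malloc_create -b Malloc1 128 512
    nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1
    nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420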
00:28:13.920 [2024-12-13 05:44:13.483066] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:13.920 Malloc2 00:28:13.920 Malloc3 00:28:13.920 Malloc4 00:28:13.920 Malloc5 00:28:13.920 Malloc6 00:28:13.920 Malloc7 00:28:13.920 Malloc8 00:28:13.920 Malloc9 00:28:13.920 Malloc10 00:28:13.920 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.920 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:28:13.920 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:13.920 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:13.920 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=425206 00:28:13.920 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 425206 /var/tmp/bdevperf.sock 00:28:13.920 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 425206 ']' 00:28:13.920 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:13.920 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:28:13.920 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:13.920 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:13.920 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:13.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
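The bdev_svc app started at target/shutdown.sh@78 is a bare-bones SPDK application used only to prove that all ten remote controllers can be attached before the real benchmark runs; it is killed again a few seconds later. Its JSON config arrives over a process substitution, which is why the command line above shows --json /dev/fd/63. The "Killed" message further down in this log spells out the idiom:

    # The generated config is piped in, never written to disk:
    ./test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock \
        --json <(gen_nvmf_target_json "${num_subsystems[@]}")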
00:28:13.920 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:28:13.920 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:13.920 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:28:13.920 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:13.920 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:13.920 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:13.920 { 00:28:13.920 "params": { 00:28:13.920 "name": "Nvme$subsystem", 00:28:13.920 "trtype": "$TEST_TRANSPORT", 00:28:13.920 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:13.920 "adrfam": "ipv4", 00:28:13.920 "trsvcid": "$NVMF_PORT", 00:28:13.920 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:13.920 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:13.920 "hdgst": ${hdgst:-false}, 00:28:13.920 "ddgst": ${ddgst:-false} 00:28:13.920 }, 00:28:13.920 "method": "bdev_nvme_attach_controller" 00:28:13.920 } 00:28:13.920 EOF 00:28:13.920 )") 00:28:13.920 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:13.920 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:13.920 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:13.920 { 00:28:13.920 "params": { 00:28:13.920 "name": "Nvme$subsystem", 00:28:13.920 "trtype": "$TEST_TRANSPORT", 00:28:13.920 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:13.920 "adrfam": "ipv4", 00:28:13.920 "trsvcid": "$NVMF_PORT", 00:28:13.920 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:13.920 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:13.920 "hdgst": ${hdgst:-false}, 00:28:13.920 "ddgst": ${ddgst:-false} 00:28:13.920 }, 00:28:13.920 "method": "bdev_nvme_attach_controller" 00:28:13.920 } 00:28:13.920 EOF 00:28:13.920 )") 00:28:13.920 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:13.920 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:13.920 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:13.920 { 00:28:13.920 "params": { 00:28:13.920 "name": "Nvme$subsystem", 00:28:13.920 "trtype": "$TEST_TRANSPORT", 00:28:13.920 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:13.920 "adrfam": "ipv4", 00:28:13.920 "trsvcid": "$NVMF_PORT", 00:28:13.920 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:13.920 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:13.920 "hdgst": ${hdgst:-false}, 00:28:13.920 "ddgst": ${ddgst:-false} 00:28:13.920 }, 00:28:13.920 "method": "bdev_nvme_attach_controller" 00:28:13.920 } 00:28:13.920 EOF 00:28:13.920 )") 00:28:13.920 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:14.180 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:14.180 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- 
# config+=("$(cat <<-EOF 00:28:14.180 { 00:28:14.180 "params": { 00:28:14.180 "name": "Nvme$subsystem", 00:28:14.180 "trtype": "$TEST_TRANSPORT", 00:28:14.180 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:14.180 "adrfam": "ipv4", 00:28:14.180 "trsvcid": "$NVMF_PORT", 00:28:14.180 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:14.180 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:14.180 "hdgst": ${hdgst:-false}, 00:28:14.180 "ddgst": ${ddgst:-false} 00:28:14.180 }, 00:28:14.180 "method": "bdev_nvme_attach_controller" 00:28:14.180 } 00:28:14.180 EOF 00:28:14.180 )") 00:28:14.180 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:14.180 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:14.180 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:14.180 { 00:28:14.180 "params": { 00:28:14.180 "name": "Nvme$subsystem", 00:28:14.180 "trtype": "$TEST_TRANSPORT", 00:28:14.180 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:14.180 "adrfam": "ipv4", 00:28:14.180 "trsvcid": "$NVMF_PORT", 00:28:14.180 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:14.180 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:14.180 "hdgst": ${hdgst:-false}, 00:28:14.180 "ddgst": ${ddgst:-false} 00:28:14.180 }, 00:28:14.180 "method": "bdev_nvme_attach_controller" 00:28:14.180 } 00:28:14.180 EOF 00:28:14.180 )") 00:28:14.180 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:14.180 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:14.180 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:14.180 { 00:28:14.180 "params": { 00:28:14.180 "name": "Nvme$subsystem", 00:28:14.180 "trtype": "$TEST_TRANSPORT", 00:28:14.180 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:14.180 "adrfam": "ipv4", 00:28:14.180 "trsvcid": "$NVMF_PORT", 00:28:14.180 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:14.180 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:14.180 "hdgst": ${hdgst:-false}, 00:28:14.180 "ddgst": ${ddgst:-false} 00:28:14.180 }, 00:28:14.180 "method": "bdev_nvme_attach_controller" 00:28:14.180 } 00:28:14.180 EOF 00:28:14.180 )") 00:28:14.180 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:14.180 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:14.180 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:14.180 { 00:28:14.180 "params": { 00:28:14.180 "name": "Nvme$subsystem", 00:28:14.180 "trtype": "$TEST_TRANSPORT", 00:28:14.180 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:14.180 "adrfam": "ipv4", 00:28:14.180 "trsvcid": "$NVMF_PORT", 00:28:14.180 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:14.180 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:14.180 "hdgst": ${hdgst:-false}, 00:28:14.180 "ddgst": ${ddgst:-false} 00:28:14.180 }, 00:28:14.180 "method": "bdev_nvme_attach_controller" 00:28:14.180 } 00:28:14.180 EOF 00:28:14.180 )") 00:28:14.180 [2024-12-13 05:44:13.955520] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:28:14.180 [2024-12-13 05:44:13.955569] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:28:14.180 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:14.180 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:14.180 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:14.180 { 00:28:14.180 "params": { 00:28:14.180 "name": "Nvme$subsystem", 00:28:14.180 "trtype": "$TEST_TRANSPORT", 00:28:14.180 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:14.180 "adrfam": "ipv4", 00:28:14.180 "trsvcid": "$NVMF_PORT", 00:28:14.180 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:14.180 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:14.180 "hdgst": ${hdgst:-false}, 00:28:14.180 "ddgst": ${ddgst:-false} 00:28:14.180 }, 00:28:14.180 "method": "bdev_nvme_attach_controller" 00:28:14.180 } 00:28:14.180 EOF 00:28:14.180 )") 00:28:14.180 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:14.180 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:14.180 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:14.180 { 00:28:14.180 "params": { 00:28:14.180 "name": "Nvme$subsystem", 00:28:14.180 "trtype": "$TEST_TRANSPORT", 00:28:14.180 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:14.180 "adrfam": "ipv4", 00:28:14.180 "trsvcid": "$NVMF_PORT", 00:28:14.180 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:14.180 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:14.180 "hdgst": ${hdgst:-false}, 00:28:14.180 "ddgst": ${ddgst:-false} 00:28:14.180 }, 00:28:14.180 "method": "bdev_nvme_attach_controller" 00:28:14.180 } 00:28:14.180 EOF 00:28:14.180 )") 00:28:14.180 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:14.180 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:14.180 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:14.180 { 00:28:14.180 "params": { 00:28:14.180 "name": "Nvme$subsystem", 00:28:14.180 "trtype": "$TEST_TRANSPORT", 00:28:14.180 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:14.180 "adrfam": "ipv4", 00:28:14.180 "trsvcid": "$NVMF_PORT", 00:28:14.180 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:14.180 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:14.180 "hdgst": ${hdgst:-false}, 00:28:14.180 "ddgst": ${ddgst:-false} 00:28:14.180 }, 00:28:14.180 "method": "bdev_nvme_attach_controller" 00:28:14.180 } 00:28:14.180 EOF 00:28:14.180 )") 00:28:14.180 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:14.180 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
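gen_nvmf_target_json, traced above, emits one bdev_nvme_attach_controller object per subsystem from an unquoted heredoc, so $subsystem, $TEST_TRANSPORT, $NVMF_FIRST_TARGET_IP and $NVMF_PORT are expanded at the moment the inner cat runs, and each result is appended to the config array. The jq . seen just above, together with the IFS=, and printf that follow, assembles and validates the final document. The comma join itself is plain bash: "${array[*]}" separates elements with the first character of IFS, as in this self-contained illustration:

    # Join idiom only; not the full generator.
    config=('{"a":1}' '{"b":2}')
    IFS=','
    printf '%s\n' "${config[*]}"   # prints {"a":1},{"b":2}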
00:28:14.180 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:28:14.180 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:14.180 "params": { 00:28:14.180 "name": "Nvme1", 00:28:14.180 "trtype": "tcp", 00:28:14.180 "traddr": "10.0.0.2", 00:28:14.180 "adrfam": "ipv4", 00:28:14.180 "trsvcid": "4420", 00:28:14.180 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:14.180 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:14.180 "hdgst": false, 00:28:14.180 "ddgst": false 00:28:14.180 }, 00:28:14.180 "method": "bdev_nvme_attach_controller" 00:28:14.180 },{ 00:28:14.180 "params": { 00:28:14.180 "name": "Nvme2", 00:28:14.180 "trtype": "tcp", 00:28:14.180 "traddr": "10.0.0.2", 00:28:14.180 "adrfam": "ipv4", 00:28:14.180 "trsvcid": "4420", 00:28:14.180 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:14.180 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:14.180 "hdgst": false, 00:28:14.180 "ddgst": false 00:28:14.180 }, 00:28:14.180 "method": "bdev_nvme_attach_controller" 00:28:14.180 },{ 00:28:14.180 "params": { 00:28:14.180 "name": "Nvme3", 00:28:14.180 "trtype": "tcp", 00:28:14.180 "traddr": "10.0.0.2", 00:28:14.180 "adrfam": "ipv4", 00:28:14.180 "trsvcid": "4420", 00:28:14.180 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:14.180 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:14.180 "hdgst": false, 00:28:14.180 "ddgst": false 00:28:14.180 }, 00:28:14.180 "method": "bdev_nvme_attach_controller" 00:28:14.180 },{ 00:28:14.180 "params": { 00:28:14.180 "name": "Nvme4", 00:28:14.180 "trtype": "tcp", 00:28:14.180 "traddr": "10.0.0.2", 00:28:14.180 "adrfam": "ipv4", 00:28:14.180 "trsvcid": "4420", 00:28:14.180 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:14.180 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:14.180 "hdgst": false, 00:28:14.180 "ddgst": false 00:28:14.180 }, 00:28:14.180 "method": "bdev_nvme_attach_controller" 00:28:14.180 },{ 00:28:14.180 "params": { 00:28:14.180 "name": "Nvme5", 00:28:14.180 "trtype": "tcp", 00:28:14.180 "traddr": "10.0.0.2", 00:28:14.180 "adrfam": "ipv4", 00:28:14.180 "trsvcid": "4420", 00:28:14.180 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:14.180 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:14.180 "hdgst": false, 00:28:14.180 "ddgst": false 00:28:14.180 }, 00:28:14.180 "method": "bdev_nvme_attach_controller" 00:28:14.180 },{ 00:28:14.180 "params": { 00:28:14.180 "name": "Nvme6", 00:28:14.180 "trtype": "tcp", 00:28:14.180 "traddr": "10.0.0.2", 00:28:14.180 "adrfam": "ipv4", 00:28:14.180 "trsvcid": "4420", 00:28:14.180 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:14.180 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:14.180 "hdgst": false, 00:28:14.180 "ddgst": false 00:28:14.180 }, 00:28:14.180 "method": "bdev_nvme_attach_controller" 00:28:14.180 },{ 00:28:14.180 "params": { 00:28:14.181 "name": "Nvme7", 00:28:14.181 "trtype": "tcp", 00:28:14.181 "traddr": "10.0.0.2", 00:28:14.181 "adrfam": "ipv4", 00:28:14.181 "trsvcid": "4420", 00:28:14.181 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:14.181 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:14.181 "hdgst": false, 00:28:14.181 "ddgst": false 00:28:14.181 }, 00:28:14.181 "method": "bdev_nvme_attach_controller" 00:28:14.181 },{ 00:28:14.181 "params": { 00:28:14.181 "name": "Nvme8", 00:28:14.181 "trtype": "tcp", 00:28:14.181 "traddr": "10.0.0.2", 00:28:14.181 "adrfam": "ipv4", 00:28:14.181 "trsvcid": "4420", 00:28:14.181 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:14.181 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:28:14.181 "hdgst": false, 00:28:14.181 "ddgst": false 00:28:14.181 }, 00:28:14.181 "method": "bdev_nvme_attach_controller" 00:28:14.181 },{ 00:28:14.181 "params": { 00:28:14.181 "name": "Nvme9", 00:28:14.181 "trtype": "tcp", 00:28:14.181 "traddr": "10.0.0.2", 00:28:14.181 "adrfam": "ipv4", 00:28:14.181 "trsvcid": "4420", 00:28:14.181 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:14.181 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:14.181 "hdgst": false, 00:28:14.181 "ddgst": false 00:28:14.181 }, 00:28:14.181 "method": "bdev_nvme_attach_controller" 00:28:14.181 },{ 00:28:14.181 "params": { 00:28:14.181 "name": "Nvme10", 00:28:14.181 "trtype": "tcp", 00:28:14.181 "traddr": "10.0.0.2", 00:28:14.181 "adrfam": "ipv4", 00:28:14.181 "trsvcid": "4420", 00:28:14.181 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:14.181 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:14.181 "hdgst": false, 00:28:14.181 "ddgst": false 00:28:14.181 }, 00:28:14.181 "method": "bdev_nvme_attach_controller" 00:28:14.181 }' 00:28:14.181 [2024-12-13 05:44:14.030546] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:14.181 [2024-12-13 05:44:14.052973] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:28:16.084 05:44:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:16.084 05:44:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:28:16.084 05:44:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:16.084 05:44:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.084 05:44:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:16.084 05:44:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.084 05:44:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 425206 00:28:16.084 05:44:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:28:16.084 05:44:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:28:17.020 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 425206 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:28:17.020 05:44:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 425142 00:28:17.020 05:44:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:28:17.020 05:44:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:17.020 05:44:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:28:17.021 05:44:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:28:17.021 05:44:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in 
"${@:-1}" 00:28:17.021 05:44:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:17.021 { 00:28:17.021 "params": { 00:28:17.021 "name": "Nvme$subsystem", 00:28:17.021 "trtype": "$TEST_TRANSPORT", 00:28:17.021 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:17.021 "adrfam": "ipv4", 00:28:17.021 "trsvcid": "$NVMF_PORT", 00:28:17.021 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:17.021 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:17.021 "hdgst": ${hdgst:-false}, 00:28:17.021 "ddgst": ${ddgst:-false} 00:28:17.021 }, 00:28:17.021 "method": "bdev_nvme_attach_controller" 00:28:17.021 } 00:28:17.021 EOF 00:28:17.021 )") 00:28:17.021 05:44:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:17.021 05:44:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:17.021 05:44:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:17.021 { 00:28:17.021 "params": { 00:28:17.021 "name": "Nvme$subsystem", 00:28:17.021 "trtype": "$TEST_TRANSPORT", 00:28:17.021 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:17.021 "adrfam": "ipv4", 00:28:17.021 "trsvcid": "$NVMF_PORT", 00:28:17.021 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:17.021 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:17.021 "hdgst": ${hdgst:-false}, 00:28:17.021 "ddgst": ${ddgst:-false} 00:28:17.021 }, 00:28:17.021 "method": "bdev_nvme_attach_controller" 00:28:17.021 } 00:28:17.021 EOF 00:28:17.021 )") 00:28:17.021 05:44:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:17.021 05:44:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:17.021 05:44:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:17.021 { 00:28:17.021 "params": { 00:28:17.021 "name": "Nvme$subsystem", 00:28:17.021 "trtype": "$TEST_TRANSPORT", 00:28:17.021 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:17.021 "adrfam": "ipv4", 00:28:17.021 "trsvcid": "$NVMF_PORT", 00:28:17.021 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:17.021 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:17.021 "hdgst": ${hdgst:-false}, 00:28:17.021 "ddgst": ${ddgst:-false} 00:28:17.021 }, 00:28:17.021 "method": "bdev_nvme_attach_controller" 00:28:17.021 } 00:28:17.021 EOF 00:28:17.021 )") 00:28:17.021 05:44:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:17.021 05:44:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:17.021 05:44:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:17.021 { 00:28:17.021 "params": { 00:28:17.021 "name": "Nvme$subsystem", 00:28:17.021 "trtype": "$TEST_TRANSPORT", 00:28:17.021 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:17.021 "adrfam": "ipv4", 00:28:17.021 "trsvcid": "$NVMF_PORT", 00:28:17.021 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:17.021 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:17.021 "hdgst": ${hdgst:-false}, 00:28:17.021 "ddgst": ${ddgst:-false} 00:28:17.021 }, 00:28:17.021 "method": "bdev_nvme_attach_controller" 00:28:17.021 } 00:28:17.021 EOF 00:28:17.021 )") 00:28:17.021 05:44:16 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:17.021 05:44:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:17.021 05:44:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:17.021 { 00:28:17.021 "params": { 00:28:17.021 "name": "Nvme$subsystem", 00:28:17.021 "trtype": "$TEST_TRANSPORT", 00:28:17.021 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:17.021 "adrfam": "ipv4", 00:28:17.021 "trsvcid": "$NVMF_PORT", 00:28:17.021 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:17.021 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:17.021 "hdgst": ${hdgst:-false}, 00:28:17.021 "ddgst": ${ddgst:-false} 00:28:17.021 }, 00:28:17.021 "method": "bdev_nvme_attach_controller" 00:28:17.021 } 00:28:17.021 EOF 00:28:17.021 )") 00:28:17.021 05:44:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:17.021 05:44:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:17.021 05:44:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:17.021 { 00:28:17.021 "params": { 00:28:17.021 "name": "Nvme$subsystem", 00:28:17.021 "trtype": "$TEST_TRANSPORT", 00:28:17.021 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:17.021 "adrfam": "ipv4", 00:28:17.021 "trsvcid": "$NVMF_PORT", 00:28:17.021 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:17.021 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:17.021 "hdgst": ${hdgst:-false}, 00:28:17.021 "ddgst": ${ddgst:-false} 00:28:17.021 }, 00:28:17.021 "method": "bdev_nvme_attach_controller" 00:28:17.021 } 00:28:17.021 EOF 00:28:17.021 )") 00:28:17.021 05:44:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:17.021 05:44:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:17.021 05:44:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:17.021 { 00:28:17.021 "params": { 00:28:17.021 "name": "Nvme$subsystem", 00:28:17.021 "trtype": "$TEST_TRANSPORT", 00:28:17.021 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:17.021 "adrfam": "ipv4", 00:28:17.021 "trsvcid": "$NVMF_PORT", 00:28:17.021 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:17.021 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:17.021 "hdgst": ${hdgst:-false}, 00:28:17.021 "ddgst": ${ddgst:-false} 00:28:17.021 }, 00:28:17.021 "method": "bdev_nvme_attach_controller" 00:28:17.021 } 00:28:17.021 EOF 00:28:17.021 )") 00:28:17.021 [2024-12-13 05:44:16.890240] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
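This second run of the generator feeds bdevperf itself, launched at target/shutdown.sh@92 with the flags echoed earlier in the trace: queue depth 64 (-q 64), 64 KiB I/Os (-o 65536), the verify workload, and a 1 second run. In equivalent shorthand:

    ./build/examples/bdevperf --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
        -q 64 -o 65536 -w verify -t 1

Since every I/O is 64 KiB, bandwidth is IOPS divided by 16: the 2244.00 IOPS reported below works out to 2244.00 x 64 KiB = 140.25 MiB/s, exactly the figure bdevperf prints.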
00:28:17.021 [2024-12-13 05:44:16.890290] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid425680 ] 00:28:17.021 05:44:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:17.021 05:44:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:17.021 05:44:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:17.021 { 00:28:17.021 "params": { 00:28:17.021 "name": "Nvme$subsystem", 00:28:17.021 "trtype": "$TEST_TRANSPORT", 00:28:17.021 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:17.021 "adrfam": "ipv4", 00:28:17.021 "trsvcid": "$NVMF_PORT", 00:28:17.021 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:17.021 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:17.021 "hdgst": ${hdgst:-false}, 00:28:17.021 "ddgst": ${ddgst:-false} 00:28:17.021 }, 00:28:17.021 "method": "bdev_nvme_attach_controller" 00:28:17.021 } 00:28:17.021 EOF 00:28:17.021 )") 00:28:17.021 05:44:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:17.021 05:44:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:17.021 05:44:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:17.021 { 00:28:17.021 "params": { 00:28:17.021 "name": "Nvme$subsystem", 00:28:17.021 "trtype": "$TEST_TRANSPORT", 00:28:17.021 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:17.021 "adrfam": "ipv4", 00:28:17.021 "trsvcid": "$NVMF_PORT", 00:28:17.021 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:17.021 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:17.021 "hdgst": ${hdgst:-false}, 00:28:17.021 "ddgst": ${ddgst:-false} 00:28:17.021 }, 00:28:17.021 "method": "bdev_nvme_attach_controller" 00:28:17.021 } 00:28:17.021 EOF 00:28:17.021 )") 00:28:17.021 05:44:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:17.021 05:44:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:17.021 05:44:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:17.021 { 00:28:17.021 "params": { 00:28:17.021 "name": "Nvme$subsystem", 00:28:17.021 "trtype": "$TEST_TRANSPORT", 00:28:17.021 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:17.021 "adrfam": "ipv4", 00:28:17.021 "trsvcid": "$NVMF_PORT", 00:28:17.021 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:17.021 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:17.021 "hdgst": ${hdgst:-false}, 00:28:17.021 "ddgst": ${ddgst:-false} 00:28:17.021 }, 00:28:17.021 "method": "bdev_nvme_attach_controller" 00:28:17.021 } 00:28:17.021 EOF 00:28:17.021 )") 00:28:17.021 05:44:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:17.021 05:44:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
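Each object in the generated config (the fully expanded form is printed next) is consumed at startup as one bdev_nvme_attach_controller call, which is what creates the Nvme1n1 through Nvme10n1 bdevs the benchmark reports on. Against a live app the first entry would be roughly equivalent to this manual RPC, with flag spellings as rpc.py defines them and values taken from the printed config:

    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b Nvme1 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1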
00:28:17.021 05:44:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:28:17.021 05:44:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:17.022 "params": { 00:28:17.022 "name": "Nvme1", 00:28:17.022 "trtype": "tcp", 00:28:17.022 "traddr": "10.0.0.2", 00:28:17.022 "adrfam": "ipv4", 00:28:17.022 "trsvcid": "4420", 00:28:17.022 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:17.022 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:17.022 "hdgst": false, 00:28:17.022 "ddgst": false 00:28:17.022 }, 00:28:17.022 "method": "bdev_nvme_attach_controller" 00:28:17.022 },{ 00:28:17.022 "params": { 00:28:17.022 "name": "Nvme2", 00:28:17.022 "trtype": "tcp", 00:28:17.022 "traddr": "10.0.0.2", 00:28:17.022 "adrfam": "ipv4", 00:28:17.022 "trsvcid": "4420", 00:28:17.022 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:17.022 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:17.022 "hdgst": false, 00:28:17.022 "ddgst": false 00:28:17.022 }, 00:28:17.022 "method": "bdev_nvme_attach_controller" 00:28:17.022 },{ 00:28:17.022 "params": { 00:28:17.022 "name": "Nvme3", 00:28:17.022 "trtype": "tcp", 00:28:17.022 "traddr": "10.0.0.2", 00:28:17.022 "adrfam": "ipv4", 00:28:17.022 "trsvcid": "4420", 00:28:17.022 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:17.022 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:17.022 "hdgst": false, 00:28:17.022 "ddgst": false 00:28:17.022 }, 00:28:17.022 "method": "bdev_nvme_attach_controller" 00:28:17.022 },{ 00:28:17.022 "params": { 00:28:17.022 "name": "Nvme4", 00:28:17.022 "trtype": "tcp", 00:28:17.022 "traddr": "10.0.0.2", 00:28:17.022 "adrfam": "ipv4", 00:28:17.022 "trsvcid": "4420", 00:28:17.022 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:17.022 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:17.022 "hdgst": false, 00:28:17.022 "ddgst": false 00:28:17.022 }, 00:28:17.022 "method": "bdev_nvme_attach_controller" 00:28:17.022 },{ 00:28:17.022 "params": { 00:28:17.022 "name": "Nvme5", 00:28:17.022 "trtype": "tcp", 00:28:17.022 "traddr": "10.0.0.2", 00:28:17.022 "adrfam": "ipv4", 00:28:17.022 "trsvcid": "4420", 00:28:17.022 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:17.022 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:17.022 "hdgst": false, 00:28:17.022 "ddgst": false 00:28:17.022 }, 00:28:17.022 "method": "bdev_nvme_attach_controller" 00:28:17.022 },{ 00:28:17.022 "params": { 00:28:17.022 "name": "Nvme6", 00:28:17.022 "trtype": "tcp", 00:28:17.022 "traddr": "10.0.0.2", 00:28:17.022 "adrfam": "ipv4", 00:28:17.022 "trsvcid": "4420", 00:28:17.022 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:17.022 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:17.022 "hdgst": false, 00:28:17.022 "ddgst": false 00:28:17.022 }, 00:28:17.022 "method": "bdev_nvme_attach_controller" 00:28:17.022 },{ 00:28:17.022 "params": { 00:28:17.022 "name": "Nvme7", 00:28:17.022 "trtype": "tcp", 00:28:17.022 "traddr": "10.0.0.2", 00:28:17.022 "adrfam": "ipv4", 00:28:17.022 "trsvcid": "4420", 00:28:17.022 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:17.022 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:17.022 "hdgst": false, 00:28:17.022 "ddgst": false 00:28:17.022 }, 00:28:17.022 "method": "bdev_nvme_attach_controller" 00:28:17.022 },{ 00:28:17.022 "params": { 00:28:17.022 "name": "Nvme8", 00:28:17.022 "trtype": "tcp", 00:28:17.022 "traddr": "10.0.0.2", 00:28:17.022 "adrfam": "ipv4", 00:28:17.022 "trsvcid": "4420", 00:28:17.022 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:17.022 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:28:17.022 "hdgst": false, 00:28:17.022 "ddgst": false 00:28:17.022 }, 00:28:17.022 "method": "bdev_nvme_attach_controller" 00:28:17.022 },{ 00:28:17.022 "params": { 00:28:17.022 "name": "Nvme9", 00:28:17.022 "trtype": "tcp", 00:28:17.022 "traddr": "10.0.0.2", 00:28:17.022 "adrfam": "ipv4", 00:28:17.022 "trsvcid": "4420", 00:28:17.022 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:17.022 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:17.022 "hdgst": false, 00:28:17.022 "ddgst": false 00:28:17.022 }, 00:28:17.022 "method": "bdev_nvme_attach_controller" 00:28:17.022 },{ 00:28:17.022 "params": { 00:28:17.022 "name": "Nvme10", 00:28:17.022 "trtype": "tcp", 00:28:17.022 "traddr": "10.0.0.2", 00:28:17.022 "adrfam": "ipv4", 00:28:17.022 "trsvcid": "4420", 00:28:17.022 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:17.022 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:17.022 "hdgst": false, 00:28:17.022 "ddgst": false 00:28:17.022 }, 00:28:17.022 "method": "bdev_nvme_attach_controller" 00:28:17.022 }' 00:28:17.022 [2024-12-13 05:44:16.970020] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:17.022 [2024-12-13 05:44:16.992635] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:28:18.927 Running I/O for 1 seconds... 00:28:19.864 2244.00 IOPS, 140.25 MiB/s 00:28:19.864 Latency(us) 00:28:19.864 [2024-12-13T04:44:19.879Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:19.864 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:19.864 Verification LBA range: start 0x0 length 0x400 00:28:19.864 Nvme1n1 : 1.15 278.55 17.41 0.00 0.00 227865.84 15666.22 218702.99 00:28:19.864 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:19.864 Verification LBA range: start 0x0 length 0x400 00:28:19.864 Nvme2n1 : 1.13 281.97 17.62 0.00 0.00 221865.89 26214.40 206719.27 00:28:19.864 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:19.864 Verification LBA range: start 0x0 length 0x400 00:28:19.864 Nvme3n1 : 1.14 284.33 17.77 0.00 0.00 216918.67 10485.76 238675.87 00:28:19.864 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:19.864 Verification LBA range: start 0x0 length 0x400 00:28:19.864 Nvme4n1 : 1.13 284.06 17.75 0.00 0.00 213453.68 16103.13 205720.62 00:28:19.864 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:19.864 Verification LBA range: start 0x0 length 0x400 00:28:19.864 Nvme5n1 : 1.15 281.39 17.59 0.00 0.00 212948.24 2527.82 203723.34 00:28:19.864 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:19.864 Verification LBA range: start 0x0 length 0x400 00:28:19.864 Nvme6n1 : 1.16 275.37 17.21 0.00 0.00 214896.05 18100.42 227690.79 00:28:19.864 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:19.864 Verification LBA range: start 0x0 length 0x400 00:28:19.864 Nvme7n1 : 1.16 275.90 17.24 0.00 0.00 211171.91 15291.73 226692.14 00:28:19.864 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:19.864 Verification LBA range: start 0x0 length 0x400 00:28:19.864 Nvme8n1 : 1.16 276.09 17.26 0.00 0.00 208206.31 13668.94 222697.57 00:28:19.864 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:19.864 Verification LBA range: start 0x0 length 0x400 00:28:19.864 Nvme9n1 : 1.17 274.16 17.13 0.00 0.00 206692.94 26838.55 219701.64 00:28:19.864 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO 
size: 65536) 00:28:19.864 Verification LBA range: start 0x0 length 0x400 00:28:19.864 Nvme10n1 : 1.17 274.00 17.12 0.00 0.00 203843.49 16727.28 236678.58 00:28:19.864 [2024-12-13T04:44:19.879Z] =================================================================================================================== 00:28:19.864 [2024-12-13T04:44:19.879Z] Total : 2785.83 174.11 0.00 0.00 213788.90 2527.82 238675.87 00:28:19.864 05:44:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:28:19.864 05:44:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:28:19.864 05:44:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:19.864 05:44:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:19.864 05:44:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:28:20.123 05:44:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:20.123 05:44:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:28:20.123 05:44:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:20.123 05:44:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:28:20.123 05:44:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:20.123 05:44:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:20.123 rmmod nvme_tcp 00:28:20.123 rmmod nvme_fabrics 00:28:20.123 rmmod nvme_keyring 00:28:20.123 05:44:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:20.123 05:44:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:28:20.123 05:44:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:28:20.123 05:44:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 425142 ']' 00:28:20.123 05:44:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 425142 00:28:20.123 05:44:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 425142 ']' 00:28:20.123 05:44:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # kill -0 425142 00:28:20.123 05:44:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:28:20.123 05:44:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:20.123 05:44:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 425142 00:28:20.123 05:44:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:20.123 05:44:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:20.124 05:44:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 425142' 00:28:20.124 killing process with pid 425142 00:28:20.124 05:44:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 425142 00:28:20.124 05:44:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 425142 00:28:20.382 05:44:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:20.382 05:44:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:20.382 05:44:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:20.382 05:44:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:28:20.382 05:44:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:28:20.382 05:44:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:20.382 05:44:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:28:20.382 05:44:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:20.382 05:44:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:20.382 05:44:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:20.382 05:44:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:20.382 05:44:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:22.918 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:22.918 00:28:22.918 real 0m15.231s 00:28:22.918 user 0m34.357s 00:28:22.918 sys 0m5.782s 00:28:22.918 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:22.918 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:22.918 ************************************ 00:28:22.918 END TEST nvmf_shutdown_tc1 00:28:22.918 ************************************ 00:28:22.918 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:28:22.918 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:22.918 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:22.918 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:22.918 ************************************ 00:28:22.918 START TEST nvmf_shutdown_tc2 00:28:22.918 ************************************ 00:28:22.918 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:28:22.918 05:44:22 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:28:22.918 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:28:22.918 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:22.918 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:22.918 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:22.918 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:22.918 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:22.918 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:22.919 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:22.919 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:22.919 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:22.919 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:22.919 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:28:22.919 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:22.919 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:22.919 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:28:22.919 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:22.919 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:22.919 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:22.919 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:22.919 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:22.919 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:28:22.919 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:22.919 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:28:22.919 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:28:22.919 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:28:22.919 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:28:22.919 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 
-- # mlx=() 00:28:22.919 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:28:22.919 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:22.919 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:22.919 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:22.919 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:22.919 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:22.919 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:22.919 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:22.919 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:22.919 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:22.919 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:22.919 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:22.919 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:22.919 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:22.919 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:22.919 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:22.919 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:22.919 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:22.919 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:22.919 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:22.919 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:22.919 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:22.919 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:22.919 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:22.919 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:22.919 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 
-- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:22.919 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:22.919 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:22.919 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:22.919 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:22.919 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:22.919 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:22.919 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:22.919 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:22.919 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:22.919 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:22.919 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:22.919 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:22.919 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:22.919 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:22.919 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:22.919 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:22.919 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:22.919 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:22.919 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:22.919 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:22.919 Found net devices under 0000:af:00.0: cvl_0_0 00:28:22.919 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:22.919 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:22.919 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:22.919 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:22.919 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:22.919 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:22.919 05:44:22 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:22.919 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:22.919 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:22.919 Found net devices under 0000:af:00.1: cvl_0_1 00:28:22.919 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:22.919 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:22.919 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:28:22.919 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:22.919 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:22.919 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:22.919 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:22.919 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:22.919 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:22.919 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:22.919 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:22.919 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:22.919 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:22.919 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:22.919 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:22.919 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:22.919 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:22.919 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:22.919 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:22.920 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:22.920 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:22.920 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:22.920 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:22.920 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:22.920 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:22.920 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:22.920 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:22.920 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:22.920 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:22.920 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:22.920 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.291 ms 00:28:22.920 00:28:22.920 --- 10.0.0.2 ping statistics --- 00:28:22.920 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:22.920 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms 00:28:22.920 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:22.920 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:22.920 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.070 ms 00:28:22.920 00:28:22.920 --- 10.0.0.1 ping statistics --- 00:28:22.920 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:22.920 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:28:22.920 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:22.920 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:28:22.920 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:22.920 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:22.920 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:22.920 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:22.920 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:22.920 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:22.920 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:22.920 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:28:22.920 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:22.920 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:22.920 05:44:22 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:22.920 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=426788 00:28:22.920 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 426788 00:28:22.920 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:22.920 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 426788 ']' 00:28:22.920 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:22.920 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:22.920 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:22.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:22.920 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:22.920 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:22.920 [2024-12-13 05:44:22.864296] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:28:22.920 [2024-12-13 05:44:22.864344] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:23.179 [2024-12-13 05:44:22.941199] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:23.179 [2024-12-13 05:44:22.963422] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:23.179 [2024-12-13 05:44:22.963463] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:23.179 [2024-12-13 05:44:22.963471] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:23.179 [2024-12-13 05:44:22.963477] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:23.179 [2024-12-13 05:44:22.963482] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
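Stripped of the xtrace noise, what the trace above has just done: nvmf_tcp_init flushes both ports, moves the target-side port (cvl_0_0) into a private network namespace, addresses both ends of the link, opens TCP port 4420 through iptables (tagging the rule with an SPDK_NVMF comment so teardown can find it), verifies connectivity with a ping in each direction, and then nvmfappstart launches nvmf_tgt inside the namespace on core mask 0x1E. A minimal standalone sketch of the same sequence, assuming root privileges, that the peered interfaces cvl_0_0/cvl_0_1 already exist, and that $SPDK_ROOT points at a built SPDK tree; the nvmf_tgt flags are taken verbatim from the trace:

#!/usr/bin/env bash
# Sketch of the nvmf_tcp_init + nvmfappstart sequence from the trace above.
# Assumptions: run as root; cvl_0_0/cvl_0_1 exist; $SPDK_ROOT is an SPDK build tree.
set -euo pipefail

NS=cvl_0_0_ns_spdk
TARGET_IF=cvl_0_0        # moves into the namespace, serves 10.0.0.2
INITIATOR_IF=cvl_0_1     # stays in the default namespace, 10.0.0.1

ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"

ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"

ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"

ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up

# Open the NVMe/TCP listener port; the comment is how cleanup locates the rule.
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

# Sanity-check the link in both directions before starting the target.
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1

# Start the target inside the namespace (cores 1-4, all tracepoint groups).
ip netns exec "$NS" "$SPDK_ROOT/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!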
00:28:23.179 [2024-12-13 05:44:22.964832] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:28:23.179 [2024-12-13 05:44:22.964938] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:28:23.179 [2024-12-13 05:44:22.965042] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:28:23.179 [2024-12-13 05:44:22.965043] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:28:23.179 05:44:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:23.179 05:44:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:28:23.179 05:44:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:23.179 05:44:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:23.179 05:44:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:23.179 05:44:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:23.179 05:44:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:23.179 05:44:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:23.179 05:44:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:23.179 [2024-12-13 05:44:23.096192] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:23.179 05:44:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:23.179 05:44:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:28:23.179 05:44:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:28:23.179 05:44:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:23.179 05:44:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:23.179 05:44:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:23.179 05:44:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:23.179 05:44:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:23.179 05:44:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:23.179 05:44:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:23.179 05:44:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:23.179 05:44:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:23.179 05:44:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:28:23.179 05:44:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:23.179 05:44:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:23.179 05:44:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:23.179 05:44:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:23.179 05:44:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:23.180 05:44:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:23.180 05:44:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:23.180 05:44:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:23.180 05:44:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:23.180 05:44:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:23.180 05:44:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:23.180 05:44:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:23.180 05:44:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:23.180 05:44:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:28:23.180 05:44:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:23.180 05:44:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:23.180 Malloc1 00:28:23.439 [2024-12-13 05:44:23.199782] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:23.439 Malloc2 00:28:23.439 Malloc3 00:28:23.439 Malloc4 00:28:23.439 Malloc5 00:28:23.439 Malloc6 00:28:23.439 Malloc7 00:28:23.698 Malloc8 00:28:23.698 Malloc9 00:28:23.698 Malloc10 00:28:23.698 05:44:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:23.698 05:44:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:28:23.698 05:44:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:23.698 05:44:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:23.698 05:44:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=426954 00:28:23.698 05:44:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 426954 /var/tmp/bdevperf.sock 00:28:23.698 05:44:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 426954 ']' 00:28:23.698 05:44:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:23.698 05:44:23 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:28:23.698 05:44:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:23.698 05:44:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:23.698 05:44:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:23.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:23.698 05:44:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:28:23.698 05:44:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:23.698 05:44:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:28:23.698 05:44:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:23.698 05:44:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:23.698 05:44:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:23.698 { 00:28:23.698 "params": { 00:28:23.698 "name": "Nvme$subsystem", 00:28:23.698 "trtype": "$TEST_TRANSPORT", 00:28:23.698 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:23.698 "adrfam": "ipv4", 00:28:23.698 "trsvcid": "$NVMF_PORT", 00:28:23.699 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:23.699 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:23.699 "hdgst": ${hdgst:-false}, 00:28:23.699 "ddgst": ${ddgst:-false} 00:28:23.699 }, 00:28:23.699 "method": "bdev_nvme_attach_controller" 00:28:23.699 } 00:28:23.699 EOF 00:28:23.699 )") 00:28:23.699 05:44:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:23.699 05:44:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:23.699 05:44:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:23.699 { 00:28:23.699 "params": { 00:28:23.699 "name": "Nvme$subsystem", 00:28:23.699 "trtype": "$TEST_TRANSPORT", 00:28:23.699 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:23.699 "adrfam": "ipv4", 00:28:23.699 "trsvcid": "$NVMF_PORT", 00:28:23.699 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:23.699 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:23.699 "hdgst": ${hdgst:-false}, 00:28:23.699 "ddgst": ${ddgst:-false} 00:28:23.699 }, 00:28:23.699 "method": "bdev_nvme_attach_controller" 00:28:23.699 } 00:28:23.699 EOF 00:28:23.699 )") 00:28:23.699 05:44:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:23.699 05:44:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:23.699 05:44:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:23.699 { 00:28:23.699 "params": { 00:28:23.699 
"name": "Nvme$subsystem", 00:28:23.699 "trtype": "$TEST_TRANSPORT", 00:28:23.699 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:23.699 "adrfam": "ipv4", 00:28:23.699 "trsvcid": "$NVMF_PORT", 00:28:23.699 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:23.699 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:23.699 "hdgst": ${hdgst:-false}, 00:28:23.699 "ddgst": ${ddgst:-false} 00:28:23.699 }, 00:28:23.699 "method": "bdev_nvme_attach_controller" 00:28:23.699 } 00:28:23.699 EOF 00:28:23.699 )") 00:28:23.699 05:44:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:23.699 05:44:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:23.699 05:44:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:23.699 { 00:28:23.699 "params": { 00:28:23.699 "name": "Nvme$subsystem", 00:28:23.699 "trtype": "$TEST_TRANSPORT", 00:28:23.699 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:23.699 "adrfam": "ipv4", 00:28:23.699 "trsvcid": "$NVMF_PORT", 00:28:23.699 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:23.699 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:23.699 "hdgst": ${hdgst:-false}, 00:28:23.699 "ddgst": ${ddgst:-false} 00:28:23.699 }, 00:28:23.699 "method": "bdev_nvme_attach_controller" 00:28:23.699 } 00:28:23.699 EOF 00:28:23.699 )") 00:28:23.699 05:44:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:23.699 05:44:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:23.699 05:44:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:23.699 { 00:28:23.699 "params": { 00:28:23.699 "name": "Nvme$subsystem", 00:28:23.699 "trtype": "$TEST_TRANSPORT", 00:28:23.699 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:23.699 "adrfam": "ipv4", 00:28:23.699 "trsvcid": "$NVMF_PORT", 00:28:23.699 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:23.699 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:23.699 "hdgst": ${hdgst:-false}, 00:28:23.699 "ddgst": ${ddgst:-false} 00:28:23.699 }, 00:28:23.699 "method": "bdev_nvme_attach_controller" 00:28:23.699 } 00:28:23.699 EOF 00:28:23.699 )") 00:28:23.699 05:44:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:23.699 05:44:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:23.699 05:44:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:23.699 { 00:28:23.699 "params": { 00:28:23.699 "name": "Nvme$subsystem", 00:28:23.699 "trtype": "$TEST_TRANSPORT", 00:28:23.699 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:23.699 "adrfam": "ipv4", 00:28:23.699 "trsvcid": "$NVMF_PORT", 00:28:23.699 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:23.699 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:23.699 "hdgst": ${hdgst:-false}, 00:28:23.699 "ddgst": ${ddgst:-false} 00:28:23.699 }, 00:28:23.699 "method": "bdev_nvme_attach_controller" 00:28:23.699 } 00:28:23.699 EOF 00:28:23.699 )") 00:28:23.699 05:44:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:23.699 05:44:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in 
"${@:-1}" 00:28:23.699 05:44:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:23.699 { 00:28:23.699 "params": { 00:28:23.699 "name": "Nvme$subsystem", 00:28:23.699 "trtype": "$TEST_TRANSPORT", 00:28:23.699 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:23.699 "adrfam": "ipv4", 00:28:23.699 "trsvcid": "$NVMF_PORT", 00:28:23.699 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:23.699 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:23.699 "hdgst": ${hdgst:-false}, 00:28:23.699 "ddgst": ${ddgst:-false} 00:28:23.699 }, 00:28:23.699 "method": "bdev_nvme_attach_controller" 00:28:23.699 } 00:28:23.699 EOF 00:28:23.699 )") 00:28:23.699 [2024-12-13 05:44:23.669924] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:28:23.699 [2024-12-13 05:44:23.669971] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid426954 ] 00:28:23.699 05:44:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:23.699 05:44:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:23.699 05:44:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:23.699 { 00:28:23.699 "params": { 00:28:23.699 "name": "Nvme$subsystem", 00:28:23.699 "trtype": "$TEST_TRANSPORT", 00:28:23.699 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:23.699 "adrfam": "ipv4", 00:28:23.699 "trsvcid": "$NVMF_PORT", 00:28:23.699 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:23.699 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:23.699 "hdgst": ${hdgst:-false}, 00:28:23.699 "ddgst": ${ddgst:-false} 00:28:23.699 }, 00:28:23.699 "method": "bdev_nvme_attach_controller" 00:28:23.699 } 00:28:23.699 EOF 00:28:23.699 )") 00:28:23.699 05:44:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:23.699 05:44:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:23.699 05:44:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:23.699 { 00:28:23.699 "params": { 00:28:23.699 "name": "Nvme$subsystem", 00:28:23.699 "trtype": "$TEST_TRANSPORT", 00:28:23.699 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:23.699 "adrfam": "ipv4", 00:28:23.699 "trsvcid": "$NVMF_PORT", 00:28:23.699 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:23.699 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:23.699 "hdgst": ${hdgst:-false}, 00:28:23.699 "ddgst": ${ddgst:-false} 00:28:23.699 }, 00:28:23.699 "method": "bdev_nvme_attach_controller" 00:28:23.699 } 00:28:23.699 EOF 00:28:23.699 )") 00:28:23.699 05:44:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:23.699 05:44:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:23.699 05:44:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:23.699 { 00:28:23.699 "params": { 00:28:23.699 "name": "Nvme$subsystem", 00:28:23.699 "trtype": "$TEST_TRANSPORT", 00:28:23.699 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:23.699 "adrfam": 
"ipv4", 00:28:23.699 "trsvcid": "$NVMF_PORT", 00:28:23.699 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:23.699 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:23.699 "hdgst": ${hdgst:-false}, 00:28:23.699 "ddgst": ${ddgst:-false} 00:28:23.699 }, 00:28:23.699 "method": "bdev_nvme_attach_controller" 00:28:23.699 } 00:28:23.699 EOF 00:28:23.699 )") 00:28:23.699 05:44:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:23.699 05:44:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 00:28:23.699 05:44:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:28:23.699 05:44:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:23.699 "params": { 00:28:23.699 "name": "Nvme1", 00:28:23.699 "trtype": "tcp", 00:28:23.699 "traddr": "10.0.0.2", 00:28:23.699 "adrfam": "ipv4", 00:28:23.699 "trsvcid": "4420", 00:28:23.699 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:23.699 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:23.699 "hdgst": false, 00:28:23.699 "ddgst": false 00:28:23.699 }, 00:28:23.699 "method": "bdev_nvme_attach_controller" 00:28:23.699 },{ 00:28:23.699 "params": { 00:28:23.699 "name": "Nvme2", 00:28:23.699 "trtype": "tcp", 00:28:23.699 "traddr": "10.0.0.2", 00:28:23.699 "adrfam": "ipv4", 00:28:23.699 "trsvcid": "4420", 00:28:23.699 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:23.699 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:23.699 "hdgst": false, 00:28:23.699 "ddgst": false 00:28:23.699 }, 00:28:23.700 "method": "bdev_nvme_attach_controller" 00:28:23.700 },{ 00:28:23.700 "params": { 00:28:23.700 "name": "Nvme3", 00:28:23.700 "trtype": "tcp", 00:28:23.700 "traddr": "10.0.0.2", 00:28:23.700 "adrfam": "ipv4", 00:28:23.700 "trsvcid": "4420", 00:28:23.700 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:23.700 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:23.700 "hdgst": false, 00:28:23.700 "ddgst": false 00:28:23.700 }, 00:28:23.700 "method": "bdev_nvme_attach_controller" 00:28:23.700 },{ 00:28:23.700 "params": { 00:28:23.700 "name": "Nvme4", 00:28:23.700 "trtype": "tcp", 00:28:23.700 "traddr": "10.0.0.2", 00:28:23.700 "adrfam": "ipv4", 00:28:23.700 "trsvcid": "4420", 00:28:23.700 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:23.700 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:23.700 "hdgst": false, 00:28:23.700 "ddgst": false 00:28:23.700 }, 00:28:23.700 "method": "bdev_nvme_attach_controller" 00:28:23.700 },{ 00:28:23.700 "params": { 00:28:23.700 "name": "Nvme5", 00:28:23.700 "trtype": "tcp", 00:28:23.700 "traddr": "10.0.0.2", 00:28:23.700 "adrfam": "ipv4", 00:28:23.700 "trsvcid": "4420", 00:28:23.700 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:23.700 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:23.700 "hdgst": false, 00:28:23.700 "ddgst": false 00:28:23.700 }, 00:28:23.700 "method": "bdev_nvme_attach_controller" 00:28:23.700 },{ 00:28:23.700 "params": { 00:28:23.700 "name": "Nvme6", 00:28:23.700 "trtype": "tcp", 00:28:23.700 "traddr": "10.0.0.2", 00:28:23.700 "adrfam": "ipv4", 00:28:23.700 "trsvcid": "4420", 00:28:23.700 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:23.700 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:23.700 "hdgst": false, 00:28:23.700 "ddgst": false 00:28:23.700 }, 00:28:23.700 "method": "bdev_nvme_attach_controller" 00:28:23.700 },{ 00:28:23.700 "params": { 00:28:23.700 "name": "Nvme7", 00:28:23.700 "trtype": "tcp", 00:28:23.700 "traddr": "10.0.0.2", 00:28:23.700 
"adrfam": "ipv4", 00:28:23.700 "trsvcid": "4420", 00:28:23.700 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:23.700 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:23.700 "hdgst": false, 00:28:23.700 "ddgst": false 00:28:23.700 }, 00:28:23.700 "method": "bdev_nvme_attach_controller" 00:28:23.700 },{ 00:28:23.700 "params": { 00:28:23.700 "name": "Nvme8", 00:28:23.700 "trtype": "tcp", 00:28:23.700 "traddr": "10.0.0.2", 00:28:23.700 "adrfam": "ipv4", 00:28:23.700 "trsvcid": "4420", 00:28:23.700 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:23.700 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:23.700 "hdgst": false, 00:28:23.700 "ddgst": false 00:28:23.700 }, 00:28:23.700 "method": "bdev_nvme_attach_controller" 00:28:23.700 },{ 00:28:23.700 "params": { 00:28:23.700 "name": "Nvme9", 00:28:23.700 "trtype": "tcp", 00:28:23.700 "traddr": "10.0.0.2", 00:28:23.700 "adrfam": "ipv4", 00:28:23.700 "trsvcid": "4420", 00:28:23.700 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:23.700 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:23.700 "hdgst": false, 00:28:23.700 "ddgst": false 00:28:23.700 }, 00:28:23.700 "method": "bdev_nvme_attach_controller" 00:28:23.700 },{ 00:28:23.700 "params": { 00:28:23.700 "name": "Nvme10", 00:28:23.700 "trtype": "tcp", 00:28:23.700 "traddr": "10.0.0.2", 00:28:23.700 "adrfam": "ipv4", 00:28:23.700 "trsvcid": "4420", 00:28:23.700 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:23.700 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:23.700 "hdgst": false, 00:28:23.700 "ddgst": false 00:28:23.700 }, 00:28:23.700 "method": "bdev_nvme_attach_controller" 00:28:23.700 }' 00:28:23.959 [2024-12-13 05:44:23.746421] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:23.959 [2024-12-13 05:44:23.768626] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:28:25.336 Running I/O for 10 seconds... 
00:28:25.593 05:44:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:25.593 05:44:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:28:25.593 05:44:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:25.593 05:44:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:25.593 05:44:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:25.593 05:44:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:25.593 05:44:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:28:25.593 05:44:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:28:25.593 05:44:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:28:25.593 05:44:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:28:25.593 05:44:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:28:25.593 05:44:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:28:25.593 05:44:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:28:25.593 05:44:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:25.593 05:44:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:28:25.593 05:44:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:25.593 05:44:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:25.593 05:44:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:25.852 05:44:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:28:25.852 05:44:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:28:25.852 05:44:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:28:25.852 05:44:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:28:25.852 05:44:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:28:25.852 05:44:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 426954 00:28:25.852 05:44:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 426954 ']' 00:28:25.852 05:44:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 426954 00:28:25.852 05:44:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@959 -- # uname 00:28:25.853 05:44:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:25.853 05:44:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 426954 00:28:25.853 05:44:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:25.853 05:44:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:25.853 05:44:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 426954' 00:28:25.853 killing process with pid 426954 00:28:25.853 05:44:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 426954 00:28:25.853 05:44:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 426954 00:28:25.853 Received shutdown signal, test time was about 0.728728 seconds 00:28:25.853 00:28:25.853 Latency(us) 00:28:25.853 [2024-12-13T04:44:25.868Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:25.853 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:25.853 Verification LBA range: start 0x0 length 0x400 00:28:25.853 Nvme1n1 : 0.73 348.86 21.80 0.00 0.00 180794.58 15104.49 216705.71 00:28:25.853 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:25.853 Verification LBA range: start 0x0 length 0x400 00:28:25.853 Nvme2n1 : 0.70 275.67 17.23 0.00 0.00 223806.74 17351.44 203723.34 00:28:25.853 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:25.853 Verification LBA range: start 0x0 length 0x400 00:28:25.853 Nvme3n1 : 0.70 295.84 18.49 0.00 0.00 200272.19 7208.96 199728.76 00:28:25.853 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:25.853 Verification LBA range: start 0x0 length 0x400 00:28:25.853 Nvme4n1 : 0.69 277.53 17.35 0.00 0.00 211913.71 21720.50 186746.39 00:28:25.853 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:25.853 Verification LBA range: start 0x0 length 0x400 00:28:25.853 Nvme5n1 : 0.70 272.39 17.02 0.00 0.00 211206.01 17101.78 209715.20 00:28:25.853 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:25.853 Verification LBA range: start 0x0 length 0x400 00:28:25.853 Nvme6n1 : 0.73 264.63 16.54 0.00 0.00 212975.42 26464.06 203723.34 00:28:25.853 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:25.853 Verification LBA range: start 0x0 length 0x400 00:28:25.853 Nvme7n1 : 0.71 270.97 16.94 0.00 0.00 202147.51 14730.00 214708.42 00:28:25.853 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:25.853 Verification LBA range: start 0x0 length 0x400 00:28:25.853 Nvme8n1 : 0.71 268.62 16.79 0.00 0.00 199420.75 13856.18 211712.49 00:28:25.853 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:25.853 Verification LBA range: start 0x0 length 0x400 00:28:25.853 Nvme9n1 : 0.72 266.60 16.66 0.00 0.00 196103.48 31582.11 213709.78 00:28:25.853 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:25.853 Verification LBA range: start 0x0 length 0x400 00:28:25.853 Nvme10n1 : 0.72 265.70 16.61 0.00 0.00 
191889.80 17476.27 228689.43 00:28:25.853 [2024-12-13T04:44:25.868Z] =================================================================================================================== 00:28:25.853 [2024-12-13T04:44:25.868Z] Total : 2806.81 175.43 0.00 0.00 202340.05 7208.96 228689.43 00:28:26.111 05:44:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:28:27.048 05:44:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 426788 00:28:27.048 05:44:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:28:27.048 05:44:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:28:27.048 05:44:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:27.048 05:44:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:27.048 05:44:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:28:27.048 05:44:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:27.048 05:44:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:28:27.048 05:44:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:27.048 05:44:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:28:27.048 05:44:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:27.048 05:44:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:27.048 rmmod nvme_tcp 00:28:27.048 rmmod nvme_fabrics 00:28:27.048 rmmod nvme_keyring 00:28:27.048 05:44:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:27.048 05:44:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:28:27.048 05:44:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:28:27.048 05:44:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 426788 ']' 00:28:27.048 05:44:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 426788 00:28:27.048 05:44:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 426788 ']' 00:28:27.048 05:44:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 426788 00:28:27.048 05:44:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:28:27.048 05:44:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:27.048 05:44:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 426788 00:28:27.048 05:44:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:27.048 05:44:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:27.048 05:44:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 426788' 00:28:27.048 killing process with pid 426788 00:28:27.048 05:44:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 426788 00:28:27.048 05:44:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 426788 00:28:27.616 05:44:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:27.616 05:44:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:27.616 05:44:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:27.616 05:44:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:28:27.616 05:44:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:27.616 05:44:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:28:27.616 05:44:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:28:27.616 05:44:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:27.616 05:44:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:27.616 05:44:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:27.616 05:44:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:27.616 05:44:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:29.518 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:29.518 00:28:29.518 real 0m6.982s 00:28:29.518 user 0m19.895s 00:28:29.518 sys 0m1.276s 00:28:29.519 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:29.519 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:29.519 ************************************ 00:28:29.519 END TEST nvmf_shutdown_tc2 00:28:29.519 ************************************ 00:28:29.519 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:28:29.519 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:29.519 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:29.519 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:29.778 ************************************ 00:28:29.778 START TEST nvmf_shutdown_tc3 00:28:29.778 ************************************ 00:28:29.778 05:44:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:28:29.778 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:28:29.778 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:28:29.778 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:29.778 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:29.778 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:29.778 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:29.778 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:29.778 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:29.778 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:29.778 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:29.778 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:29.778 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:29.779 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:28:29.779 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:29.779 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:29.779 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:28:29.779 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:29.779 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:29.779 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:29.779 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:29.779 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:29.779 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:28:29.779 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:29.779 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:28:29.779 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:28:29.779 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:28:29.779 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@321 -- # local -ga x722 00:28:29.779 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:28:29.779 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:28:29.779 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:29.779 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:29.779 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:29.779 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:29.779 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:29.779 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:29.779 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:29.779 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:29.779 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:29.779 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:29.779 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:29.779 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:29.779 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:29.779 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:29.779 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:29.779 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:29.779 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:29.779 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:29.779 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:29.779 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:29.779 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:29.779 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:29.779 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:29.779 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:29.779 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:29.779 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:29.779 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:29.779 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:29.779 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:29.779 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:29.779 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:29.779 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:29.779 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:29.779 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:29.779 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:29.779 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:29.779 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:29.779 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:29.779 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:29.779 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:29.779 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:29.779 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:29.779 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:29.779 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:29.779 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:29.779 Found net devices under 0000:af:00.0: cvl_0_0 00:28:29.779 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:29.779 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:29.779 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:29.779 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:29.779 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:28:29.779 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:29.779 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:29.779 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:29.779 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:29.779 Found net devices under 0000:af:00.1: cvl_0_1 00:28:29.779 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:29.779 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:29.779 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:28:29.779 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:29.779 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:29.779 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:29.779 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:29.779 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:29.779 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:29.779 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:29.779 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:29.779 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:29.779 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:29.779 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:29.779 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:29.779 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:29.779 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:29.779 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:29.779 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:29.779 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:29.779 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:29.779 05:44:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:29.779 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:29.779 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:29.779 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:29.780 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:30.039 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:30.039 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:30.039 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:30.039 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:30.039 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.361 ms 00:28:30.039 00:28:30.039 --- 10.0.0.2 ping statistics --- 00:28:30.039 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:30.039 rtt min/avg/max/mdev = 0.361/0.361/0.361/0.000 ms 00:28:30.039 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:30.039 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:30.039 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.178 ms 00:28:30.039 00:28:30.039 --- 10.0.0.1 ping statistics --- 00:28:30.039 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:30.039 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:28:30.039 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:30.039 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:28:30.039 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:30.039 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:30.039 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:30.039 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:30.039 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:30.039 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:30.039 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:30.039 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:28:30.039 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:30.039 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:30.039 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:30.039 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=428063 00:28:30.040 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 428063 00:28:30.040 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:30.040 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 428063 ']' 00:28:30.040 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:30.040 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:30.040 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:30.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
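The nvmf_tcp_init trace above amounts to a small, fixed recipe for turning one dual-port NIC into a self-contained initiator/target pair on a single host. A condensed sketch of the same commands (the interface names cvl_0_0/cvl_0_1, the 10.0.0.0/24 addresses, and port 4420 are exactly the ones in the trace; run as root):

# target port goes into its own network namespace; initiator port stays in the host
NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                           # target side
ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator IP
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP port
ping -c 1 10.0.0.2                        # host -> namespace
ip netns exec "$NS" ping -c 1 10.0.0.1    # namespace -> host

The repeated "ip netns exec cvl_0_0_ns_spdk" prefix on the nvmf_tgt launch above appears to be NVMF_TARGET_NS_CMD being prepended once per wrapper; only entering the namespace matters, so the repetition is harmless.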
00:28:30.040 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:30.040 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:30.040 [2024-12-13 05:44:29.914068] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:28:30.040 [2024-12-13 05:44:29.914120] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:30.040 [2024-12-13 05:44:29.994155] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:30.040 [2024-12-13 05:44:30.019259] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:30.040 [2024-12-13 05:44:30.019296] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:30.040 [2024-12-13 05:44:30.019303] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:30.040 [2024-12-13 05:44:30.019309] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:30.040 [2024-12-13 05:44:30.019314] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:30.040 [2024-12-13 05:44:30.020673] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:28:30.040 [2024-12-13 05:44:30.020778] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:28:30.040 [2024-12-13 05:44:30.020883] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:28:30.040 [2024-12-13 05:44:30.020885] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:28:30.299 05:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:30.299 05:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:28:30.299 05:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:30.299 05:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:30.299 05:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:30.299 05:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:30.299 05:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:30.299 05:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.299 05:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:30.299 [2024-12-13 05:44:30.150748] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:30.299 05:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.299 05:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:28:30.299 05:44:30 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:28:30.299 05:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:30.299 05:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:30.299 05:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:30.299 05:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:30.299 05:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:30.299 05:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:30.299 05:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:30.299 05:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:30.299 05:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:30.299 05:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:30.299 05:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:30.299 05:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:30.299 05:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:30.300 05:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:30.300 05:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:30.300 05:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:30.300 05:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:30.300 05:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:30.300 05:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:30.300 05:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:30.300 05:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:30.300 05:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:30.300 05:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:30.300 05:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:28:30.300 05:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.300 05:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:30.300 Malloc1 
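Each target/shutdown.sh@29 cat above appends one subsystem's RPC block to rpcs.txt, and the rpc_cmd at @36 replays the whole file over /var/tmp/spdk.sock; Malloc1 here, and Malloc2 through Malloc10 just below, are the bdev names those blocks echo back as they execute. The file contents are elided from the trace, so the sketch below only shows the usual SPDK shape of such a block; the 64/512 malloc size and block size and the SPDK$i serial are illustrative, not read from the log:

# one block per subsystem i: backing ramdisk, subsystem, namespace, TCP listener
for i in {1..10}; do
cat <<EOF
bdev_malloc_create -b Malloc$i 64 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOF
done >> rpcs.txt

The subnqn values line up with the bdevperf config generated further down (nqn.2016-06.io.spdk:cnode1 through cnode10), and the listener address and port match the nvmf_tcp_listen notice below.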
00:28:30.300 [2024-12-13 05:44:30.269964] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:30.300 Malloc2 00:28:30.558 Malloc3 00:28:30.559 Malloc4 00:28:30.559 Malloc5 00:28:30.559 Malloc6 00:28:30.559 Malloc7 00:28:30.559 Malloc8 00:28:30.818 Malloc9 00:28:30.818 Malloc10 00:28:30.818 05:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.818 05:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:28:30.818 05:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:30.818 05:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:30.818 05:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=428237 00:28:30.818 05:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 428237 /var/tmp/bdevperf.sock 00:28:30.818 05:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 428237 ']' 00:28:30.818 05:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:30.818 05:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:28:30.818 05:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:30.818 05:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:30.818 05:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:30.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
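With the target populated, the perf side starts: bdevperf gets its own RPC socket (/var/tmp/bdevperf.sock) so it can be polled independently of the target, and its configuration arrives on /dev/fd/63, i.e. via process substitution from gen_nvmf_target_json (whose output is traced below). The @125 command line, condensed:

# -q 64: queue depth, -o 65536: 64 KiB I/Os, -w verify: read-back verify workload,
# -t 10: run for 10 seconds; config is piped in from the JSON generator via <(...)
./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
    --json <(gen_nvmf_target_json {1..10}) \
    -q 64 -o 65536 -w verify -t 10

Note that {1..10} expands to the same ten separate arguments as the literal gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 call above.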
00:28:30.818 05:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:28:30.818 05:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:30.818 05:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:28:30.818 05:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:30.818 05:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:30.818 05:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:30.818 { 00:28:30.818 "params": { 00:28:30.818 "name": "Nvme$subsystem", 00:28:30.818 "trtype": "$TEST_TRANSPORT", 00:28:30.818 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:30.818 "adrfam": "ipv4", 00:28:30.818 "trsvcid": "$NVMF_PORT", 00:28:30.818 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:30.818 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:30.818 "hdgst": ${hdgst:-false}, 00:28:30.818 "ddgst": ${ddgst:-false} 00:28:30.818 }, 00:28:30.818 "method": "bdev_nvme_attach_controller" 00:28:30.818 } 00:28:30.818 EOF 00:28:30.818 )") 00:28:30.818 05:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:30.818 05:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:30.818 05:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:30.818 { 00:28:30.818 "params": { 00:28:30.818 "name": "Nvme$subsystem", 00:28:30.818 "trtype": "$TEST_TRANSPORT", 00:28:30.818 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:30.818 "adrfam": "ipv4", 00:28:30.818 "trsvcid": "$NVMF_PORT", 00:28:30.819 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:30.819 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:30.819 "hdgst": ${hdgst:-false}, 00:28:30.819 "ddgst": ${ddgst:-false} 00:28:30.819 }, 00:28:30.819 "method": "bdev_nvme_attach_controller" 00:28:30.819 } 00:28:30.819 EOF 00:28:30.819 )") 00:28:30.819 05:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:30.819 05:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:30.819 05:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:30.819 { 00:28:30.819 "params": { 00:28:30.819 "name": "Nvme$subsystem", 00:28:30.819 "trtype": "$TEST_TRANSPORT", 00:28:30.819 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:30.819 "adrfam": "ipv4", 00:28:30.819 "trsvcid": "$NVMF_PORT", 00:28:30.819 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:30.819 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:30.819 "hdgst": ${hdgst:-false}, 00:28:30.819 "ddgst": ${ddgst:-false} 00:28:30.819 }, 00:28:30.819 "method": "bdev_nvme_attach_controller" 00:28:30.819 } 00:28:30.819 EOF 00:28:30.819 )") 00:28:30.819 05:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:30.819 05:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:30.819 05:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- 
# config+=("$(cat <<-EOF 00:28:30.819 { 00:28:30.819 "params": { 00:28:30.819 "name": "Nvme$subsystem", 00:28:30.819 "trtype": "$TEST_TRANSPORT", 00:28:30.819 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:30.819 "adrfam": "ipv4", 00:28:30.819 "trsvcid": "$NVMF_PORT", 00:28:30.819 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:30.819 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:30.819 "hdgst": ${hdgst:-false}, 00:28:30.819 "ddgst": ${ddgst:-false} 00:28:30.819 }, 00:28:30.819 "method": "bdev_nvme_attach_controller" 00:28:30.819 } 00:28:30.819 EOF 00:28:30.819 )") 00:28:30.819 05:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:30.819 05:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:30.819 05:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:30.819 { 00:28:30.819 "params": { 00:28:30.819 "name": "Nvme$subsystem", 00:28:30.819 "trtype": "$TEST_TRANSPORT", 00:28:30.819 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:30.819 "adrfam": "ipv4", 00:28:30.819 "trsvcid": "$NVMF_PORT", 00:28:30.819 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:30.819 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:30.819 "hdgst": ${hdgst:-false}, 00:28:30.819 "ddgst": ${ddgst:-false} 00:28:30.819 }, 00:28:30.819 "method": "bdev_nvme_attach_controller" 00:28:30.819 } 00:28:30.819 EOF 00:28:30.819 )") 00:28:30.819 05:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:30.819 05:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:30.819 05:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:30.819 { 00:28:30.819 "params": { 00:28:30.819 "name": "Nvme$subsystem", 00:28:30.819 "trtype": "$TEST_TRANSPORT", 00:28:30.819 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:30.819 "adrfam": "ipv4", 00:28:30.819 "trsvcid": "$NVMF_PORT", 00:28:30.819 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:30.819 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:30.819 "hdgst": ${hdgst:-false}, 00:28:30.819 "ddgst": ${ddgst:-false} 00:28:30.819 }, 00:28:30.819 "method": "bdev_nvme_attach_controller" 00:28:30.819 } 00:28:30.819 EOF 00:28:30.819 )") 00:28:30.819 05:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:30.819 05:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:30.819 05:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:30.819 { 00:28:30.819 "params": { 00:28:30.819 "name": "Nvme$subsystem", 00:28:30.819 "trtype": "$TEST_TRANSPORT", 00:28:30.819 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:30.819 "adrfam": "ipv4", 00:28:30.819 "trsvcid": "$NVMF_PORT", 00:28:30.819 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:30.819 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:30.819 "hdgst": ${hdgst:-false}, 00:28:30.819 "ddgst": ${ddgst:-false} 00:28:30.819 }, 00:28:30.819 "method": "bdev_nvme_attach_controller" 00:28:30.819 } 00:28:30.819 EOF 00:28:30.819 )") 00:28:30.819 05:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:30.819 [2024-12-13 05:44:30.746364] Starting SPDK 
v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:28:30.819 [2024-12-13 05:44:30.746413] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid428237 ] 00:28:30.819 05:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:30.819 05:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:30.819 { 00:28:30.819 "params": { 00:28:30.819 "name": "Nvme$subsystem", 00:28:30.819 "trtype": "$TEST_TRANSPORT", 00:28:30.819 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:30.819 "adrfam": "ipv4", 00:28:30.819 "trsvcid": "$NVMF_PORT", 00:28:30.819 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:30.819 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:30.819 "hdgst": ${hdgst:-false}, 00:28:30.819 "ddgst": ${ddgst:-false} 00:28:30.819 }, 00:28:30.819 "method": "bdev_nvme_attach_controller" 00:28:30.819 } 00:28:30.819 EOF 00:28:30.819 )") 00:28:30.819 05:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:30.819 05:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:30.819 05:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:30.819 { 00:28:30.819 "params": { 00:28:30.819 "name": "Nvme$subsystem", 00:28:30.819 "trtype": "$TEST_TRANSPORT", 00:28:30.819 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:30.819 "adrfam": "ipv4", 00:28:30.819 "trsvcid": "$NVMF_PORT", 00:28:30.819 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:30.819 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:30.819 "hdgst": ${hdgst:-false}, 00:28:30.819 "ddgst": ${ddgst:-false} 00:28:30.819 }, 00:28:30.819 "method": "bdev_nvme_attach_controller" 00:28:30.819 } 00:28:30.819 EOF 00:28:30.819 )") 00:28:30.819 05:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:30.819 05:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:30.819 05:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:30.819 { 00:28:30.819 "params": { 00:28:30.819 "name": "Nvme$subsystem", 00:28:30.819 "trtype": "$TEST_TRANSPORT", 00:28:30.819 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:30.819 "adrfam": "ipv4", 00:28:30.819 "trsvcid": "$NVMF_PORT", 00:28:30.819 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:30.819 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:30.819 "hdgst": ${hdgst:-false}, 00:28:30.819 "ddgst": ${ddgst:-false} 00:28:30.819 }, 00:28:30.819 "method": "bdev_nvme_attach_controller" 00:28:30.819 } 00:28:30.819 EOF 00:28:30.819 )") 00:28:30.819 05:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:30.819 05:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 
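The heredoc fragments accumulating above are gen_nvmf_target_json building one bdev_nvme_attach_controller stanza per subsystem; the jq . step then validates and pretty-prints the assembled document, and the printf below emits the comma-joined result. A reduced, runnable sketch of the pattern: the per-stanza fields are the ones visible in the trace, but the outer "subsystems"/"bdev" wrapper is an assumption about what bdevperf's --json loader expects, since only the joined stanzas appear in the log:

gen_config() {
  local subsystem config=()
  for subsystem in "$@"; do
    # one controller stanza per subsystem, fields as in the trace
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
    )")
  done
  # join the stanzas with commas (first character of IFS), then validate with jq
  local joined
  joined=$(IFS=,; printf '%s' "${config[*]}")
  jq . <<EOF
{"subsystems":[{"subsystem":"bdev","config":[$joined]}]}
EOF
}
gen_config 1 2 3   # emits a three-controller config on stdout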
00:28:30.819 05:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:28:30.819 05:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:30.819 "params": { 00:28:30.819 "name": "Nvme1", 00:28:30.819 "trtype": "tcp", 00:28:30.819 "traddr": "10.0.0.2", 00:28:30.819 "adrfam": "ipv4", 00:28:30.819 "trsvcid": "4420", 00:28:30.819 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:30.819 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:30.819 "hdgst": false, 00:28:30.819 "ddgst": false 00:28:30.819 }, 00:28:30.819 "method": "bdev_nvme_attach_controller" 00:28:30.819 },{ 00:28:30.819 "params": { 00:28:30.819 "name": "Nvme2", 00:28:30.819 "trtype": "tcp", 00:28:30.819 "traddr": "10.0.0.2", 00:28:30.819 "adrfam": "ipv4", 00:28:30.819 "trsvcid": "4420", 00:28:30.819 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:30.819 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:30.819 "hdgst": false, 00:28:30.819 "ddgst": false 00:28:30.819 }, 00:28:30.819 "method": "bdev_nvme_attach_controller" 00:28:30.819 },{ 00:28:30.819 "params": { 00:28:30.819 "name": "Nvme3", 00:28:30.819 "trtype": "tcp", 00:28:30.819 "traddr": "10.0.0.2", 00:28:30.819 "adrfam": "ipv4", 00:28:30.819 "trsvcid": "4420", 00:28:30.819 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:30.819 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:30.819 "hdgst": false, 00:28:30.819 "ddgst": false 00:28:30.819 }, 00:28:30.819 "method": "bdev_nvme_attach_controller" 00:28:30.819 },{ 00:28:30.819 "params": { 00:28:30.819 "name": "Nvme4", 00:28:30.819 "trtype": "tcp", 00:28:30.819 "traddr": "10.0.0.2", 00:28:30.819 "adrfam": "ipv4", 00:28:30.820 "trsvcid": "4420", 00:28:30.820 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:30.820 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:30.820 "hdgst": false, 00:28:30.820 "ddgst": false 00:28:30.820 }, 00:28:30.820 "method": "bdev_nvme_attach_controller" 00:28:30.820 },{ 00:28:30.820 "params": { 00:28:30.820 "name": "Nvme5", 00:28:30.820 "trtype": "tcp", 00:28:30.820 "traddr": "10.0.0.2", 00:28:30.820 "adrfam": "ipv4", 00:28:30.820 "trsvcid": "4420", 00:28:30.820 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:30.820 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:30.820 "hdgst": false, 00:28:30.820 "ddgst": false 00:28:30.820 }, 00:28:30.820 "method": "bdev_nvme_attach_controller" 00:28:30.820 },{ 00:28:30.820 "params": { 00:28:30.820 "name": "Nvme6", 00:28:30.820 "trtype": "tcp", 00:28:30.820 "traddr": "10.0.0.2", 00:28:30.820 "adrfam": "ipv4", 00:28:30.820 "trsvcid": "4420", 00:28:30.820 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:30.820 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:30.820 "hdgst": false, 00:28:30.820 "ddgst": false 00:28:30.820 }, 00:28:30.820 "method": "bdev_nvme_attach_controller" 00:28:30.820 },{ 00:28:30.820 "params": { 00:28:30.820 "name": "Nvme7", 00:28:30.820 "trtype": "tcp", 00:28:30.820 "traddr": "10.0.0.2", 00:28:30.820 "adrfam": "ipv4", 00:28:30.820 "trsvcid": "4420", 00:28:30.820 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:30.820 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:30.820 "hdgst": false, 00:28:30.820 "ddgst": false 00:28:30.820 }, 00:28:30.820 "method": "bdev_nvme_attach_controller" 00:28:30.820 },{ 00:28:30.820 "params": { 00:28:30.820 "name": "Nvme8", 00:28:30.820 "trtype": "tcp", 00:28:30.820 "traddr": "10.0.0.2", 00:28:30.820 "adrfam": "ipv4", 00:28:30.820 "trsvcid": "4420", 00:28:30.820 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:30.820 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:28:30.820 "hdgst": false, 00:28:30.820 "ddgst": false 00:28:30.820 }, 00:28:30.820 "method": "bdev_nvme_attach_controller" 00:28:30.820 },{ 00:28:30.820 "params": { 00:28:30.820 "name": "Nvme9", 00:28:30.820 "trtype": "tcp", 00:28:30.820 "traddr": "10.0.0.2", 00:28:30.820 "adrfam": "ipv4", 00:28:30.820 "trsvcid": "4420", 00:28:30.820 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:30.820 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:30.820 "hdgst": false, 00:28:30.820 "ddgst": false 00:28:30.820 }, 00:28:30.820 "method": "bdev_nvme_attach_controller" 00:28:30.820 },{ 00:28:30.820 "params": { 00:28:30.820 "name": "Nvme10", 00:28:30.820 "trtype": "tcp", 00:28:30.820 "traddr": "10.0.0.2", 00:28:30.820 "adrfam": "ipv4", 00:28:30.820 "trsvcid": "4420", 00:28:30.820 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:30.820 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:30.820 "hdgst": false, 00:28:30.820 "ddgst": false 00:28:30.820 }, 00:28:30.820 "method": "bdev_nvme_attach_controller" 00:28:30.820 }' 00:28:30.820 [2024-12-13 05:44:30.823318] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:31.078 [2024-12-13 05:44:30.846110] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:28:32.456 Running I/O for 10 seconds... 00:28:32.715 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:32.715 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:28:32.715 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:32.715 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.715 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:32.715 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.715 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:32.715 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:28:32.715 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:28:32.715 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:28:32.715 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:28:32.715 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:28:32.715 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:28:32.715 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:28:32.715 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:28:32.715 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:32.715 05:44:32 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.715 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:32.715 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.715 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:28:32.715 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:28:32.715 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:28:32.974 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:28:32.974 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:28:32.974 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:32.974 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:28:32.974 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.974 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:33.249 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.249 05:44:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131 00:28:33.249 05:44:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:28:33.249 05:44:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:28:33.249 05:44:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:28:33.249 05:44:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:28:33.249 05:44:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 428063 00:28:33.249 05:44:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 428063 ']' 00:28:33.249 05:44:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 428063 00:28:33.249 05:44:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:28:33.249 05:44:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:33.249 05:44:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 428063 00:28:33.249 05:44:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:33.249 05:44:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:33.249 05:44:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 428063' 00:28:33.249 killing process with pid 428063 05:44:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 428063 00:28:33.249 05:44:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 428063 00:28:33.249 [2024-12-13 05:44:33.080196] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dab40 is same with the state(6) to be set
[previous message repeated 2 more times for tqpair=0x21dab40, last at 05:44:33.080255]
00:28:33.249 [2024-12-13 05:44:33.081425] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2452650 is same with the state(6) to be set
[previous message repeated for tqpair=0x2452650 at every timestamp through 05:44:33.081825]
00:28:33.250 [2024-12-13 05:44:33.082928] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21db030 is same with the state(6) to be set
[previous message repeated for tqpair=0x21db030 at every timestamp through 05:44:33.083332]
00:28:33.250 [2024-12-13 05:44:33.084582] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21db500 is same with the state(6) to be set
[previous message repeated for tqpair=0x21db500 at every timestamp through 05:44:33.084903; log truncated here]
state(6) to be set 00:28:33.251 [2024-12-13 05:44:33.084909] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21db500 is same with the state(6) to be set 00:28:33.251 [2024-12-13 05:44:33.084921] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21db500 is same with the state(6) to be set 00:28:33.251 [2024-12-13 05:44:33.084927] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21db500 is same with the state(6) to be set 00:28:33.251 [2024-12-13 05:44:33.084933] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21db500 is same with the state(6) to be set 00:28:33.251 [2024-12-13 05:44:33.084939] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21db500 is same with the state(6) to be set 00:28:33.251 [2024-12-13 05:44:33.084945] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21db500 is same with the state(6) to be set 00:28:33.251 [2024-12-13 05:44:33.084951] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21db500 is same with the state(6) to be set 00:28:33.251 [2024-12-13 05:44:33.084958] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21db500 is same with the state(6) to be set 00:28:33.251 [2024-12-13 05:44:33.084964] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21db500 is same with the state(6) to be set 00:28:33.251 [2024-12-13 05:44:33.084970] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21db500 is same with the state(6) to be set 00:28:33.251 [2024-12-13 05:44:33.084976] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21db500 is same with the state(6) to be set 00:28:33.251 [2024-12-13 05:44:33.084983] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21db500 is same with the state(6) to be set 00:28:33.251 [2024-12-13 05:44:33.084989] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21db500 is same with the state(6) to be set 00:28:33.251 [2024-12-13 05:44:33.084995] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21db500 is same with the state(6) to be set 00:28:33.251 [2024-12-13 05:44:33.085001] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21db500 is same with the state(6) to be set 00:28:33.251 [2024-12-13 05:44:33.085007] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21db500 is same with the state(6) to be set 00:28:33.251 [2024-12-13 05:44:33.086029] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21db9f0 is same with the state(6) to be set 00:28:33.251 [2024-12-13 05:44:33.086055] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21db9f0 is same with the state(6) to be set 00:28:33.251 [2024-12-13 05:44:33.086064] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21db9f0 is same with the state(6) to be set 00:28:33.251 [2024-12-13 05:44:33.086071] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21db9f0 is same with the state(6) to be set 00:28:33.251 [2024-12-13 05:44:33.086078] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21db9f0 is same with the state(6) to be set 00:28:33.251 [2024-12-13 05:44:33.086084] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x21db9f0 is same with the state(6) to be set 00:28:33.251 [2024-12-13 05:44:33.086092] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21db9f0 is same with the state(6) to be set 00:28:33.251 [2024-12-13 05:44:33.086099] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21db9f0 is same with the state(6) to be set 00:28:33.251 [2024-12-13 05:44:33.086105] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21db9f0 is same with the state(6) to be set 00:28:33.251 [2024-12-13 05:44:33.086111] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21db9f0 is same with the state(6) to be set 00:28:33.251 [2024-12-13 05:44:33.086118] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21db9f0 is same with the state(6) to be set 00:28:33.251 [2024-12-13 05:44:33.086124] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21db9f0 is same with the state(6) to be set 00:28:33.251 [2024-12-13 05:44:33.086134] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21db9f0 is same with the state(6) to be set 00:28:33.251 [2024-12-13 05:44:33.086141] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21db9f0 is same with the state(6) to be set 00:28:33.251 [2024-12-13 05:44:33.086148] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21db9f0 is same with the state(6) to be set 00:28:33.251 [2024-12-13 05:44:33.086154] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21db9f0 is same with the state(6) to be set 00:28:33.251 [2024-12-13 05:44:33.086160] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21db9f0 is same with the state(6) to be set 00:28:33.251 [2024-12-13 05:44:33.086167] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21db9f0 is same with the state(6) to be set 00:28:33.251 [2024-12-13 05:44:33.086174] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21db9f0 is same with the state(6) to be set 00:28:33.251 [2024-12-13 05:44:33.086181] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21db9f0 is same with the state(6) to be set 00:28:33.251 [2024-12-13 05:44:33.086187] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21db9f0 is same with the state(6) to be set 00:28:33.251 [2024-12-13 05:44:33.086193] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21db9f0 is same with the state(6) to be set 00:28:33.251 [2024-12-13 05:44:33.086200] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21db9f0 is same with the state(6) to be set 00:28:33.251 [2024-12-13 05:44:33.086206] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21db9f0 is same with the state(6) to be set 00:28:33.251 [2024-12-13 05:44:33.086213] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21db9f0 is same with the state(6) to be set 00:28:33.251 [2024-12-13 05:44:33.086219] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21db9f0 is same with the state(6) to be set 00:28:33.251 [2024-12-13 05:44:33.086226] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21db9f0 is same with the state(6) to be set 00:28:33.251 [2024-12-13 
05:44:33.086232] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21db9f0 is same with the state(6) to be set 00:28:33.251 [2024-12-13 05:44:33.086239] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21db9f0 is same with the state(6) to be set 00:28:33.251 [2024-12-13 05:44:33.086245] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21db9f0 is same with the state(6) to be set 00:28:33.251 [2024-12-13 05:44:33.086251] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21db9f0 is same with the state(6) to be set 00:28:33.251 [2024-12-13 05:44:33.086257] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21db9f0 is same with the state(6) to be set 00:28:33.251 [2024-12-13 05:44:33.086263] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21db9f0 is same with the state(6) to be set 00:28:33.252 [2024-12-13 05:44:33.086269] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21db9f0 is same with the state(6) to be set 00:28:33.252 [2024-12-13 05:44:33.086275] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21db9f0 is same with the state(6) to be set 00:28:33.252 [2024-12-13 05:44:33.086283] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21db9f0 is same with the state(6) to be set 00:28:33.252 [2024-12-13 05:44:33.086289] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21db9f0 is same with the state(6) to be set 00:28:33.252 [2024-12-13 05:44:33.086295] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21db9f0 is same with the state(6) to be set 00:28:33.252 [2024-12-13 05:44:33.086301] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21db9f0 is same with the state(6) to be set 00:28:33.252 [2024-12-13 05:44:33.086309] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21db9f0 is same with the state(6) to be set 00:28:33.252 [2024-12-13 05:44:33.086315] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21db9f0 is same with the state(6) to be set 00:28:33.252 [2024-12-13 05:44:33.086321] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21db9f0 is same with the state(6) to be set 00:28:33.252 [2024-12-13 05:44:33.086327] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21db9f0 is same with the state(6) to be set 00:28:33.252 [2024-12-13 05:44:33.086334] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21db9f0 is same with the state(6) to be set 00:28:33.252 [2024-12-13 05:44:33.086340] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21db9f0 is same with the state(6) to be set 00:28:33.252 [2024-12-13 05:44:33.086346] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21db9f0 is same with the state(6) to be set 00:28:33.252 [2024-12-13 05:44:33.086352] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21db9f0 is same with the state(6) to be set 00:28:33.252 [2024-12-13 05:44:33.086359] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21db9f0 is same with the state(6) to be set 00:28:33.252 [2024-12-13 05:44:33.086366] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21db9f0 is same 
with the state(6) to be set 00:28:33.252 [2024-12-13 05:44:33.086372] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21db9f0 is same with the state(6) to be set 00:28:33.252 [2024-12-13 05:44:33.086378] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21db9f0 is same with the state(6) to be set 00:28:33.252 [2024-12-13 05:44:33.086385] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21db9f0 is same with the state(6) to be set 00:28:33.252 [2024-12-13 05:44:33.086391] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21db9f0 is same with the state(6) to be set 00:28:33.252 [2024-12-13 05:44:33.086397] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21db9f0 is same with the state(6) to be set 00:28:33.252 [2024-12-13 05:44:33.086403] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21db9f0 is same with the state(6) to be set 00:28:33.252 [2024-12-13 05:44:33.086409] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21db9f0 is same with the state(6) to be set 00:28:33.252 [2024-12-13 05:44:33.086415] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21db9f0 is same with the state(6) to be set 00:28:33.252 [2024-12-13 05:44:33.086421] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21db9f0 is same with the state(6) to be set 00:28:33.252 [2024-12-13 05:44:33.086427] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21db9f0 is same with the state(6) to be set 00:28:33.252 [2024-12-13 05:44:33.086434] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21db9f0 is same with the state(6) to be set 00:28:33.252 [2024-12-13 05:44:33.086440] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21db9f0 is same with the state(6) to be set 00:28:33.252 [2024-12-13 05:44:33.086445] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21db9f0 is same with the state(6) to be set 00:28:33.252 [2024-12-13 05:44:33.086457] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21db9f0 is same with the state(6) to be set 00:28:33.252 [2024-12-13 05:44:33.087104] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dbec0 is same with the state(6) to be set 00:28:33.252 [2024-12-13 05:44:33.087118] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dbec0 is same with the state(6) to be set 00:28:33.252 [2024-12-13 05:44:33.087124] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dbec0 is same with the state(6) to be set 00:28:33.252 [2024-12-13 05:44:33.087134] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dbec0 is same with the state(6) to be set 00:28:33.252 [2024-12-13 05:44:33.087140] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dbec0 is same with the state(6) to be set 00:28:33.252 [2024-12-13 05:44:33.087147] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dbec0 is same with the state(6) to be set 00:28:33.252 [2024-12-13 05:44:33.087153] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dbec0 is same with the state(6) to be set 00:28:33.252 [2024-12-13 05:44:33.087159] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dbec0 is same with the state(6) to be set 00:28:33.252 [2024-12-13 05:44:33.087165] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dbec0 is same with the state(6) to be set 00:28:33.252 [2024-12-13 05:44:33.087171] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dbec0 is same with the state(6) to be set 00:28:33.252 [2024-12-13 05:44:33.087177] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dbec0 is same with the state(6) to be set 00:28:33.252 [2024-12-13 05:44:33.087183] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dbec0 is same with the state(6) to be set 00:28:33.252 [2024-12-13 05:44:33.087189] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dbec0 is same with the state(6) to be set 00:28:33.252 [2024-12-13 05:44:33.087196] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dbec0 is same with the state(6) to be set 00:28:33.252 [2024-12-13 05:44:33.087201] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dbec0 is same with the state(6) to be set 00:28:33.252 [2024-12-13 05:44:33.087207] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dbec0 is same with the state(6) to be set 00:28:33.252 [2024-12-13 05:44:33.087213] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dbec0 is same with the state(6) to be set 00:28:33.252 [2024-12-13 05:44:33.087219] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dbec0 is same with the state(6) to be set 00:28:33.252 [2024-12-13 05:44:33.087225] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dbec0 is same with the state(6) to be set 00:28:33.252 [2024-12-13 05:44:33.087231] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dbec0 is same with the state(6) to be set 00:28:33.252 [2024-12-13 05:44:33.087236] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dbec0 is same with the state(6) to be set 00:28:33.252 [2024-12-13 05:44:33.087243] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dbec0 is same with the state(6) to be set 00:28:33.252 [2024-12-13 05:44:33.087249] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dbec0 is same with the state(6) to be set 00:28:33.252 [2024-12-13 05:44:33.087254] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dbec0 is same with the state(6) to be set 00:28:33.252 [2024-12-13 05:44:33.087260] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dbec0 is same with the state(6) to be set 00:28:33.252 [2024-12-13 05:44:33.087266] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dbec0 is same with the state(6) to be set 00:28:33.252 [2024-12-13 05:44:33.087274] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dbec0 is same with the state(6) to be set 00:28:33.252 [2024-12-13 05:44:33.087281] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dbec0 is same with the state(6) to be set 00:28:33.252 [2024-12-13 05:44:33.087287] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dbec0 is same with the 
state(6) to be set 00:28:33.252 [2024-12-13 05:44:33.087294] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dbec0 is same with the state(6) to be set 00:28:33.252 [2024-12-13 05:44:33.087301] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dbec0 is same with the state(6) to be set 00:28:33.252 [2024-12-13 05:44:33.087308] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dbec0 is same with the state(6) to be set 00:28:33.252 [2024-12-13 05:44:33.087315] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dbec0 is same with the state(6) to be set 00:28:33.252 [2024-12-13 05:44:33.087321] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dbec0 is same with the state(6) to be set 00:28:33.252 [2024-12-13 05:44:33.087327] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dbec0 is same with the state(6) to be set 00:28:33.252 [2024-12-13 05:44:33.087333] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dbec0 is same with the state(6) to be set 00:28:33.252 [2024-12-13 05:44:33.087339] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dbec0 is same with the state(6) to be set 00:28:33.252 [2024-12-13 05:44:33.087346] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dbec0 is same with the state(6) to be set 00:28:33.252 [2024-12-13 05:44:33.087351] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dbec0 is same with the state(6) to be set 00:28:33.252 [2024-12-13 05:44:33.087357] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dbec0 is same with the state(6) to be set 00:28:33.252 [2024-12-13 05:44:33.087363] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dbec0 is same with the state(6) to be set 00:28:33.252 [2024-12-13 05:44:33.087369] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dbec0 is same with the state(6) to be set 00:28:33.252 [2024-12-13 05:44:33.087375] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dbec0 is same with the state(6) to be set 00:28:33.252 [2024-12-13 05:44:33.087381] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dbec0 is same with the state(6) to be set 00:28:33.252 [2024-12-13 05:44:33.087387] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dbec0 is same with the state(6) to be set 00:28:33.252 [2024-12-13 05:44:33.087393] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dbec0 is same with the state(6) to be set 00:28:33.252 [2024-12-13 05:44:33.087399] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dbec0 is same with the state(6) to be set 00:28:33.252 [2024-12-13 05:44:33.087405] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dbec0 is same with the state(6) to be set 00:28:33.252 [2024-12-13 05:44:33.087411] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dbec0 is same with the state(6) to be set 00:28:33.252 [2024-12-13 05:44:33.087417] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dbec0 is same with the state(6) to be set 00:28:33.252 [2024-12-13 05:44:33.087423] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x21dbec0 is same with the state(6) to be set 00:28:33.252 [2024-12-13 05:44:33.087429] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dbec0 is same with the state(6) to be set 00:28:33.252 [2024-12-13 05:44:33.087435] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dbec0 is same with the state(6) to be set 00:28:33.252 [2024-12-13 05:44:33.087441] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dbec0 is same with the state(6) to be set 00:28:33.252 [2024-12-13 05:44:33.087455] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dbec0 is same with the state(6) to be set 00:28:33.253 [2024-12-13 05:44:33.087465] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dbec0 is same with the state(6) to be set 00:28:33.253 [2024-12-13 05:44:33.087471] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dbec0 is same with the state(6) to be set 00:28:33.253 [2024-12-13 05:44:33.087478] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dbec0 is same with the state(6) to be set 00:28:33.253 [2024-12-13 05:44:33.087484] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dbec0 is same with the state(6) to be set 00:28:33.253 [2024-12-13 05:44:33.087490] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dbec0 is same with the state(6) to be set 00:28:33.253 [2024-12-13 05:44:33.087496] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dbec0 is same with the state(6) to be set 00:28:33.253 [2024-12-13 05:44:33.087502] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dbec0 is same with the state(6) to be set 00:28:33.253 [2024-12-13 05:44:33.087509] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dbec0 is same with the state(6) to be set 00:28:33.253 [2024-12-13 05:44:33.089015] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dc880 is same with the state(6) to be set 00:28:33.253 [2024-12-13 05:44:33.089029] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dc880 is same with the state(6) to be set 00:28:33.253 [2024-12-13 05:44:33.089036] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dc880 is same with the state(6) to be set 00:28:33.253 [2024-12-13 05:44:33.089042] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dc880 is same with the state(6) to be set 00:28:33.253 [2024-12-13 05:44:33.089048] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dc880 is same with the state(6) to be set 00:28:33.253 [2024-12-13 05:44:33.089054] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dc880 is same with the state(6) to be set 00:28:33.253 [2024-12-13 05:44:33.089060] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dc880 is same with the state(6) to be set 00:28:33.253 [2024-12-13 05:44:33.089067] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dc880 is same with the state(6) to be set 00:28:33.253 [2024-12-13 05:44:33.089073] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dc880 is same with the state(6) to be set 00:28:33.253 [2024-12-13 
05:44:33.089079] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dc880 is same with the state(6) to be set 00:28:33.253 [2024-12-13 05:44:33.089085] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dc880 is same with the state(6) to be set 00:28:33.253 [2024-12-13 05:44:33.089091] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dc880 is same with the state(6) to be set 00:28:33.253 [2024-12-13 05:44:33.089098] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dc880 is same with the state(6) to be set 00:28:33.253 [2024-12-13 05:44:33.089105] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dc880 is same with the state(6) to be set 00:28:33.253 [2024-12-13 05:44:33.089110] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dc880 is same with the state(6) to be set 00:28:33.253 [2024-12-13 05:44:33.089116] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dc880 is same with the state(6) to be set 00:28:33.253 [2024-12-13 05:44:33.089122] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dc880 is same with the state(6) to be set 00:28:33.253 [2024-12-13 05:44:33.089128] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dc880 is same with the state(6) to be set 00:28:33.253 [2024-12-13 05:44:33.089134] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dc880 is same with the state(6) to be set 00:28:33.253 [2024-12-13 05:44:33.089140] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dc880 is same with the state(6) to be set 00:28:33.253 [2024-12-13 05:44:33.089146] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dc880 is same with the state(6) to be set 00:28:33.253 [2024-12-13 05:44:33.089155] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dc880 is same with the state(6) to be set 00:28:33.253 [2024-12-13 05:44:33.089161] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dc880 is same with the state(6) to be set 00:28:33.253 [2024-12-13 05:44:33.089168] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dc880 is same with the state(6) to be set 00:28:33.253 [2024-12-13 05:44:33.089173] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dc880 is same with the state(6) to be set 00:28:33.253 [2024-12-13 05:44:33.089179] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dc880 is same with the state(6) to be set 00:28:33.253 [2024-12-13 05:44:33.089186] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dc880 is same with the state(6) to be set 00:28:33.253 [2024-12-13 05:44:33.089192] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dc880 is same with the state(6) to be set 00:28:33.253 [2024-12-13 05:44:33.089197] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dc880 is same with the state(6) to be set 00:28:33.253 [2024-12-13 05:44:33.089203] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dc880 is same with the state(6) to be set 00:28:33.253 [2024-12-13 05:44:33.089210] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dc880 is same 
with the state(6) to be set 00:28:33.253 [2024-12-13 05:44:33.089216] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dc880 is same with the state(6) to be set 00:28:33.253 [2024-12-13 05:44:33.089221] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dc880 is same with the state(6) to be set 00:28:33.253 [2024-12-13 05:44:33.089228] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dc880 is same with the state(6) to be set 00:28:33.253 [2024-12-13 05:44:33.089235] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dc880 is same with the state(6) to be set 00:28:33.253 [2024-12-13 05:44:33.089240] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dc880 is same with the state(6) to be set 00:28:33.253 [2024-12-13 05:44:33.089247] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dc880 is same with the state(6) to be set 00:28:33.253 [2024-12-13 05:44:33.089253] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dc880 is same with the state(6) to be set 00:28:33.253 [2024-12-13 05:44:33.089258] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dc880 is same with the state(6) to be set 00:28:33.253 [2024-12-13 05:44:33.089265] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dc880 is same with the state(6) to be set 00:28:33.253 [2024-12-13 05:44:33.089271] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dc880 is same with the state(6) to be set 00:28:33.253 [2024-12-13 05:44:33.089277] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dc880 is same with the state(6) to be set 00:28:33.253 [2024-12-13 05:44:33.089283] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dc880 is same with the state(6) to be set 00:28:33.253 [2024-12-13 05:44:33.089289] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dc880 is same with the state(6) to be set 00:28:33.253 [2024-12-13 05:44:33.089295] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dc880 is same with the state(6) to be set 00:28:33.253 [2024-12-13 05:44:33.089301] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dc880 is same with the state(6) to be set 00:28:33.253 [2024-12-13 05:44:33.089307] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dc880 is same with the state(6) to be set 00:28:33.253 [2024-12-13 05:44:33.089313] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dc880 is same with the state(6) to be set 00:28:33.253 [2024-12-13 05:44:33.089320] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dc880 is same with the state(6) to be set 00:28:33.253 [2024-12-13 05:44:33.089326] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dc880 is same with the state(6) to be set 00:28:33.253 [2024-12-13 05:44:33.089333] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dc880 is same with the state(6) to be set 00:28:33.253 [2024-12-13 05:44:33.089339] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dc880 is same with the state(6) to be set 00:28:33.253 [2024-12-13 05:44:33.089345] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dc880 is same with the state(6) to be set 00:28:33.253 [2024-12-13 05:44:33.089351] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dc880 is same with the state(6) to be set 00:28:33.253 [2024-12-13 05:44:33.089357] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dc880 is same with the state(6) to be set 00:28:33.253 [2024-12-13 05:44:33.089363] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dc880 is same with the state(6) to be set 00:28:33.253 [2024-12-13 05:44:33.089369] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dc880 is same with the state(6) to be set 00:28:33.253 [2024-12-13 05:44:33.089375] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dc880 is same with the state(6) to be set 00:28:33.253 [2024-12-13 05:44:33.089380] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dc880 is same with the state(6) to be set 00:28:33.253 [2024-12-13 05:44:33.089387] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dc880 is same with the state(6) to be set 00:28:33.253 [2024-12-13 05:44:33.089393] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dc880 is same with the state(6) to be set 00:28:33.253 [2024-12-13 05:44:33.089399] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dc880 is same with the state(6) to be set 00:28:33.253 [2024-12-13 05:44:33.089405] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dc880 is same with the state(6) to be set 00:28:33.253 [2024-12-13 05:44:33.090209] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dcd70 is same with the state(6) to be set 00:28:33.254 [2024-12-13 05:44:33.090222] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dcd70 is same with the state(6) to be set 00:28:33.254 [2024-12-13 05:44:33.090228] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dcd70 is same with the state(6) to be set 00:28:33.254 [2024-12-13 05:44:33.090235] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dcd70 is same with the state(6) to be set 00:28:33.254 [2024-12-13 05:44:33.090241] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dcd70 is same with the state(6) to be set 00:28:33.254 [2024-12-13 05:44:33.090247] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dcd70 is same with the state(6) to be set 00:28:33.254 [2024-12-13 05:44:33.090253] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dcd70 is same with the state(6) to be set 00:28:33.254 [2024-12-13 05:44:33.090259] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dcd70 is same with the state(6) to be set 00:28:33.254 [2024-12-13 05:44:33.090265] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dcd70 is same with the state(6) to be set 00:28:33.254 [2024-12-13 05:44:33.090271] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dcd70 is same with the state(6) to be set 00:28:33.254 [2024-12-13 05:44:33.090277] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dcd70 is same with the 
state(6) to be set 00:28:33.254 [2024-12-13 05:44:33.090283] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dcd70 is same with the state(6) to be set 00:28:33.254 [2024-12-13 05:44:33.090289] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dcd70 is same with the state(6) to be set 00:28:33.254 [2024-12-13 05:44:33.090297] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dcd70 is same with the state(6) to be set 00:28:33.254 [2024-12-13 05:44:33.090303] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dcd70 is same with the state(6) to be set 00:28:33.254 [2024-12-13 05:44:33.090310] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dcd70 is same with the state(6) to be set 00:28:33.254 [2024-12-13 05:44:33.090316] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dcd70 is same with the state(6) to be set 00:28:33.254 [2024-12-13 05:44:33.090322] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dcd70 is same with the state(6) to be set 00:28:33.254 [2024-12-13 05:44:33.090328] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dcd70 is same with the state(6) to be set 00:28:33.254 [2024-12-13 05:44:33.090334] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dcd70 is same with the state(6) to be set 00:28:33.254 [2024-12-13 05:44:33.090340] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dcd70 is same with the state(6) to be set 00:28:33.254 [2024-12-13 05:44:33.090346] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dcd70 is same with the state(6) to be set 00:28:33.254 [2024-12-13 05:44:33.090352] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dcd70 is same with the state(6) to be set 00:28:33.254 [2024-12-13 05:44:33.090358] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dcd70 is same with the state(6) to be set 00:28:33.254 [2024-12-13 05:44:33.090364] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dcd70 is same with the state(6) to be set 00:28:33.254 [2024-12-13 05:44:33.090370] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dcd70 is same with the state(6) to be set 00:28:33.254 [2024-12-13 05:44:33.090376] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dcd70 is same with the state(6) to be set 00:28:33.254 [2024-12-13 05:44:33.090382] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dcd70 is same with the state(6) to be set 00:28:33.254 [2024-12-13 05:44:33.090388] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dcd70 is same with the state(6) to be set 00:28:33.254 [2024-12-13 05:44:33.090394] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dcd70 is same with the state(6) to be set 00:28:33.254 [2024-12-13 05:44:33.090400] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dcd70 is same with the state(6) to be set 00:28:33.254 [2024-12-13 05:44:33.090406] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dcd70 is same with the state(6) to be set 00:28:33.254 [2024-12-13 05:44:33.090412] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x21dcd70 is same with the state(6) to be set 00:28:33.254 [2024-12-13 05:44:33.090418] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dcd70 is same with the state(6) to be set 00:28:33.254 [2024-12-13 05:44:33.090424] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dcd70 is same with the state(6) to be set 00:28:33.254 [2024-12-13 05:44:33.090430] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dcd70 is same with the state(6) to be set 00:28:33.254 [2024-12-13 05:44:33.090436] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dcd70 is same with the state(6) to be set 00:28:33.254 [2024-12-13 05:44:33.090442] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dcd70 is same with the state(6) to be set 00:28:33.254 [2024-12-13 05:44:33.090455] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dcd70 is same with the state(6) to be set 00:28:33.254 [2024-12-13 05:44:33.090461] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dcd70 is same with the state(6) to be set 00:28:33.254 [2024-12-13 05:44:33.090469] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dcd70 is same with the state(6) to be set 00:28:33.254 [2024-12-13 05:44:33.090476] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dcd70 is same with the state(6) to be set 00:28:33.254 [2024-12-13 05:44:33.090482] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dcd70 is same with the state(6) to be set 00:28:33.254 [2024-12-13 05:44:33.090488] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dcd70 is same with the state(6) to be set 00:28:33.254 [2024-12-13 05:44:33.090494] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dcd70 is same with the state(6) to be set 00:28:33.254 [2024-12-13 05:44:33.090500] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dcd70 is same with the state(6) to be set 00:28:33.254 [2024-12-13 05:44:33.090506] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dcd70 is same with the state(6) to be set 00:28:33.254 [2024-12-13 05:44:33.090512] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dcd70 is same with the state(6) to be set 00:28:33.254 [2024-12-13 05:44:33.090517] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dcd70 is same with the state(6) to be set 00:28:33.254 [2024-12-13 05:44:33.090523] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dcd70 is same with the state(6) to be set 00:28:33.254 [2024-12-13 05:44:33.090529] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dcd70 is same with the state(6) to be set 00:28:33.254 [2024-12-13 05:44:33.091023] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24522d0 is same with the state(6) to be set 00:28:33.254 [2024-12-13 05:44:33.091036] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24522d0 is same with the state(6) to be set 00:28:33.254 [2024-12-13 05:44:33.091043] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24522d0 is same with the state(6) to be set 00:28:33.254 [2024-12-13 
05:44:33.091050] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24522d0 is same with the state(6) to be set 00:28:33.254 [2024-12-13 05:44:33.091056] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24522d0 is same with the state(6) to be set 00:28:33.254 [2024-12-13 05:44:33.091062] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24522d0 is same with the state(6) to be set 00:28:33.254 [2024-12-13 05:44:33.091068] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24522d0 is same with the state(6) to be set 00:28:33.254 [2024-12-13 05:44:33.091075] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24522d0 is same with the state(6) to be set 00:28:33.254 [2024-12-13 05:44:33.091081] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24522d0 is same with the state(6) to be set 00:28:33.254 [2024-12-13 05:44:33.091087] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24522d0 is same with the state(6) to be set 00:28:33.254 [2024-12-13 05:44:33.091093] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24522d0 is same with the state(6) to be set 00:28:33.254 [2024-12-13 05:44:33.091099] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24522d0 is same with the state(6) to be set 00:28:33.254 [2024-12-13 05:44:33.091105] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24522d0 is same with the state(6) to be set 00:28:33.254 [2024-12-13 05:44:33.091110] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24522d0 is same with the state(6) to be set 00:28:33.254 [2024-12-13 05:44:33.091116] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24522d0 is same with the state(6) to be set 00:28:33.254 [2024-12-13 05:44:33.091122] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24522d0 is same with the state(6) to be set 00:28:33.254 [2024-12-13 05:44:33.091131] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24522d0 is same with the state(6) to be set 00:28:33.254 [2024-12-13 05:44:33.091138] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24522d0 is same with the state(6) to be set 00:28:33.254 [2024-12-13 05:44:33.091144] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24522d0 is same with the state(6) to be set 00:28:33.254 [2024-12-13 05:44:33.091149] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24522d0 is same with the state(6) to be set 00:28:33.254 [2024-12-13 05:44:33.091155] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24522d0 is same with the state(6) to be set 00:28:33.254 [2024-12-13 05:44:33.091162] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24522d0 is same with the state(6) to be set 00:28:33.254 [2024-12-13 05:44:33.091168] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24522d0 is same with the state(6) to be set 00:28:33.254 [2024-12-13 05:44:33.091174] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24522d0 is same with the state(6) to be set 00:28:33.254 [2024-12-13 05:44:33.091181] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24522d0 is same 
with the state(6) to be set 00:28:33.254 [2024-12-13 05:44:33.091188] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24522d0 is same with the state(6) to be set 00:28:33.254 [2024-12-13 05:44:33.091194] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24522d0 is same with the state(6) to be set 00:28:33.254 [2024-12-13 05:44:33.091199] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24522d0 is same with the state(6) to be set 00:28:33.254 [2024-12-13 05:44:33.091205] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24522d0 is same with the state(6) to be set 00:28:33.254 [2024-12-13 05:44:33.091212] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24522d0 is same with the state(6) to be set 00:28:33.254 [2024-12-13 05:44:33.091218] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24522d0 is same with the state(6) to be set 00:28:33.254 [2024-12-13 05:44:33.091224] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24522d0 is same with the state(6) to be set 00:28:33.254 [2024-12-13 05:44:33.091230] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24522d0 is same with the state(6) to be set 00:28:33.254 [2024-12-13 05:44:33.091237] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24522d0 is same with the state(6) to be set 00:28:33.254 [2024-12-13 05:44:33.091243] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24522d0 is same with the state(6) to be set 00:28:33.254 [2024-12-13 05:44:33.091249] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24522d0 is same with the state(6) to be set 00:28:33.255 [2024-12-13 05:44:33.091254] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24522d0 is same with the state(6) to be set 00:28:33.255 [2024-12-13 05:44:33.091261] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24522d0 is same with the state(6) to be set 00:28:33.255 [2024-12-13 05:44:33.091267] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24522d0 is same with the state(6) to be set 00:28:33.255 [2024-12-13 05:44:33.091273] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24522d0 is same with the state(6) to be set 00:28:33.255 [2024-12-13 05:44:33.091279] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24522d0 is same with the state(6) to be set 00:28:33.255 [2024-12-13 05:44:33.091285] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24522d0 is same with the state(6) to be set 00:28:33.255 [2024-12-13 05:44:33.091291] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24522d0 is same with the state(6) to be set 00:28:33.255 [2024-12-13 05:44:33.091298] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24522d0 is same with the state(6) to be set 00:28:33.255 [2024-12-13 05:44:33.091305] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24522d0 is same with the state(6) to be set 00:28:33.255 [2024-12-13 05:44:33.091311] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24522d0 is same with the state(6) to be set 00:28:33.255 [2024-12-13 05:44:33.091317] 
00:28:33.255 [2024-12-13 05:44:33.092124] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:28:33.255 [2024-12-13 05:44:33.092154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:33.255 [2024-12-13 05:44:33.092164] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:28:33.255 [2024-12-13 05:44:33.092171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:33.255 [2024-12-13 05:44:33.092179] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:28:33.255 [2024-12-13 05:44:33.092185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:33.255 [2024-12-13 05:44:33.092193] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:28:33.255 [2024-12-13 05:44:33.092199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:33.255 [2024-12-13 05:44:33.092205] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1467970 is same with the state(6) to be set
[... same ASYNC EVENT REQUEST / ABORTED - SQ DELETION sequence repeated for tqpair=0x144a960, tqpair=0x1488a70, tqpair=0xf58610, tqpair=0x1460c00, tqpair=0x102b420 ...]
00:28:33.255 [2024-12-13 05:44:33.092653] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:28:33.255 [2024-12-13 05:44:33.092661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:33.255 [2024-12-13 05:44:33.092668] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:28:33.255 [2024-12-13 05:44:33.092675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:33.255 [2024-12-13 05:44:33.092682] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0
cdw10:00000000 cdw11:00000000 00:28:33.255 [2024-12-13 05:44:33.092688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.255 [2024-12-13 05:44:33.092695] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:33.255 [2024-12-13 05:44:33.092701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.255 [2024-12-13 05:44:33.092709] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10330b0 is same with the state(6) to be set 00:28:33.255 [2024-12-13 05:44:33.092732] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:33.255 [2024-12-13 05:44:33.092739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.255 [2024-12-13 05:44:33.092746] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:33.255 [2024-12-13 05:44:33.092752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.255 [2024-12-13 05:44:33.092759] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:33.255 [2024-12-13 05:44:33.092766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.255 [2024-12-13 05:44:33.092773] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:33.255 [2024-12-13 05:44:33.092780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.256 [2024-12-13 05:44:33.092788] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x101d270 is same with the state(6) to be set 00:28:33.256 [2024-12-13 05:44:33.092810] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:33.256 [2024-12-13 05:44:33.092818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.256 [2024-12-13 05:44:33.092825] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:33.256 [2024-12-13 05:44:33.092832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.256 [2024-12-13 05:44:33.092838] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:33.256 [2024-12-13 05:44:33.092844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.256 [2024-12-13 05:44:33.092851] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:33.256 [2024-12-13 05:44:33.092857] nvme_qpair.c: 
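The "(00/08)" pair printed on every aborted completion above is the NVMe status code type / status code: SCT 0x00 (generic command status) with SC 0x08 (aborted due to SQ deletion). A minimal C sketch of that decode against SPDK's public spdk/nvme_spec.h definitions; the helper name is illustrative, not part of the test code:

    #include <stdbool.h>
    #include "spdk/nvme_spec.h"

    /* Hypothetical helper: true when a completion carries the
     * "ABORTED - SQ DELETION (00/08)" status seen in the log. */
    static bool
    cpl_aborted_by_sq_deletion(const struct spdk_nvme_cpl *cpl)
    {
        /* spdk_nvme_print_completion formats status as (sct/sc):
         * SPDK_NVME_SCT_GENERIC == 0x00,
         * SPDK_NVME_SC_ABORTED_SQ_DELETION == 0x08. */
        return cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
               cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION;
    }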
00:28:33.256 [2024-12-13 05:44:33.093133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:33.256 [2024-12-13 05:44:33.093151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
(the same pair repeats for every command queued on I/O qpair 1: WRITE cid:2-63, lba 24832-32640, then READ cid:0-1, lba 24576-24704, all len:128)
00:28:33.257 [2024-12-13 05:44:33.094120] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
(a second abort pass then begins: WRITE cid:0-4, lba 24576-25088, each completed ABORTED - SQ DELETION (00/08))
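The "-6" in the CQ transport error above is -ENXIO ("No such device or address"), the value spdk_nvme_qpair_process_completions() returns once the TCP connection behind the qpair is gone. A hedged sketch of the host-side reaction, under the assumption that the application resets the controller on transport loss (function name and policy are ours, not the test's actual code):

    #include <errno.h>
    #include "spdk/nvme.h"

    /* Hypothetical poll step: process completions and escalate a
     * transport loss to a controller reset. Returns 0 on success. */
    static int
    poll_io_qpair(struct spdk_nvme_ctrlr *ctrlr, struct spdk_nvme_qpair *qpair)
    {
        /* max_completions == 0 means no limit. */
        int32_t rc = spdk_nvme_qpair_process_completions(qpair, 0);

        if (rc == -ENXIO) {
            /* Connection to the target is gone: outstanding commands
             * come back as ABORTED - SQ DELETION, as in the log, and
             * the controller must be reset to recover. */
            return spdk_nvme_ctrlr_reset(ctrlr);
        }
        return rc < 0 ? (int)rc : 0;
    }

This matches the ordering visible in the log: the aborted completions are drained first, the CQ transport error is reported, and the reset follows further down.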
(the pass continues: WRITE cid:5-33, lba 25216-28800, each completed ABORTED - SQ DELETION (00/08))
00:28:33.258 [2024-12-13 05:44:33.100357] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24522d0 is same with the state(6) to be set
(the target repeats this error fourteen times, 05:44:33.100357 through 05:44:33.100471)
(the abort pass then resumes: WRITE cid:34-58, lba 28928-32000, each completed ABORTED - SQ DELETION (00/08))
05:44:33.109673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.259 [2024-12-13 05:44:33.109683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.259 [2024-12-13 05:44:33.109692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.259 [2024-12-13 05:44:33.109703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.259 [2024-12-13 05:44:33.109711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.259 [2024-12-13 05:44:33.109722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.259 [2024-12-13 05:44:33.109733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.259 [2024-12-13 05:44:33.109743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.259 [2024-12-13 05:44:33.109752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.259 [2024-12-13 05:44:33.109763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.259 [2024-12-13 05:44:33.109772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.259 [2024-12-13 05:44:33.109782] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1437020 is same with the state(6) to be set 00:28:33.259 [2024-12-13 05:44:33.110207] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1467970 (9): Bad file descriptor 00:28:33.259 [2024-12-13 05:44:33.110240] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144a960 (9): Bad file descriptor 00:28:33.259 [2024-12-13 05:44:33.110276] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:33.259 [2024-12-13 05:44:33.110287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.259 [2024-12-13 05:44:33.110298] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:33.259 [2024-12-13 05:44:33.110307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.259 [2024-12-13 05:44:33.110316] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:33.259 [2024-12-13 05:44:33.110325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.259 [2024-12-13 05:44:33.110335] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: 
00:28:33.259 [2024-12-13 05:44:33.110353] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1488890 is same with the state(6) to be set
00:28:33.259 [2024-12-13 05:44:33.110373 .. 05:44:33.110481] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair (9): Bad file descriptor, for tqpair=0x1488a70, 0xf58610, 0x1460c00, 0x102b420, 0x10330b0, 0x101d270, 0x1028c40 (7 entries condensed)
00:28:33.259 [2024-12-13 05:44:33.113480] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:28:33.259 [2024-12-13 05:44:33.113823 .. 05:44:33.115023] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 cid:4..63 nsid:1 lba:25088..32640 (step 128) len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 (60 identical command/completion pairs condensed)
00:28:33.261 [2024-12-13 05:44:33.115222] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller
00:28:33.261 [2024-12-13 05:44:33.115424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.261 [2024-12-13 05:44:33.115441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102b420 with addr=10.0.0.2, port=4420
00:28:33.261 [2024-12-13 05:44:33.115460] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102b420 is same with the state(6) to be set
00:28:33.261 [2024-12-13 05:44:33.116274, 05:44:33.117587] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 (2 entries condensed)
00:28:33.261 [2024-12-13 05:44:33.117612] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:28:33.261 [2024-12-13 05:44:33.117824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.261 [2024-12-13 05:44:33.117841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf58610 with addr=10.0.0.2, port=4420
00:28:33.261 [2024-12-13 05:44:33.117856] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf58610 is same with the state(6) to be set
00:28:33.261 [2024-12-13 05:44:33.117870] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102b420 (9): Bad file descriptor
00:28:33.261 [2024-12-13 05:44:33.117930 .. 05:44:33.118174] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 (5 entries condensed)
00:28:33.261 [2024-12-13 05:44:33.118275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.261 [2024-12-13 05:44:33.118290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144a960 with addr=10.0.0.2, port=4420
00:28:33.261 [2024-12-13 05:44:33.118299] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144a960 is same with the state(6) to be set
00:28:33.261 [2024-12-13 05:44:33.118310] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf58610 (9): Bad file descriptor
00:28:33.261 [2024-12-13 05:44:33.118320] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:28:33.261 [2024-12-13 05:44:33.118328] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:28:33.261 [2024-12-13 05:44:33.118337] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:28:33.261 [2024-12-13 05:44:33.118347] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:28:33.261 [2024-12-13 05:44:33.118684] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144a960 (9): Bad file descriptor
00:28:33.261 [2024-12-13 05:44:33.118698 .. 05:44:33.118790] same four-entry sequence (Ctrlr is in error state / controller reinitialization failed / in failed state. / Resetting controller failed.) repeated for [nqn.2016-06.io.spdk:cnode6, 1] and [nqn.2016-06.io.spdk:cnode8, 1] (8 entries condensed)
00:28:33.261 [2024-12-13 05:44:33.120206] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1488890 (9): Bad file descriptor
00:28:33.261 [2024-12-13 05:44:33.120342] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:33.261 [2024-12-13 05:44:33.120369 .. 05:44:33.120458] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: WRITE sqid:1 cid:0..4 nsid:1 lba:32768..33280 (step 128) len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 (5 identical command/completion pairs condensed)
00:28:33.261 [2024-12-13 05:44:33.120468 .. 05:44:33.121443] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 cid:6..63 nsid:1 lba:25344..32640 (step 128) len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 (58 identical command/completion pairs condensed)
00:28:33.263 [2024-12-13 05:44:33.121456] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102ecf0 is same with the state(6) to be set
00:28:33.263 [2024-12-13 05:44:33.122586 .. 05:44:33.122670] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: WRITE sqid:1 cid:0..4 nsid:1 lba:32768..33280 (step 128) len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 (5 identical command/completion pairs condensed)
00:28:33.263 [2024-12-13 05:44:33.122679 .. 05:44:33.123354] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: interleaved READ sqid:1 cid:23..55 nsid:1 lba:27520..31616 (step 128) and WRITE sqid:1 cid:5..11 nsid:1 lba:33408..34176 (step 128), len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 (40 command/completion pairs condensed)
00:28:33.264 [2024-12-13 05:44:33.123363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT
0x0 00:28:33.264 [2024-12-13 05:44:33.123370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.264 [2024-12-13 05:44:33.123380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.264 [2024-12-13 05:44:33.123387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.264 [2024-12-13 05:44:33.123396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.264 [2024-12-13 05:44:33.123404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.264 [2024-12-13 05:44:33.123413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.264 [2024-12-13 05:44:33.123420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.264 [2024-12-13 05:44:33.123429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.264 [2024-12-13 05:44:33.123436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.264 [2024-12-13 05:44:33.123446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.264 [2024-12-13 05:44:33.123461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.264 [2024-12-13 05:44:33.123470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.264 [2024-12-13 05:44:33.123477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.264 [2024-12-13 05:44:33.123486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.264 [2024-12-13 05:44:33.123494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.264 [2024-12-13 05:44:33.123503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.264 [2024-12-13 05:44:33.123510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.264 [2024-12-13 05:44:33.123520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.264 [2024-12-13 05:44:33.123527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.264 [2024-12-13 05:44:33.123536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.264 
[2024-12-13 05:44:33.123544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.264 [2024-12-13 05:44:33.123553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.264 [2024-12-13 05:44:33.123560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.264 [2024-12-13 05:44:33.123569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.264 [2024-12-13 05:44:33.123577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.264 [2024-12-13 05:44:33.123586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.264 [2024-12-13 05:44:33.123594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.264 [2024-12-13 05:44:33.123604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.264 [2024-12-13 05:44:33.123611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.264 [2024-12-13 05:44:33.123620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.264 [2024-12-13 05:44:33.123628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.264 [2024-12-13 05:44:33.123637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.264 [2024-12-13 05:44:33.123644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.264 [2024-12-13 05:44:33.123653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.264 [2024-12-13 05:44:33.123661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.264 [2024-12-13 05:44:33.123672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.264 [2024-12-13 05:44:33.123680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.264 [2024-12-13 05:44:33.123688] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a3ff0 is same with the state(6) to be set 00:28:33.264 [2024-12-13 05:44:33.124810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.264 [2024-12-13 05:44:33.124823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.264 [2024-12-13 05:44:33.124835] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.264 [2024-12-13 05:44:33.124844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.265 [2024-12-13 05:44:33.124854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.265 [2024-12-13 05:44:33.124861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.265 [2024-12-13 05:44:33.124871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.265 [2024-12-13 05:44:33.124879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.265 [2024-12-13 05:44:33.124888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.265 [2024-12-13 05:44:33.124896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.265 [2024-12-13 05:44:33.124905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.265 [2024-12-13 05:44:33.124913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.265 [2024-12-13 05:44:33.124922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.265 [2024-12-13 05:44:33.124929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.265 [2024-12-13 05:44:33.124939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.265 [2024-12-13 05:44:33.124946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.265 [2024-12-13 05:44:33.124955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.265 [2024-12-13 05:44:33.124963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.265 [2024-12-13 05:44:33.124972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.265 [2024-12-13 05:44:33.124980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.265 [2024-12-13 05:44:33.124989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.265 [2024-12-13 05:44:33.124996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.265 [2024-12-13 05:44:33.125008] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.265 [2024-12-13 05:44:33.125016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.265 [2024-12-13 05:44:33.125025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.265 [2024-12-13 05:44:33.125033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.265 [2024-12-13 05:44:33.125042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.265 [2024-12-13 05:44:33.125049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.265 [2024-12-13 05:44:33.125059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.265 [2024-12-13 05:44:33.125066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.265 [2024-12-13 05:44:33.125075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.265 [2024-12-13 05:44:33.125083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.265 [2024-12-13 05:44:33.125092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.265 [2024-12-13 05:44:33.125099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.265 [2024-12-13 05:44:33.125109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.265 [2024-12-13 05:44:33.125116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.265 [2024-12-13 05:44:33.125125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.265 [2024-12-13 05:44:33.125133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.265 [2024-12-13 05:44:33.125143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.265 [2024-12-13 05:44:33.125150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.265 [2024-12-13 05:44:33.125160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.265 [2024-12-13 05:44:33.125167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.265 [2024-12-13 05:44:33.125176] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.265 [2024-12-13 05:44:33.125183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.265 [2024-12-13 05:44:33.125193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.265 [2024-12-13 05:44:33.125200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.265 [2024-12-13 05:44:33.125209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.265 [2024-12-13 05:44:33.125219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.265 [2024-12-13 05:44:33.125229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.265 [2024-12-13 05:44:33.125237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.265 [2024-12-13 05:44:33.125246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.265 [2024-12-13 05:44:33.125254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.265 [2024-12-13 05:44:33.125264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.265 [2024-12-13 05:44:33.125271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.265 [2024-12-13 05:44:33.125280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.265 [2024-12-13 05:44:33.125287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.265 [2024-12-13 05:44:33.125297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.265 [2024-12-13 05:44:33.125304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.265 [2024-12-13 05:44:33.125313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.265 [2024-12-13 05:44:33.125321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.265 [2024-12-13 05:44:33.125330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.265 [2024-12-13 05:44:33.125337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.265 [2024-12-13 05:44:33.125347] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.265 [2024-12-13 05:44:33.125354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.265 [2024-12-13 05:44:33.125363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.265 [2024-12-13 05:44:33.125371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.265 [2024-12-13 05:44:33.125380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.265 [2024-12-13 05:44:33.125389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.265 [2024-12-13 05:44:33.125398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.265 [2024-12-13 05:44:33.125405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.265 [2024-12-13 05:44:33.125414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.265 [2024-12-13 05:44:33.125421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.265 [2024-12-13 05:44:33.125432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.265 [2024-12-13 05:44:33.125440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.265 [2024-12-13 05:44:33.125452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.265 [2024-12-13 05:44:33.125461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.265 [2024-12-13 05:44:33.125470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.265 [2024-12-13 05:44:33.125477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.265 [2024-12-13 05:44:33.125487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.265 [2024-12-13 05:44:33.125494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.265 [2024-12-13 05:44:33.125503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.265 [2024-12-13 05:44:33.125511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.265 [2024-12-13 05:44:33.125520] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.265 [2024-12-13 05:44:33.125528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.265 [2024-12-13 05:44:33.125537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.265 [2024-12-13 05:44:33.125545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.266 [2024-12-13 05:44:33.125554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.266 [2024-12-13 05:44:33.125561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.266 [2024-12-13 05:44:33.125570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.266 [2024-12-13 05:44:33.125578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.266 [2024-12-13 05:44:33.125587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.266 [2024-12-13 05:44:33.125594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.266 [2024-12-13 05:44:33.125603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.266 [2024-12-13 05:44:33.125611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.266 [2024-12-13 05:44:33.125620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.266 [2024-12-13 05:44:33.125627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.266 [2024-12-13 05:44:33.125636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.266 [2024-12-13 05:44:33.125646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.266 [2024-12-13 05:44:33.125655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.266 [2024-12-13 05:44:33.125663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.266 [2024-12-13 05:44:33.125672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.266 [2024-12-13 05:44:33.125680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.266 [2024-12-13 05:44:33.125689] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.266 [2024-12-13 05:44:33.125696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.266 [2024-12-13 05:44:33.125706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.266 [2024-12-13 05:44:33.125713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.266 [2024-12-13 05:44:33.125722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.266 [2024-12-13 05:44:33.125730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.266 [2024-12-13 05:44:33.125739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.266 [2024-12-13 05:44:33.125746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.266 [2024-12-13 05:44:33.125756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.266 [2024-12-13 05:44:33.125763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.266 [2024-12-13 05:44:33.125772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.266 [2024-12-13 05:44:33.125780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.266 [2024-12-13 05:44:33.125790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.266 [2024-12-13 05:44:33.125797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.266 [2024-12-13 05:44:33.125806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.266 [2024-12-13 05:44:33.125814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.266 [2024-12-13 05:44:33.125824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.266 [2024-12-13 05:44:33.125831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.266 [2024-12-13 05:44:33.125840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.266 [2024-12-13 05:44:33.125848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.266 [2024-12-13 05:44:33.125860] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.266 [2024-12-13 05:44:33.125868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.266 [2024-12-13 05:44:33.125877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.266 [2024-12-13 05:44:33.125885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.266 [2024-12-13 05:44:33.125894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.266 [2024-12-13 05:44:33.125902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.266 [2024-12-13 05:44:33.125910] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a5300 is same with the state(6) to be set 00:28:33.266 [2024-12-13 05:44:33.127046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.266 [2024-12-13 05:44:33.127062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.266 [2024-12-13 05:44:33.127075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.266 [2024-12-13 05:44:33.127083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.266 [2024-12-13 05:44:33.127093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.266 [2024-12-13 05:44:33.127100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.266 [2024-12-13 05:44:33.127110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.266 [2024-12-13 05:44:33.127118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.266 [2024-12-13 05:44:33.127128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.266 [2024-12-13 05:44:33.127135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.266 [2024-12-13 05:44:33.127145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.266 [2024-12-13 05:44:33.127152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.266 [2024-12-13 05:44:33.127161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.266 [2024-12-13 05:44:33.127169] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.266 [2024-12-13 05:44:33.127179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.266 [2024-12-13 05:44:33.127186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.266 [2024-12-13 05:44:33.127196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.266 [2024-12-13 05:44:33.127203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.266 [2024-12-13 05:44:33.127215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.266 [2024-12-13 05:44:33.127223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.266 [2024-12-13 05:44:33.127232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.266 [2024-12-13 05:44:33.127239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.266 [2024-12-13 05:44:33.127249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.266 [2024-12-13 05:44:33.127256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.266 [2024-12-13 05:44:33.127266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.266 [2024-12-13 05:44:33.127273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.267 [2024-12-13 05:44:33.127282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.267 [2024-12-13 05:44:33.127290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.267 [2024-12-13 05:44:33.127299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.267 [2024-12-13 05:44:33.127306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.267 [2024-12-13 05:44:33.127315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.267 [2024-12-13 05:44:33.127323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.267 [2024-12-13 05:44:33.127332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.267 [2024-12-13 05:44:33.127340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.267 [2024-12-13 05:44:33.127350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.267 [2024-12-13 05:44:33.127358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.267 [2024-12-13 05:44:33.127367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.267 [2024-12-13 05:44:33.127375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.267 [2024-12-13 05:44:33.127384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.267 [2024-12-13 05:44:33.127391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.267 [2024-12-13 05:44:33.127401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.267 [2024-12-13 05:44:33.127408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.267 [2024-12-13 05:44:33.127418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.267 [2024-12-13 05:44:33.127427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.267 [2024-12-13 05:44:33.127436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.267 [2024-12-13 05:44:33.127444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.267 [2024-12-13 05:44:33.127459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.267 [2024-12-13 05:44:33.127467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.267 [2024-12-13 05:44:33.127476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.267 [2024-12-13 05:44:33.127484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.267 [2024-12-13 05:44:33.127494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.267 [2024-12-13 05:44:33.127502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.267 [2024-12-13 05:44:33.127511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.267 [2024-12-13 05:44:33.127519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.267 [2024-12-13 05:44:33.127529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.267 [2024-12-13 05:44:33.127536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.267 [2024-12-13 05:44:33.127545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.267 [2024-12-13 05:44:33.127553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.267 [2024-12-13 05:44:33.127562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.267 [2024-12-13 05:44:33.127570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.267 [2024-12-13 05:44:33.127579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.267 [2024-12-13 05:44:33.127586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.267 [2024-12-13 05:44:33.127595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.267 [2024-12-13 05:44:33.127603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.267 [2024-12-13 05:44:33.127613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.267 [2024-12-13 05:44:33.127620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.267 [2024-12-13 05:44:33.127629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.267 [2024-12-13 05:44:33.127636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.267 [2024-12-13 05:44:33.127647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.267 [2024-12-13 05:44:33.127655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.267 [2024-12-13 05:44:33.127664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.267 [2024-12-13 05:44:33.127672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.267 [2024-12-13 05:44:33.127681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.267 [2024-12-13 05:44:33.127688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:28:33.267 [2024-12-13 05:44:33.127698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.267 [2024-12-13 05:44:33.127705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.267 [2024-12-13 05:44:33.127714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.267 [2024-12-13 05:44:33.127722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.267 [2024-12-13 05:44:33.127732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.267 [2024-12-13 05:44:33.127739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.267 [2024-12-13 05:44:33.127749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.267 [2024-12-13 05:44:33.127756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.267 [2024-12-13 05:44:33.127766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.267 [2024-12-13 05:44:33.127773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.267 [2024-12-13 05:44:33.127782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.267 [2024-12-13 05:44:33.127790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.267 [2024-12-13 05:44:33.127799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.267 [2024-12-13 05:44:33.127806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.267 [2024-12-13 05:44:33.127816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.267 [2024-12-13 05:44:33.127823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.267 [2024-12-13 05:44:33.127832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.267 [2024-12-13 05:44:33.127840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.267 [2024-12-13 05:44:33.127849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.267 [2024-12-13 05:44:33.127859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:33.267 [2024-12-13 05:44:33.127868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:33.267 [2024-12-13 05:44:33.127875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 15 similar READ / ABORTED - SQ DELETION pairs (cid 48-62, lba 30720-32512) elided ...]
00:28:33.268 [2024-12-13 05:44:33.128139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:33.268 [2024-12-13 05:44:33.128146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:33.268 [2024-12-13 05:44:33.128154] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a6610 is same with the state(6) to be set
00:28:33.268 [2024-12-13 05:44:33.129216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:33.268 [2024-12-13 05:44:33.129229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 62 similar READ / ABORTED - SQ DELETION pairs (cid 1-62, lba 24704-32512) elided ...]
00:28:33.269 [2024-12-13 05:44:33.130167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:33.269 [2024-12-13 05:44:33.130173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:33.269 [2024-12-13 05:44:33.130180] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a75d0 is same with the state(6) to be set
00:28:33.269 [2024-12-13 05:44:33.131174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:33.269 [2024-12-13 05:44:33.131188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 57 similar READ / ABORTED - SQ DELETION pairs (cid 6-62, lba 17152-24320) elided ...]
00:28:33.271 [2024-12-13 05:44:33.132039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:33.271 [2024-12-13 05:44:33.132046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:33.271 [2024-12-13 05:44:33.132053] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16aaf00 is same with the state(6) to be set
00:28:33.271 [2024-12-13 05:44:33.132985] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:28:33.271 [2024-12-13 05:44:33.133002] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller
00:28:33.271 [2024-12-13 05:44:33.133013] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller
00:28:33.271 [2024-12-13 05:44:33.133023] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller
00:28:33.271 [2024-12-13 05:44:33.133095] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress.
00:28:33.271 [2024-12-13 05:44:33.133107] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress.
00:28:33.271 [2024-12-13 05:44:33.133189] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
00:28:33.271 [2024-12-13 05:44:33.133201] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:28:33.271 [2024-12-13 05:44:33.133501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.271 [2024-12-13 05:44:33.133518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10330b0 with addr=10.0.0.2, port=4420
00:28:33.271 [2024-12-13 05:44:33.133526] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10330b0 is same with the state(6) to be set
00:28:33.271 [2024-12-13 05:44:33.133677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.271 [2024-12-13 05:44:33.133688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1028c40 with addr=10.0.0.2, port=4420
00:28:33.271 [2024-12-13 05:44:33.133695] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1028c40 is same with the state(6) to be set
00:28:33.271 [2024-12-13 05:44:33.133836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.271 [2024-12-13 05:44:33.133845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101d270 with addr=10.0.0.2, port=4420
00:28:33.271 [2024-12-13 05:44:33.133853] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x101d270 is same with the state(6) to be set
00:28:33.271 [2024-12-13 05:44:33.134042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.271 [2024-12-13 05:44:33.134053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1467970 with addr=10.0.0.2, port=4420
00:28:33.271 [2024-12-13 05:44:33.134060] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1467970 is same with the state(6) to be set
00:28:33.271 [2024-12-13 05:44:33.135170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:33.271 [2024-12-13 05:44:33.135185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 59 similar READ / ABORTED - SQ DELETION pairs (cid 1-59, lba 16512-23936) elided ...]
00:28:33.273 [2024-12-13 05:44:33.136066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:33.273 [2024-12-13 05:44:33.136073] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.273 [2024-12-13 05:44:33.136080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.273 [2024-12-13 05:44:33.136087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.273 [2024-12-13 05:44:33.136095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.273 [2024-12-13 05:44:33.136103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.273 [2024-12-13 05:44:33.136111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.273 [2024-12-13 05:44:33.136117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.273 [2024-12-13 05:44:33.136124] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a9bd0 is same with the state(6) to be set 00:28:33.273 [2024-12-13 05:44:33.137291] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:28:33.273 [2024-12-13 05:44:33.137307] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:28:33.273 [2024-12-13 05:44:33.137316] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller 00:28:33.273 task offset: 24832 on job bdev=Nvme1n1 fails 00:28:33.273 00:28:33.273 Latency(us) 00:28:33.273 [2024-12-13T04:44:33.288Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:33.273 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:33.273 Job: Nvme1n1 ended in about 0.86 seconds with error 00:28:33.273 Verification LBA range: start 0x0 length 0x400 00:28:33.273 Nvme1n1 : 0.86 222.70 13.92 74.23 0.00 213185.83 17601.10 215707.06 00:28:33.273 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:33.273 Job: Nvme2n1 ended in about 0.87 seconds with error 00:28:33.273 Verification LBA range: start 0x0 length 0x400 00:28:33.273 Nvme2n1 : 0.87 225.64 14.10 73.30 0.00 208042.66 15978.30 206719.27 00:28:33.273 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:33.273 Job: Nvme3n1 ended in about 0.88 seconds with error 00:28:33.273 Verification LBA range: start 0x0 length 0x400 00:28:33.273 Nvme3n1 : 0.88 245.64 15.35 73.12 0.00 191475.97 12170.97 212711.13 00:28:33.273 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:33.273 Job: Nvme4n1 ended in about 0.88 seconds with error 00:28:33.273 Verification LBA range: start 0x0 length 0x400 00:28:33.273 Nvme4n1 : 0.88 218.80 13.68 72.93 0.00 205412.57 15291.73 212711.13 00:28:33.273 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:33.273 Job: Nvme5n1 ended in about 0.88 seconds with error 00:28:33.273 Verification LBA range: start 0x0 length 0x400 00:28:33.273 Nvme5n1 : 0.88 218.25 13.64 72.75 0.00 202130.04 18474.91 217704.35 00:28:33.273 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:33.273 
Job: Nvme6n1 ended in about 0.86 seconds with error 00:28:33.273 Verification LBA range: start 0x0 length 0x400 00:28:33.273 Nvme6n1 : 0.86 222.35 13.90 74.12 0.00 194201.60 17725.93 215707.06 00:28:33.273 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:33.273 Job: Nvme7n1 ended in about 0.88 seconds with error 00:28:33.273 Verification LBA range: start 0x0 length 0x400 00:28:33.273 Nvme7n1 : 0.88 217.75 13.61 72.58 0.00 194938.39 17601.10 214708.42 00:28:33.273 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:33.273 Job: Nvme8n1 ended in about 0.87 seconds with error 00:28:33.273 Verification LBA range: start 0x0 length 0x400 00:28:33.273 Nvme8n1 : 0.87 225.82 14.11 69.13 0.00 187526.13 3308.01 199728.76 00:28:33.273 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:33.273 Job: Nvme9n1 ended in about 0.89 seconds with error 00:28:33.273 Verification LBA range: start 0x0 length 0x400 00:28:33.273 Nvme9n1 : 0.89 144.20 9.01 72.10 0.00 251642.31 18474.91 223696.21 00:28:33.273 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:33.273 Job: Nvme10n1 ended in about 0.88 seconds with error 00:28:33.273 Verification LBA range: start 0x0 length 0x400 00:28:33.273 Nvme10n1 : 0.88 150.52 9.41 66.77 0.00 244757.13 17601.10 232684.01 00:28:33.273 [2024-12-13T04:44:33.288Z] =================================================================================================================== 00:28:33.273 [2024-12-13T04:44:33.288Z] Total : 2091.68 130.73 721.04 0.00 207139.28 3308.01 232684.01 00:28:33.273 [2024-12-13 05:44:33.169400] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:28:33.273 [2024-12-13 05:44:33.169471] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller 00:28:33.273 [2024-12-13 05:44:33.169808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.273 [2024-12-13 05:44:33.169825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1460c00 with addr=10.0.0.2, port=4420 00:28:33.273 [2024-12-13 05:44:33.169835] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1460c00 is same with the state(6) to be set 00:28:33.273 [2024-12-13 05:44:33.169934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.273 [2024-12-13 05:44:33.169944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1488a70 with addr=10.0.0.2, port=4420 00:28:33.273 [2024-12-13 05:44:33.169951] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1488a70 is same with the state(6) to be set 00:28:33.273 [2024-12-13 05:44:33.169964] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10330b0 (9): Bad file descriptor 00:28:33.273 [2024-12-13 05:44:33.169976] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1028c40 (9): Bad file descriptor 00:28:33.273 [2024-12-13 05:44:33.169985] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x101d270 (9): Bad file descriptor 00:28:33.273 [2024-12-13 05:44:33.169993] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1467970 (9): Bad file descriptor 00:28:33.273 [2024-12-13 05:44:33.170345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.273 
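A note on the table above: each row is one bdevperf job; runtime(s) is how long the job actually ran before the run was cut short, IOPS and MiB/s are averages over that window, Fail/s counts I/Os that completed in error, and Average/min/max are per-I/O latencies in microseconds (hence Latency(us)). The wall of ABORTED - SQ DELETION completions earlier is the expected signature of submission queues being deleted while 64 reads per queue were still in flight. A comparable verify workload can be driven by hand; a minimal sketch, with the JSON config assumed rather than copied from the harness:

```bash
# Editorial sketch (not part of the log): attach one of the subsystems above
# as an NVMe-oF bdev and run a short verify workload against it. bdevperf and
# bdev_nvme_attach_controller are standard SPDK pieces; the file path and
# sizing here are assumptions.
cat > /tmp/bdevperf.json <<'EOF'
{
  "subsystems": [{
    "subsystem": "bdev",
    "config": [{
      "method": "bdev_nvme_attach_controller",
      "params": {
        "name": "Nvme1", "trtype": "TCP", "adrfam": "IPv4",
        "traddr": "10.0.0.2", "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1"
      }
    }]
  }]
}
EOF
# queue depth 64 and 65536-byte I/Os match the job parameters in the table
./build/examples/bdevperf --json /tmp/bdevperf.json -q 64 -o 65536 -w verify -t 10
```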
00:28:33.273 [2024-12-13 05:44:33.170360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102b420 with addr=10.0.0.2, port=4420
00:28:33.273 [2024-12-13 05:44:33.170367] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102b420 is same with the state(6) to be set
00:28:33.273 [... same connect() failed, errno = 111 / sock connection error / recv state triple repeated for tqpair 0xf58610, 0x144a960 and 0x1488890 (05:44:33.170507-170812), all against 10.0.0.2:4420 ...]
00:28:33.273 [2024-12-13 05:44:33.170821] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1460c00 (9): Bad file descriptor
00:28:33.273 [2024-12-13 05:44:33.170830] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1488a70 (9): Bad file descriptor
00:28:33.273 [2024-12-13 05:44:33.170838] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state
00:28:33.273 [2024-12-13 05:44:33.170845] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed
00:28:33.273 [2024-12-13 05:44:33.170858] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state.
00:28:33.273 [2024-12-13 05:44:33.170867] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed.
00:28:33.273 [... identical four-line failure sequence repeated for cnode3 (05:44:33.170875-170892), cnode4 (170899-170917) and cnode5 (170924-170942) ...]
00:28:33.274 [2024-12-13 05:44:33.170985] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress.
00:28:33.274 [2024-12-13 05:44:33.170997] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress.
00:28:33.274 [2024-12-13 05:44:33.171306] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102b420 (9): Bad file descriptor
00:28:33.274 [... same flush failure for tqpair 0xf58610, 0x144a960 and 0x1488890 (05:44:33.171319-171336), then the four-line failure sequence for cnode7 (171343-171361) and cnode10 (171367-171385) ...]
00:28:33.274 [2024-12-13 05:44:33.171625] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller
00:28:33.274 [2024-12-13 05:44:33.171640] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller
00:28:33.274 [2024-12-13 05:44:33.171649] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller
00:28:33.274 [2024-12-13 05:44:33.171656] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:28:33.274 [... four-line failure sequence for cnode1 (05:44:33.171685-171703), cnode6 (171710-171727), cnode8 (171734-171751) and cnode9 (171758-171774) ...]
00:28:33.274 [2024-12-13 05:44:33.171987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.274 [2024-12-13 05:44:33.172000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1467970 with addr=10.0.0.2, port=4420
00:28:33.274 [2024-12-13 05:44:33.172007] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1467970 is same with the state(6) to be set
00:28:33.274 [... same triple for tqpair 0x101d270 (05:44:33.172220-172237), 0x1028c40 (172428-172445) and 0x10330b0 (172582-172602), then flush failures for all four (172632-172659) and the four-line failure sequence for cnode5, cnode4, cnode3 and cnode2 (172683-172773) ...]
00:28:33.533 05:44:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1
00:28:34.470 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 428237
00:28:34.471 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0
00:28:34.471 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 428237
00:28:34.471 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait
00:28:34.471 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:28:34.471 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait
00:28:34.471 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:28:34.471 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 428237
00:28:34.471 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255
00:28:34.471 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:28:34.471 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127
00:28:34.471 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in
00:28:34.471 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1
00:28:34.471 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:28:34.471 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget
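The xtrace block just above (common/autotest_common.sh@652 through @679) is the harness asserting that waiting on the killed bdevperf process fails, which is the pass condition for a shutdown test. A hedged reconstruction of that NOT helper, simplified from the line numbers visible in the trace (the real helper also sanity-checks its argument via valid_exec_arg first, as the @654/@640/@644 lines show):

```bash
# Hedged sketch of the inverted-status helper exercised above; simplified
# from SPDK's test/common/autotest_common.sh, not copied verbatim.
NOT() {
    local es=0
    "$@" || es=$?            # trace: wait 428237 -> es=255
    ((es > 128)) && es=127   # cap signal-style exit codes (trace: es=127)
    case "$es" in
        0) ;;                # command unexpectedly succeeded: es stays 0
        *) es=1 ;;           # any failure collapses to 1 (trace: es=1)
    esac
    ((!es == 0))             # exit 0 exactly when the wrapped command failed
}

NOT wait 428237              # returns 0 here, so the test proceeds to stoptarget
```

Collapsing every nonzero status to 1 keeps the final arithmetic test trivial: `(( !es == 0 ))` succeeds precisely when the wrapped command failed, which is what shutdown.sh@138 wants after the target has been torn down under I/O.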
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:28:34.471 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:34.730 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:34.730 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:28:34.730 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:34.730 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:28:34.730 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:34.730 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:28:34.730 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:34.730 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:34.730 rmmod nvme_tcp 00:28:34.730 rmmod nvme_fabrics 00:28:34.730 rmmod nvme_keyring 00:28:34.730 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:34.730 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:28:34.730 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:28:34.730 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 428063 ']' 00:28:34.730 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 428063 00:28:34.730 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 428063 ']' 00:28:34.730 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 428063 00:28:34.731 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (428063) - No such process 00:28:34.731 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 428063 is not found' 00:28:34.731 Process with pid 428063 is not found 00:28:34.731 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:34.731 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:34.731 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:34.731 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:28:34.731 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:28:34.731 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:34.731 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # 
iptables-restore 00:28:34.731 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:34.731 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:34.731 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:34.731 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:34.731 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:36.634 05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:36.634 00:28:36.634 real 0m7.073s 00:28:36.634 user 0m16.212s 00:28:36.634 sys 0m1.299s 00:28:36.634 05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:36.634 05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:36.635 ************************************ 00:28:36.635 END TEST nvmf_shutdown_tc3 00:28:36.635 ************************************ 00:28:36.895 05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:28:36.895 05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:28:36.895 05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:28:36.895 05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:36.895 05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:36.895 05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:36.895 ************************************ 00:28:36.895 START TEST nvmf_shutdown_tc4 00:28:36.895 ************************************ 00:28:36.895 05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:28:36.895 05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:28:36.895 05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:28:36.895 05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:36.895 05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:36.895 05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:36.895 05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:36.895 05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:36.895 05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:36.895 05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:28:36.895 05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:36.895 05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:36.895 05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:36.895 05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:28:36.895 05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:36.895 05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:36.895 05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:28:36.895 05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:36.895 05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:36.895 05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:36.895 05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:36.895 05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:36.895 05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:28:36.895 05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:36.895 05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:28:36.895 05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:28:36.895 05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:28:36.895 05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:28:36.895 05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:28:36.895 05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:28:36.895 05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:36.895 05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:36.895 05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:36.895 05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:36.895 05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:36.895 05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:36.895 05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:36.895 05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:36.895 05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:36.895 05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:36.895 05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:36.895 05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:36.895 05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:36.895 05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:36.895 05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:36.895 05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:36.895 05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:36.895 05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:36.895 05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:36.895 05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:36.895 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:36.895 05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:36.895 05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:36.895 05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:36.895 05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:36.895 05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:36.895 05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:36.895 05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:36.895 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:36.895 05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:36.895 05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:36.895 05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:36.896 05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:36.896 05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == 
rdma ]] 00:28:36.896 05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:36.896 05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:36.896 05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:36.896 05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:36.896 05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:36.896 05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:36.896 05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:36.896 05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:36.896 05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:36.896 05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:36.896 05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:36.896 Found net devices under 0000:af:00.0: cvl_0_0 00:28:36.896 05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:36.896 05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:36.896 05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:36.896 05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:36.896 05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:36.896 05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:36.896 05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:36.896 05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:36.896 05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:36.896 Found net devices under 0000:af:00.1: cvl_0_1 00:28:36.896 05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:36.896 05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:36.896 05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:28:36.896 05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:36.896 05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:36.896 05:44:36 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init
05:44:36 [... nvmf_tcp_init bookkeeping elided: NVMF_FIRST_INITIATOR_IP=10.0.0.1, NVMF_FIRST_TARGET_IP=10.0.0.2, NVMF_INITIATOR_IP=10.0.0.1, TCP_INTERFACE_LIST=("${net_devs[@]}"), (( 2 > 1 )), NVMF_TARGET_INTERFACE=cvl_0_0, NVMF_INITIATOR_INTERFACE=cvl_0_1, NVMF_SECOND_TARGET_IP=, NVMF_SECOND_INITIATOR_IP=, NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk, NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") ...]
05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:28:37.155 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:28:37.155 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.185 ms
00:28:37.155
00:28:37.155 --- 10.0.0.2 ping statistics ---
00:28:37.155 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:28:37.155 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms
05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:28:37.155 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:28:37.155 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms
00:28:37.155
00:28:37.155 --- 10.0.0.1 ping statistics ---
00:28:37.155 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:28:37.155 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms
05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0
05:44:36 [... transport option bookkeeping elided: '[' '' == iso ']', NVMF_TRANSPORT_OPTS='-t tcp', [[ tcp == \r\d\m\a ]], [[ tcp == \t\c\p ]], NVMF_TRANSPORT_OPTS='-t tcp -o', '[' tcp == tcp ']' ...]
05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp
05:44:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E
05:44:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
05:44:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable
05:44:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
05:44:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=429365
05:44:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E
05:44:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 429365
05:44:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 429365 ']'
05:44:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
05:44:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100
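Everything nvmf_tcp_init just did reduces to a dozen iproute2 calls plus a firewall rule. A condensed replay, with interface names, addresses and the core mask taken verbatim from the trace (the retries and cleanup guards that nvmf/common.sh wraps around each step are omitted):

```bash
# Condensed sketch of the environment the trace above builds.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk      # target port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1            # initiator port stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# open the NVMe/TCP port; the real run tags the rule with an SPDK_NVMF comment
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                             # root namespace -> target namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
modprobe nvme-tcp
# the target then starts inside the namespace on cores 1-4 (mask 0x1E)
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
```

Moving the target-side port into its own namespace is what lets a single host act as both initiator and target over real e810 hardware without the kernel short-circuiting the TCP connection.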
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:28:37.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
05:44:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable
05:44:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:28:37.415 [2024-12-13 05:44:37.084865] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization...
00:28:37.415 [2024-12-13 05:44:37.084916] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:28:37.415 [2024-12-13 05:44:37.161711] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:28:37.415 [2024-12-13 05:44:37.185069] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:28:37.415 [2024-12-13 05:44:37.185109] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:28:37.415 [2024-12-13 05:44:37.185119] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:28:37.415 [2024-12-13 05:44:37.185125] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:28:37.415 [2024-12-13 05:44:37.185130] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:28:37.415 [2024-12-13 05:44:37.186504] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:28:37.415 [2024-12-13 05:44:37.186614] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3
00:28:37.415 [2024-12-13 05:44:37.186721] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:28:37.415 [2024-12-13 05:44:37.186721] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4
05:44:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 ))
05:44:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0
05:44:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
05:44:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable
05:44:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
05:44:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
05:44:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
05:44:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable
05:44:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:28:37.415 [2024-12-13 05:44:37.330490] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
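waitforlisten (common/autotest_common.sh@835 onward) polls /var/tmp/spdk.sock with up to max_retries=100 attempts before the first RPC is issued, and rpc_cmd is, in the harness, a wrapper that hands its arguments to scripts/rpc.py. A hedged equivalent using rpc.py's own connection-retry flag; probing liveness with rpc_get_methods is this note's assumption, not what the harness does:

```bash
# Hedged sketch: wait for the target's RPC socket, then create the transport.
RPC="./scripts/rpc.py -s /var/tmp/spdk.sock"
$RPC -r 100 rpc_get_methods > /dev/null        # retries connecting until the target listens
$RPC nvmf_create_transport -t tcp -o -u 8192   # exact flags from the trace above
```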
00:28:37.415 05:44:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:37.415 05:44:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10})
00:28:37.415 05:44:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems
00:28:37.415 05:44:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable
00:28:37.415 05:44:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:28:37.415 05:44:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:28:37.415 05:44:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:28:37.415 05:44:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:28:37.415 05:44:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:28:37.415 05:44:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:28:37.415 05:44:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:28:37.415 05:44:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:28:37.415 05:44:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:28:37.415 05:44:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:28:37.415 05:44:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:28:37.415 05:44:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:28:37.415 05:44:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:28:37.415 05:44:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:28:37.415 05:44:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:28:37.415 05:44:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:28:37.415 05:44:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:28:37.415 05:44:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:28:37.415 05:44:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:28:37.415 05:44:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:28:37.415 05:44:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:28:37.415 05:44:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:28:37.415 05:44:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd
00:28:37.415 05:44:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:37.415 05:44:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:28:37.415 Malloc1
00:28:37.674 [2024-12-13 05:44:37.440148] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:28:37.674 Malloc2
00:28:37.674 Malloc3
00:28:37.674 Malloc4
00:28:37.674 Malloc5
00:28:37.674 Malloc6
00:28:37.674 Malloc7
00:28:37.933 Malloc8
00:28:37.933 Malloc9
00:28:37.933 Malloc10
00:28:37.933 05:44:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:37.933 05:44:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems
00:28:37.933 05:44:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable
00:28:37.933 05:44:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:28:37.933 05:44:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=429525
00:28:37.933 05:44:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4
00:28:37.933 05:44:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5
00:28:37.933 [2024-12-13 05:44:37.933056] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
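[Editor's note] The ten "for i ... / cat" iterations above append one block of RPC commands per subsystem to rpcs.txt, and the bare rpc_cmd at shutdown.sh@36 replays the whole file on rpc.py's stdin; the MallocN lines are bdev_malloc_create echoing each created bdev's name. A sketch of roughly what one iteration emits, assuming hypothetical 64 MiB / 512-byte malloc bdev sizes (the real sizes come from harness variables this log does not show):

  for i in {1..10}; do
      {
          # One malloc bdev, one subsystem, one namespace, one TCP listener per i.
          echo "bdev_malloc_create 64 512 -b Malloc$i"
          echo "nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i"
          echo "nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i"
          echo "nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420"
      } >> rpcs.txt
  done

The spdk_nvme_perf invocation then drives 128-deep (-q) random writes (-w randwrite) of 45056-byte I/Os (-o) for 20 seconds (-t) against everything it discovers at 10.0.0.2:4420; the four qpair ids per controller in the failure messages below line up with its -P 4 argument.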
00:28:43.211 05:44:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:28:43.211 05:44:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 429365
00:28:43.211 05:44:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 429365 ']'
00:28:43.211 05:44:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 429365
00:28:43.211 05:44:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname
00:28:43.211 05:44:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:28:43.211 05:44:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 429365
00:28:43.211 05:44:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:28:43.211 05:44:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:28:43.211 05:44:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 429365'
00:28:43.211 killing process with pid 429365
00:28:43.211 05:44:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 429365
00:28:43.211 05:44:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 429365
00:28:43.212 [2024-12-13 05:44:42.945327] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ee900 is same with the state(6) to be set
00:28:43.212 [2024-12-13 05:44:42.945856] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7eedd0 is same with the state(6) to be set
00:28:43.212 [2024-12-13 05:44:42.946162] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ef2a0 is same with the state(6) to be set
00:28:43.212 [2024-12-13 05:44:42.946861] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ee430 is same with the state(6) to be set
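[Editor's note] The @954-@978 trace above is the harness's killprocess helper at work. Reconstructed as a sketch (not the verbatim helper from autotest_common.sh, just the flow the trace shows):

  killprocess() {
      local pid=$1 process_name
      [ -z "$pid" ] && return 1                           # @954: require a pid
      kill -0 "$pid" || return 1                          # @958: is it still alive?
      if [ "$(uname)" = Linux ]; then                     # @959: comm lookup is Linux-only
          process_name=$(ps --no-headers -o comm= "$pid") # @960: here it resolves to reactor_1
          [ "$process_name" = sudo ] && return 1          # @964: refuse to kill sudo itself
      fi
      echo "killing process with pid $pid"                # @972
      kill "$pid"                                         # @973: plain SIGTERM first
      wait "$pid"                                         # @978: reap; works because nvmf_tgt is our child
  }

The SIGTERM triggers nvmf_tgt's graceful shutdown, so the target starts tearing down its qpairs while spdk_nvme_perf still has up to 128 commands in flight per queue; everything that follows is the initiator reporting those aborted commands.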
00:28:43.212 Write completed with error (sct=0, sc=8)
00:28:43.212 starting I/O failed: -6
00:28:43.212 [2024-12-13 05:44:42.949334] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:43.212 [2024-12-13 05:44:42.949531] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6012d0 is same with the state(6) to be set
00:28:43.212 [2024-12-13 05:44:42.950123] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:43.212 [2024-12-13 05:44:42.950140] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x601c70 is same with the state(6) to be set
00:28:43.213 [2024-12-13 05:44:42.950909] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7efc40 is same with the state(6) to be set
00:28:43.213 [2024-12-13 05:44:42.951155] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:43.213 [2024-12-13 05:44:42.951223] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f0110 is same with the state(6) to be set
00:28:43.213 [2024-12-13 05:44:42.951548] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f05e0 is same with the state(6) to be set
00:28:43.213 [2024-12-13 05:44:42.951865] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ef770 is same with the state(6) to be set
00:28:43.214 [2024-12-13 05:44:42.952793] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ed5a0 is same with the state(6) to be set
00:28:43.214 [2024-12-13 05:44:42.952926] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:28:43.214 NVMe io qpair process completion error
00:28:43.214 Write completed with error (sct=0, sc=8)
00:28:43.214 starting I/O failed: -6
00:28:43.214 [2024-12-13 05:44:42.953474] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7edf60 is same with the state(6) to be set
00:28:43.214 [2024-12-13 05:44:42.953948] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:43.214 [2024-12-13 05:44:42.954827] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:43.215 [2024-12-13 05:44:42.955821] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:43.215 [2024-12-13 05:44:42.957292] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:28:43.215 NVMe io qpair process completion error
00:28:43.215 Write completed with error (sct=0, sc=8)
00:28:43.215 starting I/O failed: -6
00:28:43.216 [2024-12-13 05:44:42.958179] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:43.216 [2024-12-13 05:44:42.959065] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:43.217 [2024-12-13 05:44:42.960062] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:28:43.217 Write completed with error (sct=0, sc=8)
00:28:43.217 starting I/O failed: -6
00:28:43.217 starting I/O failed: -6 00:28:43.217 Write completed with error (sct=0, sc=8) 00:28:43.217 starting I/O failed: -6 00:28:43.217 Write completed with error (sct=0, sc=8) 00:28:43.217 starting I/O failed: -6 00:28:43.217 Write completed with error (sct=0, sc=8) 00:28:43.217 starting I/O failed: -6 00:28:43.217 Write completed with error (sct=0, sc=8) 00:28:43.217 starting I/O failed: -6 00:28:43.217 Write completed with error (sct=0, sc=8) 00:28:43.217 starting I/O failed: -6 00:28:43.217 Write completed with error (sct=0, sc=8) 00:28:43.217 starting I/O failed: -6 00:28:43.217 Write completed with error (sct=0, sc=8) 00:28:43.217 starting I/O failed: -6 00:28:43.217 Write completed with error (sct=0, sc=8) 00:28:43.217 starting I/O failed: -6 00:28:43.217 Write completed with error (sct=0, sc=8) 00:28:43.217 starting I/O failed: -6 00:28:43.217 Write completed with error (sct=0, sc=8) 00:28:43.217 starting I/O failed: -6 00:28:43.217 Write completed with error (sct=0, sc=8) 00:28:43.217 starting I/O failed: -6 00:28:43.217 Write completed with error (sct=0, sc=8) 00:28:43.217 starting I/O failed: -6 00:28:43.217 Write completed with error (sct=0, sc=8) 00:28:43.217 starting I/O failed: -6 00:28:43.217 Write completed with error (sct=0, sc=8) 00:28:43.217 starting I/O failed: -6 00:28:43.217 Write completed with error (sct=0, sc=8) 00:28:43.217 starting I/O failed: -6 00:28:43.217 Write completed with error (sct=0, sc=8) 00:28:43.217 starting I/O failed: -6 00:28:43.217 Write completed with error (sct=0, sc=8) 00:28:43.217 starting I/O failed: -6 00:28:43.217 Write completed with error (sct=0, sc=8) 00:28:43.217 starting I/O failed: -6 00:28:43.217 Write completed with error (sct=0, sc=8) 00:28:43.217 starting I/O failed: -6 00:28:43.217 Write completed with error (sct=0, sc=8) 00:28:43.217 starting I/O failed: -6 00:28:43.217 Write completed with error (sct=0, sc=8) 00:28:43.217 starting I/O failed: -6 00:28:43.217 Write completed with error (sct=0, sc=8) 00:28:43.217 starting I/O failed: -6 00:28:43.217 Write completed with error (sct=0, sc=8) 00:28:43.217 starting I/O failed: -6 00:28:43.217 Write completed with error (sct=0, sc=8) 00:28:43.217 starting I/O failed: -6 00:28:43.217 Write completed with error (sct=0, sc=8) 00:28:43.217 starting I/O failed: -6 00:28:43.217 Write completed with error (sct=0, sc=8) 00:28:43.217 starting I/O failed: -6 00:28:43.217 [2024-12-13 05:44:42.962056] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:43.217 NVMe io qpair process completion error 00:28:43.217 Write completed with error (sct=0, sc=8) 00:28:43.217 Write completed with error (sct=0, sc=8) 00:28:43.217 Write completed with error (sct=0, sc=8) 00:28:43.217 starting I/O failed: -6 00:28:43.217 Write completed with error (sct=0, sc=8) 00:28:43.217 Write completed with error (sct=0, sc=8) 00:28:43.217 Write completed with error (sct=0, sc=8) 00:28:43.217 Write completed with error (sct=0, sc=8) 00:28:43.217 starting I/O failed: -6 00:28:43.217 Write completed with error (sct=0, sc=8) 00:28:43.217 Write completed with error (sct=0, sc=8) 00:28:43.217 Write completed with error (sct=0, sc=8) 00:28:43.217 Write completed with error (sct=0, sc=8) 00:28:43.217 starting I/O failed: -6 00:28:43.217 Write completed with error (sct=0, sc=8) 00:28:43.217 Write completed with error (sct=0, sc=8) 00:28:43.217 Write completed with error (sct=0, sc=8) 00:28:43.217 Write completed 
with error (sct=0, sc=8) 00:28:43.217 starting I/O failed: -6 00:28:43.217 Write completed with error (sct=0, sc=8) 00:28:43.217 Write completed with error (sct=0, sc=8) 00:28:43.217 Write completed with error (sct=0, sc=8) 00:28:43.217 Write completed with error (sct=0, sc=8) 00:28:43.217 starting I/O failed: -6 00:28:43.217 Write completed with error (sct=0, sc=8) 00:28:43.217 Write completed with error (sct=0, sc=8) 00:28:43.217 Write completed with error (sct=0, sc=8) 00:28:43.217 Write completed with error (sct=0, sc=8) 00:28:43.217 starting I/O failed: -6 00:28:43.217 Write completed with error (sct=0, sc=8) 00:28:43.217 Write completed with error (sct=0, sc=8) 00:28:43.217 Write completed with error (sct=0, sc=8) 00:28:43.217 Write completed with error (sct=0, sc=8) 00:28:43.217 starting I/O failed: -6 00:28:43.217 Write completed with error (sct=0, sc=8) 00:28:43.217 Write completed with error (sct=0, sc=8) 00:28:43.217 Write completed with error (sct=0, sc=8) 00:28:43.217 Write completed with error (sct=0, sc=8) 00:28:43.217 starting I/O failed: -6 00:28:43.217 Write completed with error (sct=0, sc=8) 00:28:43.217 Write completed with error (sct=0, sc=8) 00:28:43.217 Write completed with error (sct=0, sc=8) 00:28:43.217 Write completed with error (sct=0, sc=8) 00:28:43.217 starting I/O failed: -6 00:28:43.217 Write completed with error (sct=0, sc=8) 00:28:43.217 [2024-12-13 05:44:42.963146] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:43.217 Write completed with error (sct=0, sc=8) 00:28:43.217 Write completed with error (sct=0, sc=8) 00:28:43.217 starting I/O failed: -6 00:28:43.217 Write completed with error (sct=0, sc=8) 00:28:43.217 Write completed with error (sct=0, sc=8) 00:28:43.217 starting I/O failed: -6 00:28:43.217 Write completed with error (sct=0, sc=8) 00:28:43.217 Write completed with error (sct=0, sc=8) 00:28:43.217 starting I/O failed: -6 00:28:43.217 Write completed with error (sct=0, sc=8) 00:28:43.217 Write completed with error (sct=0, sc=8) 00:28:43.217 starting I/O failed: -6 00:28:43.217 Write completed with error (sct=0, sc=8) 00:28:43.217 Write completed with error (sct=0, sc=8) 00:28:43.217 starting I/O failed: -6 00:28:43.217 Write completed with error (sct=0, sc=8) 00:28:43.217 Write completed with error (sct=0, sc=8) 00:28:43.217 starting I/O failed: -6 00:28:43.217 Write completed with error (sct=0, sc=8) 00:28:43.217 Write completed with error (sct=0, sc=8) 00:28:43.217 starting I/O failed: -6 00:28:43.217 Write completed with error (sct=0, sc=8) 00:28:43.217 Write completed with error (sct=0, sc=8) 00:28:43.217 starting I/O failed: -6 00:28:43.217 Write completed with error (sct=0, sc=8) 00:28:43.217 Write completed with error (sct=0, sc=8) 00:28:43.217 starting I/O failed: -6 00:28:43.217 Write completed with error (sct=0, sc=8) 00:28:43.217 Write completed with error (sct=0, sc=8) 00:28:43.217 starting I/O failed: -6 00:28:43.217 Write completed with error (sct=0, sc=8) 00:28:43.217 Write completed with error (sct=0, sc=8) 00:28:43.217 starting I/O failed: -6 00:28:43.217 Write completed with error (sct=0, sc=8) 00:28:43.217 Write completed with error (sct=0, sc=8) 00:28:43.217 starting I/O failed: -6 00:28:43.217 Write completed with error (sct=0, sc=8) 00:28:43.217 Write completed with error (sct=0, sc=8) 00:28:43.217 starting I/O failed: -6 00:28:43.217 Write completed with error (sct=0, sc=8) 00:28:43.217 Write completed 
with error (sct=0, sc=8) 00:28:43.217 starting I/O failed: -6 00:28:43.217 Write completed with error (sct=0, sc=8) 00:28:43.217 Write completed with error (sct=0, sc=8) 00:28:43.217 starting I/O failed: -6 00:28:43.217 Write completed with error (sct=0, sc=8) 00:28:43.217 Write completed with error (sct=0, sc=8) 00:28:43.217 starting I/O failed: -6 00:28:43.217 Write completed with error (sct=0, sc=8) 00:28:43.217 Write completed with error (sct=0, sc=8) 00:28:43.217 starting I/O failed: -6 00:28:43.217 Write completed with error (sct=0, sc=8) 00:28:43.217 Write completed with error (sct=0, sc=8) 00:28:43.217 starting I/O failed: -6 00:28:43.217 Write completed with error (sct=0, sc=8) 00:28:43.217 Write completed with error (sct=0, sc=8) 00:28:43.217 starting I/O failed: -6 00:28:43.217 Write completed with error (sct=0, sc=8) 00:28:43.218 Write completed with error (sct=0, sc=8) 00:28:43.218 starting I/O failed: -6 00:28:43.218 Write completed with error (sct=0, sc=8) 00:28:43.218 Write completed with error (sct=0, sc=8) 00:28:43.218 starting I/O failed: -6 00:28:43.218 Write completed with error (sct=0, sc=8) 00:28:43.218 Write completed with error (sct=0, sc=8) 00:28:43.218 starting I/O failed: -6 00:28:43.218 Write completed with error (sct=0, sc=8) 00:28:43.218 Write completed with error (sct=0, sc=8) 00:28:43.218 starting I/O failed: -6 00:28:43.218 [2024-12-13 05:44:42.964059] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:43.218 Write completed with error (sct=0, sc=8) 00:28:43.218 starting I/O failed: -6 00:28:43.218 Write completed with error (sct=0, sc=8) 00:28:43.218 starting I/O failed: -6 00:28:43.218 Write completed with error (sct=0, sc=8) 00:28:43.218 Write completed with error (sct=0, sc=8) 00:28:43.218 starting I/O failed: -6 00:28:43.218 Write completed with error (sct=0, sc=8) 00:28:43.218 starting I/O failed: -6 00:28:43.218 Write completed with error (sct=0, sc=8) 00:28:43.218 starting I/O failed: -6 00:28:43.218 Write completed with error (sct=0, sc=8) 00:28:43.218 Write completed with error (sct=0, sc=8) 00:28:43.218 starting I/O failed: -6 00:28:43.218 Write completed with error (sct=0, sc=8) 00:28:43.218 starting I/O failed: -6 00:28:43.218 Write completed with error (sct=0, sc=8) 00:28:43.218 starting I/O failed: -6 00:28:43.218 Write completed with error (sct=0, sc=8) 00:28:43.218 Write completed with error (sct=0, sc=8) 00:28:43.218 starting I/O failed: -6 00:28:43.218 Write completed with error (sct=0, sc=8) 00:28:43.218 starting I/O failed: -6 00:28:43.218 Write completed with error (sct=0, sc=8) 00:28:43.218 starting I/O failed: -6 00:28:43.218 Write completed with error (sct=0, sc=8) 00:28:43.218 Write completed with error (sct=0, sc=8) 00:28:43.218 starting I/O failed: -6 00:28:43.218 Write completed with error (sct=0, sc=8) 00:28:43.218 starting I/O failed: -6 00:28:43.218 Write completed with error (sct=0, sc=8) 00:28:43.218 starting I/O failed: -6 00:28:43.218 Write completed with error (sct=0, sc=8) 00:28:43.218 Write completed with error (sct=0, sc=8) 00:28:43.218 starting I/O failed: -6 00:28:43.218 Write completed with error (sct=0, sc=8) 00:28:43.218 starting I/O failed: -6 00:28:43.218 Write completed with error (sct=0, sc=8) 00:28:43.218 starting I/O failed: -6 00:28:43.218 Write completed with error (sct=0, sc=8) 00:28:43.218 Write completed with error (sct=0, sc=8) 00:28:43.218 starting I/O failed: -6 00:28:43.218 Write 
completed with error (sct=0, sc=8) 00:28:43.218 starting I/O failed: -6 00:28:43.218 Write completed with error (sct=0, sc=8) 00:28:43.218 starting I/O failed: -6 00:28:43.218 Write completed with error (sct=0, sc=8) 00:28:43.218 Write completed with error (sct=0, sc=8) 00:28:43.218 starting I/O failed: -6 00:28:43.218 Write completed with error (sct=0, sc=8) 00:28:43.218 starting I/O failed: -6 00:28:43.218 Write completed with error (sct=0, sc=8) 00:28:43.218 starting I/O failed: -6 00:28:43.218 Write completed with error (sct=0, sc=8) 00:28:43.218 Write completed with error (sct=0, sc=8) 00:28:43.218 starting I/O failed: -6 00:28:43.218 Write completed with error (sct=0, sc=8) 00:28:43.218 starting I/O failed: -6 00:28:43.218 Write completed with error (sct=0, sc=8) 00:28:43.218 starting I/O failed: -6 00:28:43.218 Write completed with error (sct=0, sc=8) 00:28:43.218 Write completed with error (sct=0, sc=8) 00:28:43.218 starting I/O failed: -6 00:28:43.218 Write completed with error (sct=0, sc=8) 00:28:43.218 starting I/O failed: -6 00:28:43.218 Write completed with error (sct=0, sc=8) 00:28:43.218 starting I/O failed: -6 00:28:43.218 Write completed with error (sct=0, sc=8) 00:28:43.218 Write completed with error (sct=0, sc=8) 00:28:43.218 starting I/O failed: -6 00:28:43.218 Write completed with error (sct=0, sc=8) 00:28:43.218 starting I/O failed: -6 00:28:43.218 Write completed with error (sct=0, sc=8) 00:28:43.218 starting I/O failed: -6 00:28:43.218 Write completed with error (sct=0, sc=8) 00:28:43.218 Write completed with error (sct=0, sc=8) 00:28:43.218 starting I/O failed: -6 00:28:43.218 Write completed with error (sct=0, sc=8) 00:28:43.218 starting I/O failed: -6 00:28:43.218 Write completed with error (sct=0, sc=8) 00:28:43.218 starting I/O failed: -6 00:28:43.218 Write completed with error (sct=0, sc=8) 00:28:43.218 Write completed with error (sct=0, sc=8) 00:28:43.218 starting I/O failed: -6 00:28:43.218 [2024-12-13 05:44:42.965048] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:28:43.218 Write completed with error (sct=0, sc=8) 00:28:43.218 starting I/O failed: -6 00:28:43.218 Write completed with error (sct=0, sc=8) 00:28:43.218 starting I/O failed: -6 00:28:43.218 Write completed with error (sct=0, sc=8) 00:28:43.218 starting I/O failed: -6 00:28:43.218 Write completed with error (sct=0, sc=8) 00:28:43.218 starting I/O failed: -6 00:28:43.218 Write completed with error (sct=0, sc=8) 00:28:43.218 starting I/O failed: -6 00:28:43.218 Write completed with error (sct=0, sc=8) 00:28:43.218 starting I/O failed: -6 00:28:43.218 Write completed with error (sct=0, sc=8) 00:28:43.218 starting I/O failed: -6 00:28:43.218 Write completed with error (sct=0, sc=8) 00:28:43.218 starting I/O failed: -6 00:28:43.218 Write completed with error (sct=0, sc=8) 00:28:43.218 starting I/O failed: -6 00:28:43.218 Write completed with error (sct=0, sc=8) 00:28:43.218 starting I/O failed: -6 00:28:43.218 Write completed with error (sct=0, sc=8) 00:28:43.218 starting I/O failed: -6 00:28:43.218 Write completed with error (sct=0, sc=8) 00:28:43.218 starting I/O failed: -6 00:28:43.218 Write completed with error (sct=0, sc=8) 00:28:43.218 starting I/O failed: -6 00:28:43.218 Write completed with error (sct=0, sc=8) 00:28:43.218 starting I/O failed: -6 00:28:43.218 Write completed with error (sct=0, sc=8) 00:28:43.218 starting I/O failed: -6 00:28:43.218 Write completed with error 
(sct=0, sc=8) 00:28:43.218 starting I/O failed: -6 00:28:43.218 Write completed with error (sct=0, sc=8) 00:28:43.218 starting I/O failed: -6 00:28:43.218 Write completed with error (sct=0, sc=8) 00:28:43.218 starting I/O failed: -6 00:28:43.218 Write completed with error (sct=0, sc=8) 00:28:43.218 starting I/O failed: -6 00:28:43.218 Write completed with error (sct=0, sc=8) 00:28:43.218 starting I/O failed: -6 00:28:43.218 Write completed with error (sct=0, sc=8) 00:28:43.218 starting I/O failed: -6 00:28:43.218 Write completed with error (sct=0, sc=8) 00:28:43.218 starting I/O failed: -6 00:28:43.218 Write completed with error (sct=0, sc=8) 00:28:43.218 starting I/O failed: -6 00:28:43.218 Write completed with error (sct=0, sc=8) 00:28:43.218 starting I/O failed: -6 00:28:43.218 Write completed with error (sct=0, sc=8) 00:28:43.218 starting I/O failed: -6 00:28:43.218 Write completed with error (sct=0, sc=8) 00:28:43.218 starting I/O failed: -6 00:28:43.218 Write completed with error (sct=0, sc=8) 00:28:43.218 starting I/O failed: -6 00:28:43.218 Write completed with error (sct=0, sc=8) 00:28:43.218 starting I/O failed: -6 00:28:43.218 Write completed with error (sct=0, sc=8) 00:28:43.218 starting I/O failed: -6 00:28:43.218 Write completed with error (sct=0, sc=8) 00:28:43.218 starting I/O failed: -6 00:28:43.218 Write completed with error (sct=0, sc=8) 00:28:43.218 starting I/O failed: -6 00:28:43.218 Write completed with error (sct=0, sc=8) 00:28:43.218 starting I/O failed: -6 00:28:43.218 Write completed with error (sct=0, sc=8) 00:28:43.218 starting I/O failed: -6 00:28:43.218 Write completed with error (sct=0, sc=8) 00:28:43.218 starting I/O failed: -6 00:28:43.218 Write completed with error (sct=0, sc=8) 00:28:43.218 starting I/O failed: -6 00:28:43.218 Write completed with error (sct=0, sc=8) 00:28:43.218 starting I/O failed: -6 00:28:43.218 Write completed with error (sct=0, sc=8) 00:28:43.218 starting I/O failed: -6 00:28:43.218 Write completed with error (sct=0, sc=8) 00:28:43.218 starting I/O failed: -6 00:28:43.218 Write completed with error (sct=0, sc=8) 00:28:43.218 starting I/O failed: -6 00:28:43.218 Write completed with error (sct=0, sc=8) 00:28:43.218 starting I/O failed: -6 00:28:43.218 Write completed with error (sct=0, sc=8) 00:28:43.218 starting I/O failed: -6 00:28:43.218 Write completed with error (sct=0, sc=8) 00:28:43.218 starting I/O failed: -6 00:28:43.218 Write completed with error (sct=0, sc=8) 00:28:43.218 starting I/O failed: -6 00:28:43.218 Write completed with error (sct=0, sc=8) 00:28:43.218 starting I/O failed: -6 00:28:43.218 Write completed with error (sct=0, sc=8) 00:28:43.218 starting I/O failed: -6 00:28:43.218 Write completed with error (sct=0, sc=8) 00:28:43.218 starting I/O failed: -6 00:28:43.218 Write completed with error (sct=0, sc=8) 00:28:43.218 starting I/O failed: -6 00:28:43.218 Write completed with error (sct=0, sc=8) 00:28:43.218 starting I/O failed: -6 00:28:43.218 Write completed with error (sct=0, sc=8) 00:28:43.218 starting I/O failed: -6 00:28:43.218 Write completed with error (sct=0, sc=8) 00:28:43.218 starting I/O failed: -6 00:28:43.218 Write completed with error (sct=0, sc=8) 00:28:43.218 starting I/O failed: -6 00:28:43.218 Write completed with error (sct=0, sc=8) 00:28:43.218 starting I/O failed: -6 00:28:43.218 Write completed with error (sct=0, sc=8) 00:28:43.218 starting I/O failed: -6 00:28:43.218 Write completed with error (sct=0, sc=8) 00:28:43.218 starting I/O failed: -6 00:28:43.218 Write completed with error 
(sct=0, sc=8) 00:28:43.218 starting I/O failed: -6 00:28:43.218 Write completed with error (sct=0, sc=8) 00:28:43.218 starting I/O failed: -6 00:28:43.218 Write completed with error (sct=0, sc=8) 00:28:43.218 starting I/O failed: -6 00:28:43.218 Write completed with error (sct=0, sc=8) 00:28:43.218 starting I/O failed: -6 00:28:43.218 Write completed with error (sct=0, sc=8) 00:28:43.218 starting I/O failed: -6 00:28:43.218 Write completed with error (sct=0, sc=8) 00:28:43.218 starting I/O failed: -6 00:28:43.218 [2024-12-13 05:44:42.967000] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:43.218 NVMe io qpair process completion error 00:28:43.218 Write completed with error (sct=0, sc=8) 00:28:43.218 Write completed with error (sct=0, sc=8) 00:28:43.218 Write completed with error (sct=0, sc=8) 00:28:43.218 starting I/O failed: -6 00:28:43.218 Write completed with error (sct=0, sc=8) 00:28:43.218 Write completed with error (sct=0, sc=8) 00:28:43.218 Write completed with error (sct=0, sc=8) 00:28:43.218 Write completed with error (sct=0, sc=8) 00:28:43.218 starting I/O failed: -6 00:28:43.219 Write completed with error (sct=0, sc=8) 00:28:43.219 Write completed with error (sct=0, sc=8) 00:28:43.219 Write completed with error (sct=0, sc=8) 00:28:43.219 Write completed with error (sct=0, sc=8) 00:28:43.219 starting I/O failed: -6 00:28:43.219 Write completed with error (sct=0, sc=8) 00:28:43.219 Write completed with error (sct=0, sc=8) 00:28:43.219 Write completed with error (sct=0, sc=8) 00:28:43.219 Write completed with error (sct=0, sc=8) 00:28:43.219 starting I/O failed: -6 00:28:43.219 Write completed with error (sct=0, sc=8) 00:28:43.219 Write completed with error (sct=0, sc=8) 00:28:43.219 Write completed with error (sct=0, sc=8) 00:28:43.219 Write completed with error (sct=0, sc=8) 00:28:43.219 starting I/O failed: -6 00:28:43.219 Write completed with error (sct=0, sc=8) 00:28:43.219 Write completed with error (sct=0, sc=8) 00:28:43.219 Write completed with error (sct=0, sc=8) 00:28:43.219 Write completed with error (sct=0, sc=8) 00:28:43.219 starting I/O failed: -6 00:28:43.219 Write completed with error (sct=0, sc=8) 00:28:43.219 Write completed with error (sct=0, sc=8) 00:28:43.219 Write completed with error (sct=0, sc=8) 00:28:43.219 Write completed with error (sct=0, sc=8) 00:28:43.219 starting I/O failed: -6 00:28:43.219 Write completed with error (sct=0, sc=8) 00:28:43.219 Write completed with error (sct=0, sc=8) 00:28:43.219 Write completed with error (sct=0, sc=8) 00:28:43.219 Write completed with error (sct=0, sc=8) 00:28:43.219 starting I/O failed: -6 00:28:43.219 Write completed with error (sct=0, sc=8) 00:28:43.219 Write completed with error (sct=0, sc=8) 00:28:43.219 Write completed with error (sct=0, sc=8) 00:28:43.219 Write completed with error (sct=0, sc=8) 00:28:43.219 starting I/O failed: -6 00:28:43.219 Write completed with error (sct=0, sc=8) 00:28:43.219 Write completed with error (sct=0, sc=8) 00:28:43.219 Write completed with error (sct=0, sc=8) 00:28:43.219 Write completed with error (sct=0, sc=8) 00:28:43.219 starting I/O failed: -6 00:28:43.219 Write completed with error (sct=0, sc=8) 00:28:43.219 [2024-12-13 05:44:42.968042] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:43.219 Write completed with error (sct=0, sc=8) 
00:28:43.219 Write completed with error (sct=0, sc=8) 00:28:43.219 starting I/O failed: -6 00:28:43.219 Write completed with error (sct=0, sc=8) 00:28:43.219 Write completed with error (sct=0, sc=8) 00:28:43.219 starting I/O failed: -6 00:28:43.219 Write completed with error (sct=0, sc=8) 00:28:43.219 Write completed with error (sct=0, sc=8) 00:28:43.219 starting I/O failed: -6 00:28:43.219 Write completed with error (sct=0, sc=8) 00:28:43.219 Write completed with error (sct=0, sc=8) 00:28:43.219 starting I/O failed: -6 00:28:43.219 Write completed with error (sct=0, sc=8) 00:28:43.219 Write completed with error (sct=0, sc=8) 00:28:43.219 starting I/O failed: -6 00:28:43.219 Write completed with error (sct=0, sc=8) 00:28:43.219 Write completed with error (sct=0, sc=8) 00:28:43.219 starting I/O failed: -6 00:28:43.219 Write completed with error (sct=0, sc=8) 00:28:43.219 Write completed with error (sct=0, sc=8) 00:28:43.219 starting I/O failed: -6 00:28:43.219 Write completed with error (sct=0, sc=8) 00:28:43.219 Write completed with error (sct=0, sc=8) 00:28:43.219 starting I/O failed: -6 00:28:43.219 Write completed with error (sct=0, sc=8) 00:28:43.219 Write completed with error (sct=0, sc=8) 00:28:43.219 starting I/O failed: -6 00:28:43.219 Write completed with error (sct=0, sc=8) 00:28:43.219 Write completed with error (sct=0, sc=8) 00:28:43.219 starting I/O failed: -6 00:28:43.219 Write completed with error (sct=0, sc=8) 00:28:43.219 Write completed with error (sct=0, sc=8) 00:28:43.219 starting I/O failed: -6 00:28:43.219 Write completed with error (sct=0, sc=8) 00:28:43.219 Write completed with error (sct=0, sc=8) 00:28:43.219 starting I/O failed: -6 00:28:43.219 Write completed with error (sct=0, sc=8) 00:28:43.219 Write completed with error (sct=0, sc=8) 00:28:43.219 starting I/O failed: -6 00:28:43.219 Write completed with error (sct=0, sc=8) 00:28:43.219 Write completed with error (sct=0, sc=8) 00:28:43.219 starting I/O failed: -6 00:28:43.219 Write completed with error (sct=0, sc=8) 00:28:43.219 Write completed with error (sct=0, sc=8) 00:28:43.219 starting I/O failed: -6 00:28:43.219 Write completed with error (sct=0, sc=8) 00:28:43.219 Write completed with error (sct=0, sc=8) 00:28:43.219 starting I/O failed: -6 00:28:43.219 Write completed with error (sct=0, sc=8) 00:28:43.219 Write completed with error (sct=0, sc=8) 00:28:43.219 starting I/O failed: -6 00:28:43.219 Write completed with error (sct=0, sc=8) 00:28:43.219 Write completed with error (sct=0, sc=8) 00:28:43.219 starting I/O failed: -6 00:28:43.219 Write completed with error (sct=0, sc=8) 00:28:43.219 Write completed with error (sct=0, sc=8) 00:28:43.219 starting I/O failed: -6 00:28:43.219 Write completed with error (sct=0, sc=8) 00:28:43.219 Write completed with error (sct=0, sc=8) 00:28:43.219 starting I/O failed: -6 00:28:43.219 Write completed with error (sct=0, sc=8) 00:28:43.219 Write completed with error (sct=0, sc=8) 00:28:43.219 starting I/O failed: -6 00:28:43.219 Write completed with error (sct=0, sc=8) 00:28:43.219 Write completed with error (sct=0, sc=8) 00:28:43.219 starting I/O failed: -6 00:28:43.219 [2024-12-13 05:44:42.968913] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:43.219 Write completed with error (sct=0, sc=8) 00:28:43.219 Write completed with error (sct=0, sc=8) 00:28:43.219 starting I/O failed: -6 00:28:43.219 Write completed with error (sct=0, sc=8) 00:28:43.219 
starting I/O failed: -6 00:28:43.219 Write completed with error (sct=0, sc=8) 00:28:43.219 starting I/O failed: -6 00:28:43.219 Write completed with error (sct=0, sc=8) 00:28:43.219 Write completed with error (sct=0, sc=8) 00:28:43.219 starting I/O failed: -6 00:28:43.219 Write completed with error (sct=0, sc=8) 00:28:43.219 starting I/O failed: -6 00:28:43.219 Write completed with error (sct=0, sc=8) 00:28:43.219 starting I/O failed: -6 00:28:43.219 Write completed with error (sct=0, sc=8) 00:28:43.219 Write completed with error (sct=0, sc=8) 00:28:43.219 starting I/O failed: -6 00:28:43.219 Write completed with error (sct=0, sc=8) 00:28:43.219 starting I/O failed: -6 00:28:43.219 Write completed with error (sct=0, sc=8) 00:28:43.219 starting I/O failed: -6 00:28:43.219 Write completed with error (sct=0, sc=8) 00:28:43.219 Write completed with error (sct=0, sc=8) 00:28:43.219 starting I/O failed: -6 00:28:43.219 Write completed with error (sct=0, sc=8) 00:28:43.219 starting I/O failed: -6 00:28:43.219 Write completed with error (sct=0, sc=8) 00:28:43.219 starting I/O failed: -6 00:28:43.219 Write completed with error (sct=0, sc=8) 00:28:43.219 Write completed with error (sct=0, sc=8) 00:28:43.219 starting I/O failed: -6 00:28:43.219 Write completed with error (sct=0, sc=8) 00:28:43.219 starting I/O failed: -6 00:28:43.219 Write completed with error (sct=0, sc=8) 00:28:43.219 starting I/O failed: -6 00:28:43.219 Write completed with error (sct=0, sc=8) 00:28:43.219 Write completed with error (sct=0, sc=8) 00:28:43.219 starting I/O failed: -6 00:28:43.219 Write completed with error (sct=0, sc=8) 00:28:43.219 starting I/O failed: -6 00:28:43.219 Write completed with error (sct=0, sc=8) 00:28:43.219 starting I/O failed: -6 00:28:43.219 Write completed with error (sct=0, sc=8) 00:28:43.219 Write completed with error (sct=0, sc=8) 00:28:43.219 starting I/O failed: -6 00:28:43.219 Write completed with error (sct=0, sc=8) 00:28:43.219 starting I/O failed: -6 00:28:43.219 Write completed with error (sct=0, sc=8) 00:28:43.219 starting I/O failed: -6 00:28:43.219 Write completed with error (sct=0, sc=8) 00:28:43.219 Write completed with error (sct=0, sc=8) 00:28:43.219 starting I/O failed: -6 00:28:43.219 Write completed with error (sct=0, sc=8) 00:28:43.219 starting I/O failed: -6 00:28:43.219 Write completed with error (sct=0, sc=8) 00:28:43.219 starting I/O failed: -6 00:28:43.219 Write completed with error (sct=0, sc=8) 00:28:43.219 Write completed with error (sct=0, sc=8) 00:28:43.219 starting I/O failed: -6 00:28:43.219 Write completed with error (sct=0, sc=8) 00:28:43.219 starting I/O failed: -6 00:28:43.219 Write completed with error (sct=0, sc=8) 00:28:43.219 starting I/O failed: -6 00:28:43.219 Write completed with error (sct=0, sc=8) 00:28:43.219 Write completed with error (sct=0, sc=8) 00:28:43.219 starting I/O failed: -6 00:28:43.219 Write completed with error (sct=0, sc=8) 00:28:43.219 starting I/O failed: -6 00:28:43.219 Write completed with error (sct=0, sc=8) 00:28:43.219 starting I/O failed: -6 00:28:43.219 Write completed with error (sct=0, sc=8) 00:28:43.219 Write completed with error (sct=0, sc=8) 00:28:43.219 starting I/O failed: -6 00:28:43.219 Write completed with error (sct=0, sc=8) 00:28:43.219 starting I/O failed: -6 00:28:43.219 Write completed with error (sct=0, sc=8) 00:28:43.219 starting I/O failed: -6 00:28:43.219 Write completed with error (sct=0, sc=8) 00:28:43.219 Write completed with error (sct=0, sc=8) 00:28:43.219 starting I/O failed: -6 00:28:43.219 Write 
completed with error (sct=0, sc=8) 00:28:43.219 starting I/O failed: -6 00:28:43.219 Write completed with error (sct=0, sc=8) 00:28:43.219 starting I/O failed: -6 00:28:43.219 Write completed with error (sct=0, sc=8) 00:28:43.219 Write completed with error (sct=0, sc=8) 00:28:43.219 starting I/O failed: -6 00:28:43.219 [2024-12-13 05:44:42.969937] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:28:43.219 Write completed with error (sct=0, sc=8) 00:28:43.219 starting I/O failed: -6 00:28:43.219 Write completed with error (sct=0, sc=8) 00:28:43.219 starting I/O failed: -6 00:28:43.219 Write completed with error (sct=0, sc=8) 00:28:43.219 starting I/O failed: -6 00:28:43.219 Write completed with error (sct=0, sc=8) 00:28:43.219 starting I/O failed: -6 00:28:43.219 Write completed with error (sct=0, sc=8) 00:28:43.219 starting I/O failed: -6 00:28:43.219 Write completed with error (sct=0, sc=8) 00:28:43.219 starting I/O failed: -6 00:28:43.219 Write completed with error (sct=0, sc=8) 00:28:43.219 starting I/O failed: -6 00:28:43.219 Write completed with error (sct=0, sc=8) 00:28:43.219 starting I/O failed: -6 00:28:43.219 Write completed with error (sct=0, sc=8) 00:28:43.219 starting I/O failed: -6 00:28:43.219 Write completed with error (sct=0, sc=8) 00:28:43.219 starting I/O failed: -6 00:28:43.220 Write completed with error (sct=0, sc=8) 00:28:43.220 starting I/O failed: -6 00:28:43.220 Write completed with error (sct=0, sc=8) 00:28:43.220 starting I/O failed: -6 00:28:43.220 Write completed with error (sct=0, sc=8) 00:28:43.220 starting I/O failed: -6 00:28:43.220 Write completed with error (sct=0, sc=8) 00:28:43.220 starting I/O failed: -6 00:28:43.220 Write completed with error (sct=0, sc=8) 00:28:43.220 starting I/O failed: -6 00:28:43.220 Write completed with error (sct=0, sc=8) 00:28:43.220 starting I/O failed: -6 00:28:43.220 Write completed with error (sct=0, sc=8) 00:28:43.220 starting I/O failed: -6 00:28:43.220 Write completed with error (sct=0, sc=8) 00:28:43.220 starting I/O failed: -6 00:28:43.220 Write completed with error (sct=0, sc=8) 00:28:43.220 starting I/O failed: -6 00:28:43.220 Write completed with error (sct=0, sc=8) 00:28:43.220 starting I/O failed: -6 00:28:43.220 Write completed with error (sct=0, sc=8) 00:28:43.220 starting I/O failed: -6 00:28:43.220 Write completed with error (sct=0, sc=8) 00:28:43.220 starting I/O failed: -6 00:28:43.220 Write completed with error (sct=0, sc=8) 00:28:43.220 starting I/O failed: -6 00:28:43.220 Write completed with error (sct=0, sc=8) 00:28:43.220 starting I/O failed: -6 00:28:43.220 Write completed with error (sct=0, sc=8) 00:28:43.220 starting I/O failed: -6 00:28:43.220 Write completed with error (sct=0, sc=8) 00:28:43.220 starting I/O failed: -6 00:28:43.220 Write completed with error (sct=0, sc=8) 00:28:43.220 starting I/O failed: -6 00:28:43.220 Write completed with error (sct=0, sc=8) 00:28:43.220 starting I/O failed: -6 00:28:43.220 Write completed with error (sct=0, sc=8) 00:28:43.220 starting I/O failed: -6 00:28:43.220 Write completed with error (sct=0, sc=8) 00:28:43.220 starting I/O failed: -6 00:28:43.220 Write completed with error (sct=0, sc=8) 00:28:43.220 starting I/O failed: -6 00:28:43.220 Write completed with error (sct=0, sc=8) 00:28:43.220 starting I/O failed: -6 00:28:43.220 Write completed with error (sct=0, sc=8) 00:28:43.220 starting I/O failed: -6 00:28:43.220 Write completed with error 
(sct=0, sc=8) 00:28:43.220 starting I/O failed: -6 00:28:43.220 Write completed with error (sct=0, sc=8) 00:28:43.220 starting I/O failed: -6 00:28:43.220 Write completed with error (sct=0, sc=8) 00:28:43.220 starting I/O failed: -6 00:28:43.220 Write completed with error (sct=0, sc=8) 00:28:43.220 starting I/O failed: -6 00:28:43.220 Write completed with error (sct=0, sc=8) 00:28:43.220 starting I/O failed: -6 00:28:43.220 Write completed with error (sct=0, sc=8) 00:28:43.220 starting I/O failed: -6 00:28:43.220 Write completed with error (sct=0, sc=8) 00:28:43.220 starting I/O failed: -6 00:28:43.220 Write completed with error (sct=0, sc=8) 00:28:43.220 starting I/O failed: -6 00:28:43.220 Write completed with error (sct=0, sc=8) 00:28:43.220 starting I/O failed: -6 00:28:43.220 Write completed with error (sct=0, sc=8) 00:28:43.220 starting I/O failed: -6 00:28:43.220 Write completed with error (sct=0, sc=8) 00:28:43.220 starting I/O failed: -6 00:28:43.220 Write completed with error (sct=0, sc=8) 00:28:43.220 starting I/O failed: -6 00:28:43.220 Write completed with error (sct=0, sc=8) 00:28:43.220 starting I/O failed: -6 00:28:43.220 Write completed with error (sct=0, sc=8) 00:28:43.220 starting I/O failed: -6 00:28:43.220 Write completed with error (sct=0, sc=8) 00:28:43.220 starting I/O failed: -6 00:28:43.220 Write completed with error (sct=0, sc=8) 00:28:43.220 starting I/O failed: -6 00:28:43.220 Write completed with error (sct=0, sc=8) 00:28:43.220 starting I/O failed: -6 00:28:43.220 Write completed with error (sct=0, sc=8) 00:28:43.220 starting I/O failed: -6 00:28:43.220 Write completed with error (sct=0, sc=8) 00:28:43.220 starting I/O failed: -6 00:28:43.220 Write completed with error (sct=0, sc=8) 00:28:43.220 starting I/O failed: -6 00:28:43.220 Write completed with error (sct=0, sc=8) 00:28:43.220 starting I/O failed: -6 00:28:43.220 Write completed with error (sct=0, sc=8) 00:28:43.220 starting I/O failed: -6 00:28:43.220 Write completed with error (sct=0, sc=8) 00:28:43.220 starting I/O failed: -6 00:28:43.220 Write completed with error (sct=0, sc=8) 00:28:43.220 starting I/O failed: -6 00:28:43.220 Write completed with error (sct=0, sc=8) 00:28:43.220 starting I/O failed: -6 00:28:43.220 Write completed with error (sct=0, sc=8) 00:28:43.220 starting I/O failed: -6 00:28:43.220 [2024-12-13 05:44:42.973608] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:43.220 NVMe io qpair process completion error 00:28:43.220 Write completed with error (sct=0, sc=8) 00:28:43.220 Write completed with error (sct=0, sc=8) 00:28:43.220 starting I/O failed: -6 00:28:43.220 Write completed with error (sct=0, sc=8) 00:28:43.220 Write completed with error (sct=0, sc=8) 00:28:43.220 Write completed with error (sct=0, sc=8) 00:28:43.220 Write completed with error (sct=0, sc=8) 00:28:43.220 starting I/O failed: -6 00:28:43.220 Write completed with error (sct=0, sc=8) 00:28:43.220 Write completed with error (sct=0, sc=8) 00:28:43.220 Write completed with error (sct=0, sc=8) 00:28:43.220 Write completed with error (sct=0, sc=8) 00:28:43.220 starting I/O failed: -6 00:28:43.220 Write completed with error (sct=0, sc=8) 00:28:43.220 Write completed with error (sct=0, sc=8) 00:28:43.220 Write completed with error (sct=0, sc=8) 00:28:43.220 Write completed with error (sct=0, sc=8) 00:28:43.220 starting I/O failed: -6 00:28:43.220 Write completed with error (sct=0, sc=8) 00:28:43.220 
Write completed with error (sct=0, sc=8) 00:28:43.220 Write completed with error (sct=0, sc=8) 00:28:43.220 Write completed with error (sct=0, sc=8) 00:28:43.220 starting I/O failed: -6 00:28:43.220 Write completed with error (sct=0, sc=8) 00:28:43.220 Write completed with error (sct=0, sc=8) 00:28:43.220 Write completed with error (sct=0, sc=8) 00:28:43.220 Write completed with error (sct=0, sc=8) 00:28:43.220 starting I/O failed: -6 00:28:43.220 Write completed with error (sct=0, sc=8) 00:28:43.220 Write completed with error (sct=0, sc=8) 00:28:43.220 Write completed with error (sct=0, sc=8) 00:28:43.220 Write completed with error (sct=0, sc=8) 00:28:43.220 starting I/O failed: -6 00:28:43.220 Write completed with error (sct=0, sc=8) 00:28:43.220 Write completed with error (sct=0, sc=8) 00:28:43.220 Write completed with error (sct=0, sc=8) 00:28:43.220 Write completed with error (sct=0, sc=8) 00:28:43.220 starting I/O failed: -6 00:28:43.220 Write completed with error (sct=0, sc=8) 00:28:43.220 Write completed with error (sct=0, sc=8) 00:28:43.220 Write completed with error (sct=0, sc=8) 00:28:43.220 Write completed with error (sct=0, sc=8) 00:28:43.220 starting I/O failed: -6 00:28:43.220 Write completed with error (sct=0, sc=8) 00:28:43.220 Write completed with error (sct=0, sc=8) 00:28:43.220 Write completed with error (sct=0, sc=8) 00:28:43.220 Write completed with error (sct=0, sc=8) 00:28:43.220 starting I/O failed: -6 00:28:43.220 Write completed with error (sct=0, sc=8) 00:28:43.220 Write completed with error (sct=0, sc=8) 00:28:43.220 [2024-12-13 05:44:42.974632] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:43.220 Write completed with error (sct=0, sc=8) 00:28:43.220 starting I/O failed: -6 00:28:43.220 Write completed with error (sct=0, sc=8) 00:28:43.220 Write completed with error (sct=0, sc=8) 00:28:43.220 starting I/O failed: -6 00:28:43.220 Write completed with error (sct=0, sc=8) 00:28:43.220 Write completed with error (sct=0, sc=8) 00:28:43.220 starting I/O failed: -6 00:28:43.220 Write completed with error (sct=0, sc=8) 00:28:43.220 Write completed with error (sct=0, sc=8) 00:28:43.220 starting I/O failed: -6 00:28:43.220 Write completed with error (sct=0, sc=8) 00:28:43.220 Write completed with error (sct=0, sc=8) 00:28:43.220 starting I/O failed: -6 00:28:43.220 Write completed with error (sct=0, sc=8) 00:28:43.220 Write completed with error (sct=0, sc=8) 00:28:43.220 starting I/O failed: -6 00:28:43.220 Write completed with error (sct=0, sc=8) 00:28:43.220 Write completed with error (sct=0, sc=8) 00:28:43.220 starting I/O failed: -6 00:28:43.220 Write completed with error (sct=0, sc=8) 00:28:43.220 Write completed with error (sct=0, sc=8) 00:28:43.220 starting I/O failed: -6 00:28:43.220 Write completed with error (sct=0, sc=8) 00:28:43.220 Write completed with error (sct=0, sc=8) 00:28:43.220 starting I/O failed: -6 00:28:43.220 Write completed with error (sct=0, sc=8) 00:28:43.220 Write completed with error (sct=0, sc=8) 00:28:43.220 starting I/O failed: -6 00:28:43.220 Write completed with error (sct=0, sc=8) 00:28:43.220 Write completed with error (sct=0, sc=8) 00:28:43.220 starting I/O failed: -6 00:28:43.220 Write completed with error (sct=0, sc=8) 00:28:43.220 Write completed with error (sct=0, sc=8) 00:28:43.220 starting I/O failed: -6 00:28:43.220 Write completed with error (sct=0, sc=8) 00:28:43.220 Write completed with error (sct=0, sc=8) 
00:28:43.220 starting I/O failed: -6 00:28:43.220 Write completed with error (sct=0, sc=8) 00:28:43.220 Write completed with error (sct=0, sc=8) 00:28:43.220 starting I/O failed: -6 00:28:43.220 Write completed with error (sct=0, sc=8) 00:28:43.220 Write completed with error (sct=0, sc=8) 00:28:43.220 starting I/O failed: -6 00:28:43.220 Write completed with error (sct=0, sc=8) 00:28:43.220 Write completed with error (sct=0, sc=8) 00:28:43.220 starting I/O failed: -6 00:28:43.220 Write completed with error (sct=0, sc=8) 00:28:43.220 Write completed with error (sct=0, sc=8) 00:28:43.220 starting I/O failed: -6 00:28:43.220 Write completed with error (sct=0, sc=8) 00:28:43.220 Write completed with error (sct=0, sc=8) 00:28:43.220 starting I/O failed: -6 00:28:43.220 Write completed with error (sct=0, sc=8) 00:28:43.220 Write completed with error (sct=0, sc=8) 00:28:43.220 starting I/O failed: -6 00:28:43.220 Write completed with error (sct=0, sc=8) 00:28:43.220 Write completed with error (sct=0, sc=8) 00:28:43.220 starting I/O failed: -6 00:28:43.220 [2024-12-13 05:44:42.975425] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:43.220 Write completed with error (sct=0, sc=8) 00:28:43.220 Write completed with error (sct=0, sc=8) 00:28:43.220 starting I/O failed: -6 00:28:43.220 Write completed with error (sct=0, sc=8) 00:28:43.220 starting I/O failed: -6 00:28:43.220 Write completed with error (sct=0, sc=8) 00:28:43.220 starting I/O failed: -6 00:28:43.220 Write completed with error (sct=0, sc=8) 00:28:43.220 Write completed with error (sct=0, sc=8) 00:28:43.220 starting I/O failed: -6 00:28:43.220 Write completed with error (sct=0, sc=8) 00:28:43.220 starting I/O failed: -6 00:28:43.220 Write completed with error (sct=0, sc=8) 00:28:43.221 starting I/O failed: -6 00:28:43.221 Write completed with error (sct=0, sc=8) 00:28:43.221 Write completed with error (sct=0, sc=8) 00:28:43.221 starting I/O failed: -6 00:28:43.221 Write completed with error (sct=0, sc=8) 00:28:43.221 starting I/O failed: -6 00:28:43.221 Write completed with error (sct=0, sc=8) 00:28:43.221 starting I/O failed: -6 00:28:43.221 Write completed with error (sct=0, sc=8) 00:28:43.221 Write completed with error (sct=0, sc=8) 00:28:43.221 starting I/O failed: -6 00:28:43.221 Write completed with error (sct=0, sc=8) 00:28:43.221 starting I/O failed: -6 00:28:43.221 Write completed with error (sct=0, sc=8) 00:28:43.221 starting I/O failed: -6 00:28:43.221 Write completed with error (sct=0, sc=8) 00:28:43.221 Write completed with error (sct=0, sc=8) 00:28:43.221 starting I/O failed: -6 00:28:43.221 Write completed with error (sct=0, sc=8) 00:28:43.221 starting I/O failed: -6 00:28:43.221 Write completed with error (sct=0, sc=8) 00:28:43.221 starting I/O failed: -6 00:28:43.221 Write completed with error (sct=0, sc=8) 00:28:43.221 Write completed with error (sct=0, sc=8) 00:28:43.221 starting I/O failed: -6 00:28:43.221 Write completed with error (sct=0, sc=8) 00:28:43.221 starting I/O failed: -6 00:28:43.221 Write completed with error (sct=0, sc=8) 00:28:43.221 starting I/O failed: -6 00:28:43.221 Write completed with error (sct=0, sc=8) 00:28:43.221 Write completed with error (sct=0, sc=8) 00:28:43.221 starting I/O failed: -6 00:28:43.221 Write completed with error (sct=0, sc=8) 00:28:43.221 starting I/O failed: -6 00:28:43.221 Write completed with error (sct=0, sc=8) 00:28:43.221 starting I/O failed: -6 
00:28:43.221 Write completed with error (sct=0, sc=8) 00:28:43.221 Write completed with error (sct=0, sc=8) 00:28:43.221 starting I/O failed: -6 00:28:43.221 Write completed with error (sct=0, sc=8) 00:28:43.221 starting I/O failed: -6 00:28:43.221 Write completed with error (sct=0, sc=8) 00:28:43.221 starting I/O failed: -6 00:28:43.221 Write completed with error (sct=0, sc=8) 00:28:43.221 Write completed with error (sct=0, sc=8) 00:28:43.221 starting I/O failed: -6 00:28:43.221 Write completed with error (sct=0, sc=8) 00:28:43.221 starting I/O failed: -6 00:28:43.221 Write completed with error (sct=0, sc=8) 00:28:43.221 starting I/O failed: -6 00:28:43.221 Write completed with error (sct=0, sc=8) 00:28:43.221 Write completed with error (sct=0, sc=8) 00:28:43.221 starting I/O failed: -6 00:28:43.221 Write completed with error (sct=0, sc=8) 00:28:43.221 starting I/O failed: -6 00:28:43.221 Write completed with error (sct=0, sc=8) 00:28:43.221 starting I/O failed: -6 00:28:43.221 Write completed with error (sct=0, sc=8) 00:28:43.221 Write completed with error (sct=0, sc=8) 00:28:43.221 starting I/O failed: -6 00:28:43.221 Write completed with error (sct=0, sc=8) 00:28:43.221 starting I/O failed: -6 00:28:43.221 Write completed with error (sct=0, sc=8) 00:28:43.221 starting I/O failed: -6 00:28:43.221 Write completed with error (sct=0, sc=8) 00:28:43.221 Write completed with error (sct=0, sc=8) 00:28:43.221 starting I/O failed: -6 00:28:43.221 Write completed with error (sct=0, sc=8) 00:28:43.221 starting I/O failed: -6 00:28:43.221 Write completed with error (sct=0, sc=8) 00:28:43.221 starting I/O failed: -6 00:28:43.221 Write completed with error (sct=0, sc=8) 00:28:43.221 Write completed with error (sct=0, sc=8) 00:28:43.221 starting I/O failed: -6 00:28:43.221 Write completed with error (sct=0, sc=8) 00:28:43.221 starting I/O failed: -6 00:28:43.221 [2024-12-13 05:44:42.976471] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:28:43.221 Write completed with error (sct=0, sc=8) 00:28:43.221 starting I/O failed: -6 00:28:43.221 Write completed with error (sct=0, sc=8) 00:28:43.221 starting I/O failed: -6 00:28:43.221 Write completed with error (sct=0, sc=8) 00:28:43.221 starting I/O failed: -6 00:28:43.221 Write completed with error (sct=0, sc=8) 00:28:43.221 starting I/O failed: -6 00:28:43.221 Write completed with error (sct=0, sc=8) 00:28:43.221 starting I/O failed: -6 00:28:43.221 Write completed with error (sct=0, sc=8) 00:28:43.221 starting I/O failed: -6 00:28:43.221 Write completed with error (sct=0, sc=8) 00:28:43.221 starting I/O failed: -6 00:28:43.221 Write completed with error (sct=0, sc=8) 00:28:43.221 starting I/O failed: -6 00:28:43.221 Write completed with error (sct=0, sc=8) 00:28:43.221 starting I/O failed: -6 00:28:43.221 Write completed with error (sct=0, sc=8) 00:28:43.221 starting I/O failed: -6 00:28:43.221 Write completed with error (sct=0, sc=8) 00:28:43.221 starting I/O failed: -6 00:28:43.221 Write completed with error (sct=0, sc=8) 00:28:43.221 starting I/O failed: -6 00:28:43.221 Write completed with error (sct=0, sc=8) 00:28:43.221 starting I/O failed: -6 00:28:43.221 Write completed with error (sct=0, sc=8) 00:28:43.221 starting I/O failed: -6 00:28:43.221 Write completed with error (sct=0, sc=8) 00:28:43.221 starting I/O failed: -6 00:28:43.221 Write completed with error (sct=0, sc=8) 00:28:43.221 starting I/O failed: -6 00:28:43.221 Write 
completed with error (sct=0, sc=8)
00:28:43.221 starting I/O failed: -6
[runs of identical "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries, one pair per aborted write, are interleaved with the qpair errors below; the duplicate entries are elided]
00:28:43.221 [2024-12-13 05:44:42.978291] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:43.221 NVMe io qpair process completion error
00:28:43.222 [2024-12-13 05:44:42.979202] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:43.222 [2024-12-13 05:44:42.980068] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:43.222 [2024-12-13 05:44:42.981132] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:28:43.223 [2024-12-13 05:44:42.983195] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:43.223 NVMe io qpair process completion error
00:28:43.223 [2024-12-13 05:44:42.984224] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:43.223 [2024-12-13 05:44:42.985108] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:43.224 [2024-12-13 05:44:42.986089] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:28:43.224 [2024-12-13 05:44:42.993312] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:43.224 NVMe io qpair process completion error
00:28:43.226 [2024-12-13 05:44:42.999080] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:43.226 NVMe io qpair process completion error
00:28:43.226 [2024-12-13 05:44:43.000239] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:43.226 [2024-12-13 05:44:43.001214] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:43.227 [2024-12-13 05:44:43.002287] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:43.227 [2024-12-13 05:44:43.004826] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:28:43.227 NVMe io qpair process completion error
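Decoded, the failure pattern above is what a forced shutdown should produce: sct=0, sc=8 is the generic NVMe status code 0x08, Command Aborted due to SQ Deletion, and -6 is -ENXIO, matching the "No such device or address" text, so each flagged write was aborted because its submission queue was torn down while the I/O was still in flight. A quick way to tally the transport errors per subsystem from a saved copy of this log is sketched below (the log file name is an assumption):

  # Count CQ transport errors per subsystem NQN, highest counts first.
  grep -o 'nqn\.2016-06\.io\.spdk:cnode[0-9]*, 1] CQ transport error -6' nvmf_shutdown_tc4.log |
    sort | uniq -c | sort -rn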
00:28:43.227 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:28:43.227 Controller IO queue size 128, less than required.
00:28:43.227 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:43.227 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
00:28:43.227 Controller IO queue size 128, less than required.
00:28:43.227 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:43.227 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:28:43.227 Controller IO queue size 128, less than required.
00:28:43.227 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:43.227 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:28:43.227 Controller IO queue size 128, less than required.
00:28:43.227 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:43.227 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3
00:28:43.227 Controller IO queue size 128, less than required.
00:28:43.227 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:43.227 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:28:43.227 Controller IO queue size 128, less than required.
00:28:43.227 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:43.227 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:28:43.227 Controller IO queue size 128, less than required.
00:28:43.227 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:43.227 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:28:43.227 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:28:43.228 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:28:43.228 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:28:43.228 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:28:43.228 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:28:43.228 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:28:43.228 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:28:43.228 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:28:43.228 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:28:43.228 Initialization complete. Launching workers.
00:28:43.228 ========================================================
00:28:43.228 Latency(us)
00:28:43.228 Device Information :                                                        IOPS      MiB/s    Average        min        max
00:28:43.228 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0:  2228.54    95.76    57440.82     920.34  107631.10
00:28:43.228 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0:   2214.46    95.15    57814.61     925.03  106722.17
00:28:43.228 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0:   2206.99    94.83    58028.07     844.76  104976.69
00:28:43.228 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0:   2217.87    95.30    57758.09     819.68  102762.08
00:28:43.228 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0:   2192.69    94.22    58457.86     787.82  106137.00
00:28:43.228 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0:   2194.19    94.28    58430.91     821.53  100493.52
00:28:43.228 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0:   2190.13    94.11    58555.76     919.22   99648.02
00:28:43.228 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0:   2207.63    94.86    58165.79     869.00  119055.76
00:28:43.228 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0:   2243.48    96.40    57271.86    1026.45  100937.92
00:28:43.228 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:   2225.56    95.63    57062.13     845.13   98921.21
00:28:43.228 ========================================================
00:28:43.228 Total :                                                                    22121.55   950.54    57895.15     787.82  119055.76
00:28:43.228
00:28:43.228 [2024-12-13 05:44:43.007878] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9073a0 is same with the state(6) to be set
00:28:43.228 [2024-12-13 05:44:43.007925] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9076d0 is same with the state(6) to be set
00:28:43.228 [2024-12-13 05:44:43.007955] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x998f00 is same with the state(6) to be set
00:28:43.228 [2024-12-13 05:44:43.007983] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x907d30 is same with the state(6) to be set
00:28:43.228 [2024-12-13 05:44:43.008011] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x993ff0 is same with the state(6) to be set
00:28:43.228 [2024-12-13 05:44:43.008038] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x908060 is same with the state(6) to be set
00:28:43.228 [2024-12-13 05:44:43.008065] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9086c0 is same with the state(6) to be set
00:28:43.228 [2024-12-13 05:44:43.008092] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x907a00 is same with the state(6) to be set
00:28:43.228 [2024-12-13 05:44:43.008118] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x908390 is same with the state(6) to be set
00:28:43.228 [2024-12-13 05:44:43.008147] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x907070 is same with the state(6) to be set
00:28:43.228 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:28:43.487 05:44:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
00:28:44.424 05:44:44
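The failures above are the expected outcome of this shutdown test: the target is torn down while spdk_nvme_perf still has writes in flight, so outstanding commands complete with generic status 0x08 (sct=0, sc=8, Command Aborted due to SQ Deletion), the qpairs then surface CQ transport error -6 (ENXIO), and perf exits with "errors occurred". The repeated queue-size warning only means the tool was asked for a deeper queue than the controller's 128-entry IO queues. A minimal sketch of a standalone perf run that stays under that limit, using standard spdk_nvme_perf flags (the -q/-o/-w/-t values are illustrative, not taken from this run):

    PERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
    # Queue depth 64 stays below the controller's IO queue size of 128,
    # so requests are not queued inside the NVMe driver.
    $PERF -q 64 -o 4096 -w randwrite -t 10 \
          -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'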
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 429525 00:28:44.424 05:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0 00:28:44.424 05:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 429525 00:28:44.424 05:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@640 -- # local arg=wait 00:28:44.424 05:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:44.424 05:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait 00:28:44.424 05:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:44.424 05:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 429525 00:28:44.424 05:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1 00:28:44.424 05:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:44.424 05:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:44.424 05:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:44.424 05:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:28:44.424 05:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:28:44.424 05:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:44.424 05:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:44.424 05:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:28:44.424 05:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:44.424 05:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:28:44.424 05:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:44.424 05:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:28:44.424 05:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:44.424 05:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:44.424 rmmod nvme_tcp 00:28:44.424 rmmod nvme_fabrics 00:28:44.424 rmmod nvme_keyring 00:28:44.424 05:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:44.424 05:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:28:44.424 05:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@129 -- # return 0 00:28:44.424 05:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 429365 ']' 00:28:44.424 05:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 429365 00:28:44.424 05:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 429365 ']' 00:28:44.424 05:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 429365 00:28:44.424 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (429365) - No such process 00:28:44.424 05:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 429365 is not found' 00:28:44.424 Process with pid 429365 is not found 00:28:44.424 05:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:44.424 05:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:44.424 05:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:44.425 05:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:28:44.425 05:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save 00:28:44.425 05:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:44.425 05:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore 00:28:44.425 05:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:44.425 05:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:44.425 05:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:44.425 05:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:44.425 05:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:46.960 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:46.960 00:28:46.960 real 0m9.772s 00:28:46.960 user 0m24.982s 00:28:46.960 sys 0m5.093s 00:28:46.960 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:46.960 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:46.960 ************************************ 00:28:46.960 END TEST nvmf_shutdown_tc4 00:28:46.960 ************************************ 00:28:46.960 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:28:46.960 00:28:46.960 real 0m39.572s 00:28:46.960 user 1m35.687s 00:28:46.960 sys 0m13.757s 00:28:46.960 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:46.960 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- 
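The cleanup path above probes pid 429365 with kill -0 before trying to stop it; because shutdown_tc4 already brought the target down, the probe fails ("No such process") and the helper merely logs that the process is gone. A rough paraphrase of the killprocess pattern traced here (the real helper in autotest_common.sh does more, e.g. handling sudo-owned processes):

    killprocess() {
      local pid=$1
      # kill -0 delivers no signal; it only tests whether the pid exists
      # and is signalable by the caller.
      if kill -0 "$pid" 2>/dev/null; then
        kill "$pid" && wait "$pid"
      else
        echo "Process with pid $pid is not found"
      fi
    }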
# set +x 00:28:46.960 ************************************ 00:28:46.960 END TEST nvmf_shutdown 00:28:46.960 ************************************ 00:28:46.960 05:44:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:28:46.960 05:44:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:46.960 05:44:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:46.960 05:44:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:28:46.960 ************************************ 00:28:46.960 START TEST nvmf_nsid 00:28:46.960 ************************************ 00:28:46.960 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:28:46.960 * Looking for test storage... 00:28:46.960 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:46.960 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:46.960 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lcov --version 00:28:46.960 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:46.960 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:46.960 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:46.960 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:46.960 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:46.960 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:28:46.960 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:28:46.960 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:28:46.960 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:28:46.960 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:28:46.960 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:28:46.960 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:28:46.960 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:46.960 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:28:46.960 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:28:46.960 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:46.960 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:46.960 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:28:46.960 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:28:46.960 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:46.960 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:28:46.960 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:28:46.960 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:28:46.960 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:28:46.960 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:46.960 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:28:46.960 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:28:46.960 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:46.960 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:46.960 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:28:46.960 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:46.960 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:46.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:46.960 --rc genhtml_branch_coverage=1 00:28:46.960 --rc genhtml_function_coverage=1 00:28:46.960 --rc genhtml_legend=1 00:28:46.960 --rc geninfo_all_blocks=1 00:28:46.960 --rc geninfo_unexecuted_blocks=1 00:28:46.960 00:28:46.960 ' 00:28:46.960 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:46.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:46.960 --rc genhtml_branch_coverage=1 00:28:46.960 --rc genhtml_function_coverage=1 00:28:46.960 --rc genhtml_legend=1 00:28:46.960 --rc geninfo_all_blocks=1 00:28:46.960 --rc geninfo_unexecuted_blocks=1 00:28:46.960 00:28:46.960 ' 00:28:46.960 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:46.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:46.960 --rc genhtml_branch_coverage=1 00:28:46.960 --rc genhtml_function_coverage=1 00:28:46.960 --rc genhtml_legend=1 00:28:46.960 --rc geninfo_all_blocks=1 00:28:46.960 --rc geninfo_unexecuted_blocks=1 00:28:46.960 00:28:46.960 ' 00:28:46.960 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:46.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:46.960 --rc genhtml_branch_coverage=1 00:28:46.960 --rc genhtml_function_coverage=1 00:28:46.960 --rc genhtml_legend=1 00:28:46.960 --rc geninfo_all_blocks=1 00:28:46.960 --rc geninfo_unexecuted_blocks=1 00:28:46.960 00:28:46.960 ' 00:28:46.960 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:46.960 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:28:46.960 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:28:46.960 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:46.960 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:46.960 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:46.961 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:46.961 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:46.961 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:46.961 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:46.961 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:46.961 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:46.961 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:28:46.961 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:28:46.961 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:46.961 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:46.961 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:46.961 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:46.961 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:46.961 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:28:46.961 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:46.961 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:46.961 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:46.961 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:46.961 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:46.961 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:46.961 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:28:46.961 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:46.961 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:28:46.961 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:46.961 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:46.961 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:46.961 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:46.961 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:46.961 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:46.961 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:46.961 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:46.961 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:46.961 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:46.961 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:28:46.961 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:28:46.961 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:28:46.961 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:28:46.961 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:28:46.961 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:28:46.961 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:46.961 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:46.961 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:46.961 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:46.961 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:46.961 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:46.961 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:46.961 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:46.961 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:46.961 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:46.961 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:28:46.961 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:28:53.532 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:53.532 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:28:53.532 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:53.532 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:53.532 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:53.532 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:53.532 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:53.532 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:28:53.532 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:53.532 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:28:53.532 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:28:53.532 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:28:53.532 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:28:53.532 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # mlx=() 00:28:53.532 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:28:53.532 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:53.532 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:53.532 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:53.532 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:53.532 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:53.532 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:53.532 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:53.532 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:53.532 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:53.532 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:53.532 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:53.532 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:53.532 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:53.532 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:53.532 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:53.532 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:53.532 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:53.532 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:53.532 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:53.532 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:53.532 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:53.532 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:53.532 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:53.532 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:53.532 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:53.532 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:53.532 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:53.532 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:53.532 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:53.532 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:53.532 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:53.532 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:53.532 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:53.532 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
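gather_supported_nvmf_pci_devs works by matching a whitelist of Intel (E810 0x1592/0x159b, X722 0x37d2) and Mellanox device IDs against the PCI bus; here both ports of an E810 NIC (8086:159b, ice driver) are accepted. A small sketch of the same discovery idea, assuming lspci and sysfs are available:

    # List E810 ports by vendor:device ID and map each PCI function to the
    # kernel net interface exposed under sysfs.
    for pci in $(lspci -Dmm -d 8086:159b | awk '{print $1}'); do
      for net in /sys/bus/pci/devices/"$pci"/net/*; do
        [ -e "$net" ] || continue   # skip functions with no bound net interface
        echo "Found net device under $pci: $(basename "$net")"
      done
    done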
00:28:53.532 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:53.532 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:53.532 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:53.532 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:53.532 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:53.532 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:53.532 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:53.532 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:53.532 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:53.532 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:53.532 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:53.532 Found net devices under 0000:af:00.0: cvl_0_0 00:28:53.532 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:53.532 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:53.532 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:53.532 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:53.532 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:53.532 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:53.532 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:53.532 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:53.532 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:53.532 Found net devices under 0000:af:00.1: cvl_0_1 00:28:53.532 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:53.532 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:53.532 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:28:53.532 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:53.532 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:53.532 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:53.532 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:53.532 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:53.532 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:53.532 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:53.532 05:44:52 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:53.532 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:53.532 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:53.532 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:53.532 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:53.532 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:53.532 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:53.532 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:53.532 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:53.532 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:53.532 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:53.532 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:53.532 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:53.532 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:53.532 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:53.532 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:53.532 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:53.532 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:53.532 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:53.532 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:53.532 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.385 ms 00:28:53.532 00:28:53.532 --- 10.0.0.2 ping statistics --- 00:28:53.533 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:53.533 rtt min/avg/max/mdev = 0.385/0.385/0.385/0.000 ms 00:28:53.533 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:53.533 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:53.533 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:28:53.533 00:28:53.533 --- 10.0.0.1 ping statistics --- 00:28:53.533 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:53.533 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:28:53.533 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:53.533 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:28:53.533 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:53.533 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:53.533 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:53.533 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:53.533 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:53.533 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:53.533 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:53.533 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:28:53.533 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:53.533 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:53.533 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:28:53.533 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=433894 00:28:53.533 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 433894 00:28:53.533 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:28:53.533 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 433894 ']' 00:28:53.533 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:53.533 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:53.533 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:53.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:53.533 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:53.533 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:28:53.533 [2024-12-13 05:44:52.661822] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
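The interface setup traced above splits the two E810 ports across network namespaces: cvl_0_0 becomes the target side (10.0.0.2) inside cvl_0_0_ns_spdk, cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), and the two pings confirm reachability in both directions before the target application starts. Condensed into one place (interface and namespace names as in this log):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Open the NVMe/TCP port on the initiator-facing interface.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> root ns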
00:28:53.533 [2024-12-13 05:44:52.661875] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:53.533 [2024-12-13 05:44:52.740871] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:53.533 [2024-12-13 05:44:52.763078] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:53.533 [2024-12-13 05:44:52.763115] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:53.533 [2024-12-13 05:44:52.763121] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:53.533 [2024-12-13 05:44:52.763127] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:53.533 [2024-12-13 05:44:52.763132] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:53.533 [2024-12-13 05:44:52.763627] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:28:53.533 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:53.533 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:28:53.533 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:53.533 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:53.533 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:28:53.533 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:53.533 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:28:53.533 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=434046 00:28:53.533 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:28:53.533 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:28:53.533 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:28:53.533 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:28:53.533 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:53.533 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:53.533 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:53.533 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:53.533 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:53.533 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:53.533 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:53.533 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:53.533 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 
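Two SPDK applications are launched here: nvmf_tgt (pid 433894) runs inside the namespace on core 0 and owns the 10.0.0.2 listener, while a second spdk_tgt (pid 434046) runs in the root namespace on core 1 with its own RPC socket so the nsid test can drive it independently. A sketch of that launch plus a waitforlisten-style readiness poll (an assumed loop; the real helper also verifies the pid is still alive):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 1 &
    "$SPDK/build/bin/spdk_tgt" -m 2 -r /var/tmp/tgt2.sock &
    # The app is ready once its RPC socket answers.
    until "$SPDK/scripts/rpc.py" -s /var/tmp/tgt2.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
    done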
00:28:53.533 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:28:53.533 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:28:53.533 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=95e48054-b854-4ffa-b460-f1606bd26124 00:28:53.533 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:28:53.533 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=57b00d13-219e-4e5b-9fa6-1425660ce429 00:28:53.533 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:28:53.533 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=cc4bf5e7-8316-46af-9728-308eba74879d 00:28:53.533 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:28:53.533 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.533 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:28:53.533 null0 00:28:53.533 null1 00:28:53.533 [2024-12-13 05:44:52.942763] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:28:53.533 [2024-12-13 05:44:52.942807] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid434046 ] 00:28:53.533 null2 00:28:53.533 [2024-12-13 05:44:52.947207] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:53.533 [2024-12-13 05:44:52.971400] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:53.533 05:44:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.533 05:44:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 434046 /var/tmp/tgt2.sock 00:28:53.533 05:44:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 434046 ']' 00:28:53.533 05:44:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:28:53.533 05:44:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:53.533 05:44:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:28:53.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
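The uuidgen calls above fix the UUIDs the three namespaces will carry, and the rpc_cmd batch creates the null bdevs (null0, null1, null2) behind them; the rpc.py call that follows configures the second target over /var/tmp/tgt2.sock. The authoritative call sequence lives in target/nsid.sh, so the following is only a hedged reconstruction of the general pattern: expose a null bdev as a namespace with an explicit UUID so the NGUID checks further down have known values.

    rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock"
    $rpc nvmf_create_transport -t tcp
    $rpc bdev_null_create null0 100 4096      # 100 MiB backing bdev, 4 KiB blocks
    $rpc nvmf_create_subsystem nqn.2024-10.io.spdk:cnode2 -a
    $rpc nvmf_subsystem_add_ns nqn.2024-10.io.spdk:cnode2 null0 \
         -u 95e48054-b854-4ffa-b460-f1606bd26124
    # ... likewise null1/null2 with the ns2/ns3 UUIDs ...
    $rpc nvmf_subsystem_add_listener nqn.2024-10.io.spdk:cnode2 -t tcp -a 10.0.0.1 -s 4421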
00:28:53.533 05:44:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:53.533 05:44:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:28:53.533 [2024-12-13 05:44:53.014920] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:53.533 [2024-12-13 05:44:53.037373] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:28:53.533 05:44:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:53.533 05:44:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:28:53.533 05:44:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:28:53.533 [2024-12-13 05:44:53.544061] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:53.792 [2024-12-13 05:44:53.560145] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:28:53.792 nvme0n1 nvme0n2 00:28:53.792 nvme1n1 00:28:53.792 05:44:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:28:53.792 05:44:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:28:53.792 05:44:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 00:28:54.729 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:28:54.729 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:28:54.729 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:28:54.729 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:28:54.729 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:28:54.729 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:28:54.729 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:28:54.729 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:28:54.729 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:28:54.729 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:28:54.729 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:28:54.729 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:28:54.729 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:28:56.106 05:44:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:28:56.106 05:44:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:28:56.106 05:44:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:28:56.106 05:44:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:28:56.106 05:44:55 
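After nvme connect returns, the namespace block devices appear asynchronously, so the script polls with waitforblk until udev has created the node; the i counter and sleep traced above implement the retry. Paraphrased as a standalone helper:

    waitforblk() {
      local name=$1 i=0
      # Poll lsblk until the named block device shows up, ~15 s at most.
      while ! lsblk -l -o NAME | grep -q -w "$name"; do
        (( i++ >= 15 )) && return 1
        sleep 1
      done
    }
    waitforblk nvme0n1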
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:28:56.106 05:44:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 95e48054-b854-4ffa-b460-f1606bd26124 00:28:56.106 05:44:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:28:56.106 05:44:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:28:56.106 05:44:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:28:56.106 05:44:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:28:56.106 05:44:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:28:56.106 05:44:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=95e48054b8544ffab460f1606bd26124 00:28:56.106 05:44:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 95E48054B8544FFAB460F1606BD26124 00:28:56.106 05:44:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 95E48054B8544FFAB460F1606BD26124 == \9\5\E\4\8\0\5\4\B\8\5\4\4\F\F\A\B\4\6\0\F\1\6\0\6\B\D\2\6\1\2\4 ]] 00:28:56.106 05:44:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:28:56.106 05:44:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:28:56.106 05:44:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:28:56.106 05:44:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:28:56.106 05:44:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:28:56.106 05:44:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:28:56.106 05:44:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:28:56.106 05:44:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 57b00d13-219e-4e5b-9fa6-1425660ce429 00:28:56.106 05:44:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:28:56.106 05:44:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:28:56.106 05:44:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:28:56.106 05:44:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:28:56.106 05:44:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:28:56.106 05:44:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=57b00d13219e4e5b9fa61425660ce429 00:28:56.106 05:44:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 57B00D13219E4E5B9FA61425660CE429 00:28:56.106 05:44:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 57B00D13219E4E5B9FA61425660CE429 == \5\7\B\0\0\D\1\3\2\1\9\E\4\E\5\B\9\F\A\6\1\4\2\5\6\6\0\C\E\4\2\9 ]] 00:28:56.106 05:44:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:28:56.106 05:44:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:28:56.106 05:44:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:28:56.106 05:44:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:28:56.106 05:44:55 
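The check being performed here: the NGUID that the initiator reads back via Identify Namespace must equal the UUID passed to the target when the namespace was created, with the dashes stripped (uuid2nguid is just tr -d -). Condensed, for the first namespace:

    uuid=95e48054-b854-4ffa-b460-f1606bd26124
    expected=$(tr -d - <<< "$uuid")
    actual=$(nvme id-ns /dev/nvme0n1 -o json | jq -r .nguid)
    # Compare case-insensitively; the test upper-cases both sides before matching.
    [[ ${actual^^} == "${expected^^}" ]] && echo "NGUID matches namespace UUID"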
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:28:56.106 05:44:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:28:56.106 05:44:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:28:56.106 05:44:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid cc4bf5e7-8316-46af-9728-308eba74879d 00:28:56.106 05:44:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:28:56.106 05:44:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:28:56.106 05:44:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:28:56.106 05:44:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:28:56.106 05:44:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:28:56.106 05:44:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=cc4bf5e7831646af9728308eba74879d 00:28:56.106 05:44:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo CC4BF5E7831646AF9728308EBA74879D 00:28:56.106 05:44:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ CC4BF5E7831646AF9728308EBA74879D == \C\C\4\B\F\5\E\7\8\3\1\6\4\6\A\F\9\7\2\8\3\0\8\E\B\A\7\4\8\7\9\D ]] 00:28:56.106 05:44:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:28:56.106 05:44:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:28:56.106 05:44:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:28:56.106 05:44:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 434046 00:28:56.106 05:44:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 434046 ']' 00:28:56.106 05:44:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 434046 00:28:56.106 05:44:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:28:56.106 05:44:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:56.106 05:44:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 434046 00:28:56.365 05:44:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:56.365 05:44:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:56.365 05:44:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 434046' 00:28:56.365 killing process with pid 434046 00:28:56.365 05:44:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 434046 00:28:56.365 05:44:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 434046 00:28:56.624 05:44:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:28:56.624 05:44:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:56.624 05:44:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:28:56.624 05:44:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:56.624 05:44:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set 
+e 00:28:56.624 05:44:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:56.624 05:44:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:56.624 rmmod nvme_tcp 00:28:56.624 rmmod nvme_fabrics 00:28:56.624 rmmod nvme_keyring 00:28:56.624 05:44:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:56.624 05:44:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:28:56.624 05:44:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:28:56.624 05:44:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 433894 ']' 00:28:56.624 05:44:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 433894 00:28:56.624 05:44:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 433894 ']' 00:28:56.624 05:44:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 433894 00:28:56.624 05:44:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:28:56.624 05:44:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:56.624 05:44:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 433894 00:28:56.624 05:44:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:56.625 05:44:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:56.625 05:44:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 433894' 00:28:56.625 killing process with pid 433894 00:28:56.625 05:44:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 433894 00:28:56.625 05:44:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 433894 00:28:56.884 05:44:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:56.884 05:44:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:56.884 05:44:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:56.884 05:44:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:28:56.884 05:44:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:28:56.884 05:44:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:56.884 05:44:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:28:56.884 05:44:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:56.884 05:44:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:56.884 05:44:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:56.884 05:44:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:56.884 05:44:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:58.790 05:44:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:59.049 00:28:59.049 real 0m12.215s 00:28:59.049 user 0m9.601s 00:28:59.049 
sys 0m5.306s 00:28:59.049 05:44:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:59.049 05:44:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:28:59.049 ************************************ 00:28:59.049 END TEST nvmf_nsid 00:28:59.049 ************************************ 00:28:59.049 05:44:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:28:59.049 00:28:59.049 real 18m34.012s 00:28:59.049 user 49m13.678s 00:28:59.049 sys 4m31.336s 00:28:59.049 05:44:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:59.049 05:44:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:28:59.049 ************************************ 00:28:59.049 END TEST nvmf_target_extra 00:28:59.049 ************************************ 00:28:59.049 05:44:58 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:28:59.049 05:44:58 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:59.049 05:44:58 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:59.049 05:44:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:59.049 ************************************ 00:28:59.049 START TEST nvmf_host 00:28:59.049 ************************************ 00:28:59.049 05:44:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:28:59.049 * Looking for test storage... 00:28:59.049 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:28:59.049 05:44:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:59.049 05:44:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lcov --version 00:28:59.049 05:44:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:59.309 05:44:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:59.309 05:44:59 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:59.309 05:44:59 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:59.309 05:44:59 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:59.309 05:44:59 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:28:59.309 05:44:59 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:28:59.309 05:44:59 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:28:59.309 05:44:59 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:28:59.309 05:44:59 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:28:59.309 05:44:59 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:28:59.309 05:44:59 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:28:59.309 05:44:59 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:59.309 05:44:59 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:28:59.309 05:44:59 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:28:59.309 05:44:59 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:59.309 05:44:59 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:59.309 05:44:59 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:28:59.309 05:44:59 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:28:59.309 05:44:59 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:59.309 05:44:59 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:28:59.309 05:44:59 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:28:59.309 05:44:59 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:28:59.309 05:44:59 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:28:59.309 05:44:59 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:59.309 05:44:59 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:28:59.309 05:44:59 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:28:59.309 05:44:59 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:59.309 05:44:59 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:59.309 05:44:59 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:28:59.309 05:44:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:59.309 05:44:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:59.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:59.309 --rc genhtml_branch_coverage=1 00:28:59.309 --rc genhtml_function_coverage=1 00:28:59.309 --rc genhtml_legend=1 00:28:59.309 --rc geninfo_all_blocks=1 00:28:59.309 --rc geninfo_unexecuted_blocks=1 00:28:59.309 00:28:59.309 ' 00:28:59.309 05:44:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:59.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:59.309 --rc genhtml_branch_coverage=1 00:28:59.309 --rc genhtml_function_coverage=1 00:28:59.309 --rc genhtml_legend=1 00:28:59.309 --rc geninfo_all_blocks=1 00:28:59.309 --rc geninfo_unexecuted_blocks=1 00:28:59.309 00:28:59.309 ' 00:28:59.309 05:44:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:59.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:59.309 --rc genhtml_branch_coverage=1 00:28:59.309 --rc genhtml_function_coverage=1 00:28:59.309 --rc genhtml_legend=1 00:28:59.309 --rc geninfo_all_blocks=1 00:28:59.309 --rc geninfo_unexecuted_blocks=1 00:28:59.309 00:28:59.309 ' 00:28:59.309 05:44:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:59.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:59.309 --rc genhtml_branch_coverage=1 00:28:59.309 --rc genhtml_function_coverage=1 00:28:59.309 --rc genhtml_legend=1 00:28:59.309 --rc geninfo_all_blocks=1 00:28:59.309 --rc geninfo_unexecuted_blocks=1 00:28:59.309 00:28:59.309 ' 00:28:59.310 05:44:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:59.310 05:44:59 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:28:59.310 05:44:59 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:59.310 05:44:59 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:59.310 05:44:59 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:59.310 05:44:59 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:59.310 05:44:59 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
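Stepping back to the nvmf_nsid run that just reported PASS above: its host-side assertions distill to a short routine — connect over NVMe/TCP, poll until the namespace block device appears, then check that the reported NGUID equals the creation UUID with its dashes stripped (that is all uuid2nguid does). A minimal sketch of that pattern, with the address, port, NQN, and UUID copied from the trace; the retry budget mirrors waitforblk, but the exact helper structure here is illustrative, not the harness's verbatim code:

    # Connect to the target, then locate the controller by subsystem NQN.
    nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 \
        --hostid=80b56b8f-cbc7-e911-906e-0017a4403562
    for ctrlr in /sys/class/nvme/nvme*; do
        [[ -e $ctrlr/subsysnqn ]] || continue
        [[ $(<"$ctrlr"/subsysnqn) == nqn.2024-10.io.spdk:cnode2 ]] && break
    done
    # Poll (up to ~15 s, as waitforblk does) until the namespace shows up in lsblk.
    for i in {1..15}; do
        lsblk -l -o NAME | grep -q -w nvme0n1 && break
        sleep 1
    done
    # NGUID must be the namespace UUID with the dashes removed (uuid2nguid).
    expected=$(tr -d - <<< 95e48054-b854-4ffa-b460-f1606bd26124)
    actual=$(nvme id-ns /dev/nvme0n1 -o json | jq -r .nguid)
    [[ ${actual^^} == "${expected^^}" ]]

The uppercase comparison matches the trace's behavior of echoing the NGUID in capitals before the [[ == ]] test, so case differences between nvme-cli output and the stored UUID cannot cause a spurious failure.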
00:28:59.310 05:44:59 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:59.310 05:44:59 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:59.310 05:44:59 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:59.310 05:44:59 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:59.310 05:44:59 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:59.310 05:44:59 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:28:59.310 05:44:59 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:28:59.310 05:44:59 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:59.310 05:44:59 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:59.310 05:44:59 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:59.310 05:44:59 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:59.310 05:44:59 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:59.310 05:44:59 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:28:59.310 05:44:59 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:59.310 05:44:59 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:59.310 05:44:59 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:59.310 05:44:59 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:59.310 05:44:59 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:59.310 05:44:59 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:59.310 05:44:59 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:28:59.310 05:44:59 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:59.310 05:44:59 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:28:59.310 05:44:59 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:59.310 05:44:59 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:59.310 05:44:59 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:59.310 05:44:59 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:59.310 05:44:59 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:59.310 05:44:59 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:59.310 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:59.310 05:44:59 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:59.310 05:44:59 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:59.310 05:44:59 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:59.310 05:44:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:28:59.310 05:44:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:28:59.310 05:44:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:28:59.310 05:44:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:28:59.310 05:44:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:59.310 05:44:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:59.310 05:44:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:59.310 ************************************ 00:28:59.310 START TEST nvmf_multicontroller 00:28:59.310 ************************************ 00:28:59.310 05:44:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:28:59.310 * Looking for test storage... 
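Both the nsid test above and the multicontroller test starting here end through the same nvmftestfini path: kill the target by PID (after confirming via ps that the PID still names the expected reactor process), unload the fabrics modules in a retry loop, and restore only the iptables rules tagged SPDK_NVMF. A condensed sketch of that teardown, assuming a $pid variable holding the target's PID; the loop bounds and commands are taken from the trace:

    # Refuse to kill if the PID has been reused (e.g. now names a sudo wrapper).
    [[ $(ps --no-headers -o comm= "$pid") != sudo ]] && kill "$pid" && wait "$pid"
    # Module refcounts drain asynchronously after disconnect; retry the unload.
    set +e
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
        sleep 1
    done
    set -e
    # Drop only the firewall rules this run added (they carry an SPDK_NVMF comment).
    iptables-save | grep -v SPDK_NVMF | iptables-restore

Filtering iptables-save through grep -v and feeding it back to iptables-restore is what lets parallel jobs on the pool clean up their own rules without touching anyone else's.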
00:28:59.310 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:59.310 05:44:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:59.310 05:44:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lcov --version 00:28:59.310 05:44:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:59.310 05:44:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:59.570 05:44:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:59.570 05:44:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:59.570 05:44:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:59.570 05:44:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:28:59.570 05:44:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:28:59.570 05:44:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:28:59.570 05:44:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:28:59.570 05:44:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:28:59.570 05:44:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:28:59.570 05:44:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:28:59.570 05:44:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:59.570 05:44:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:28:59.570 05:44:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:28:59.570 05:44:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:59.570 05:44:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:59.570 05:44:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:28:59.570 05:44:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:28:59.570 05:44:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:59.570 05:44:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:28:59.570 05:44:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:28:59.570 05:44:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:28:59.570 05:44:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:28:59.570 05:44:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:59.570 05:44:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:28:59.570 05:44:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:28:59.570 05:44:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:59.570 05:44:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:59.570 05:44:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:28:59.570 05:44:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:59.570 05:44:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:59.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:59.570 --rc genhtml_branch_coverage=1 00:28:59.570 --rc genhtml_function_coverage=1 00:28:59.570 --rc genhtml_legend=1 00:28:59.570 --rc geninfo_all_blocks=1 00:28:59.570 --rc geninfo_unexecuted_blocks=1 00:28:59.570 00:28:59.570 ' 00:28:59.570 05:44:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:59.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:59.570 --rc genhtml_branch_coverage=1 00:28:59.570 --rc genhtml_function_coverage=1 00:28:59.570 --rc genhtml_legend=1 00:28:59.570 --rc geninfo_all_blocks=1 00:28:59.570 --rc geninfo_unexecuted_blocks=1 00:28:59.570 00:28:59.570 ' 00:28:59.570 05:44:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:59.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:59.570 --rc genhtml_branch_coverage=1 00:28:59.570 --rc genhtml_function_coverage=1 00:28:59.570 --rc genhtml_legend=1 00:28:59.570 --rc geninfo_all_blocks=1 00:28:59.570 --rc geninfo_unexecuted_blocks=1 00:28:59.570 00:28:59.570 ' 00:28:59.570 05:44:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:59.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:59.570 --rc genhtml_branch_coverage=1 00:28:59.570 --rc genhtml_function_coverage=1 00:28:59.570 --rc genhtml_legend=1 00:28:59.570 --rc geninfo_all_blocks=1 00:28:59.570 --rc geninfo_unexecuted_blocks=1 00:28:59.570 00:28:59.570 ' 00:28:59.570 05:44:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:59.570 05:44:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:28:59.570 05:44:59 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:59.570 05:44:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:59.570 05:44:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:59.570 05:44:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:59.570 05:44:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:59.570 05:44:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:59.570 05:44:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:59.570 05:44:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:59.570 05:44:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:59.570 05:44:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:59.570 05:44:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:28:59.570 05:44:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:28:59.570 05:44:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:59.570 05:44:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:59.570 05:44:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:59.570 05:44:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:59.570 05:44:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:59.570 05:44:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:28:59.570 05:44:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:59.570 05:44:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:59.570 05:44:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:59.571 05:44:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:59.571 05:44:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:59.571 05:44:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:59.571 05:44:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:28:59.571 05:44:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:59.571 05:44:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:28:59.571 05:44:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:59.571 05:44:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:59.571 05:44:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:59.571 05:44:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:59.571 05:44:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:59.571 05:44:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:59.571 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:59.571 05:44:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:59.571 05:44:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:59.571 05:44:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:59.571 05:44:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:59.571 05:44:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:59.571 05:44:59 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:28:59.571 05:44:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:28:59.571 05:44:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:28:59.571 05:44:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:28:59.571 05:44:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:28:59.571 05:44:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:59.571 05:44:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:59.571 05:44:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:59.571 05:44:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:59.571 05:44:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:59.571 05:44:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:59.571 05:44:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:59.571 05:44:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:59.571 05:44:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:59.571 05:44:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:59.571 05:44:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:28:59.571 05:44:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:06.144 05:45:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:06.144 05:45:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:29:06.144 05:45:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:06.144 05:45:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:06.144 05:45:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:06.144 05:45:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:06.144 05:45:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:06.144 05:45:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:29:06.144 05:45:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:06.144 05:45:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:29:06.144 05:45:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # local -ga e810 00:29:06.144 05:45:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:29:06.144 05:45:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:29:06.144 05:45:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:29:06.144 05:45:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:29:06.144 
05:45:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:06.144 05:45:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:06.144 05:45:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:06.144 05:45:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:06.144 05:45:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:06.144 05:45:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:06.144 05:45:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:06.144 05:45:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:06.144 05:45:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:06.144 05:45:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:06.144 05:45:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:06.144 05:45:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:06.144 05:45:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:06.144 05:45:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:06.144 05:45:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:06.144 05:45:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:06.144 05:45:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:06.144 05:45:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:06.144 05:45:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:06.144 05:45:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:06.144 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:06.144 05:45:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:06.144 05:45:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:06.144 05:45:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:06.144 05:45:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:06.144 05:45:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:06.144 05:45:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:06.144 05:45:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:06.144 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:06.144 05:45:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:06.144 05:45:04 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:06.144 05:45:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:06.144 05:45:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:06.144 05:45:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:06.144 05:45:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:06.144 05:45:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:06.144 05:45:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:06.144 05:45:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:06.144 05:45:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:06.144 05:45:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:06.144 05:45:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:06.144 05:45:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:06.144 05:45:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:06.144 05:45:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:06.144 05:45:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:06.144 Found net devices under 0000:af:00.0: cvl_0_0 00:29:06.144 05:45:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:06.144 05:45:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:06.144 05:45:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:06.144 05:45:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:06.144 05:45:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:06.144 05:45:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:06.144 05:45:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:06.144 05:45:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:06.144 05:45:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:06.144 Found net devices under 0000:af:00.1: cvl_0_1 00:29:06.144 05:45:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:06.144 05:45:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:06.144 05:45:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:29:06.144 05:45:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:06.144 05:45:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:06.144 05:45:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 
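The gather_supported_nvmf_pci_devs walk above resolves each matching PCI function to its kernel interface through sysfs; that is all the "Found net devices under ..." lines are. The lookup itself is two lines of bash array globbing (address below copied from this run):

    pci=0000:af:00.0
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)  # e.g. .../net/cvl_0_0
    pci_net_devs=("${pci_net_devs[@]##*/}")           # basename -> interface name
    echo "Found net devices under $pci: ${pci_net_devs[*]}"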
00:29:06.144 05:45:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:06.144 05:45:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:06.144 05:45:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:06.144 05:45:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:06.144 05:45:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:06.144 05:45:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:06.144 05:45:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:06.144 05:45:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:06.144 05:45:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:06.144 05:45:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:06.144 05:45:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:06.145 05:45:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:06.145 05:45:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:06.145 05:45:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:06.145 05:45:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:06.145 05:45:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:06.145 05:45:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:06.145 05:45:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:06.145 05:45:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:06.145 05:45:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:06.145 05:45:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:06.145 05:45:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:06.145 05:45:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:06.145 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:06.145 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.379 ms 00:29:06.145 00:29:06.145 --- 10.0.0.2 ping statistics --- 00:29:06.145 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:06.145 rtt min/avg/max/mdev = 0.379/0.379/0.379/0.000 ms 00:29:06.145 05:45:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:06.145 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:06.145 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.176 ms 00:29:06.145 00:29:06.145 --- 10.0.0.1 ping statistics --- 00:29:06.145 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:06.145 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:29:06.145 05:45:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:06.145 05:45:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:29:06.145 05:45:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:06.145 05:45:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:06.145 05:45:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:06.145 05:45:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:06.145 05:45:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:06.145 05:45:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:06.145 05:45:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:06.145 05:45:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:29:06.145 05:45:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:06.145 05:45:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:06.145 05:45:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:06.145 05:45:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=438273 00:29:06.145 05:45:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 438273 00:29:06.145 05:45:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:06.145 05:45:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 438273 ']' 00:29:06.145 05:45:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:06.145 05:45:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:06.145 05:45:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:06.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:06.145 05:45:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:06.145 05:45:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:06.145 [2024-12-13 05:45:05.258905] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
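The nvmf_tcp_init sequence whose ping checks just completed builds the phy loopback topology for this job: one port of the dual-port E810 NIC moves into a private network namespace and becomes the target side (10.0.0.2), its sibling stays in the root namespace as the initiator (10.0.0.1), and an iptables rule opens the NVMe/TCP port between them. Reassembled from the trace (the comment text on the rule is abbreviated here):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment SPDK_NVMF
    ping -c 1 10.0.0.2                                 # initiator -> target sanity check
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # and back

Because the target runs inside cvl_0_0_ns_spdk, every nvmf_tgt and rpc_cmd invocation that follows is wrapped in "ip netns exec cvl_0_0_ns_spdk", which is why NVMF_APP is prefixed with NVMF_TARGET_NS_CMD above.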
00:29:06.145 [2024-12-13 05:45:05.258953] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:06.145 [2024-12-13 05:45:05.338390] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:06.145 [2024-12-13 05:45:05.361576] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:06.145 [2024-12-13 05:45:05.361614] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:06.145 [2024-12-13 05:45:05.361622] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:06.145 [2024-12-13 05:45:05.361628] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:06.145 [2024-12-13 05:45:05.361633] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:06.145 [2024-12-13 05:45:05.362895] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:29:06.145 [2024-12-13 05:45:05.362980] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:29:06.145 [2024-12-13 05:45:05.362979] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:29:06.145 05:45:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:06.145 05:45:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:29:06.145 05:45:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:06.145 05:45:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:06.145 05:45:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:06.145 05:45:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:06.145 05:45:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:06.145 05:45:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:06.145 05:45:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:06.145 [2024-12-13 05:45:05.502364] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:06.145 05:45:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:06.145 05:45:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:06.145 05:45:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:06.145 05:45:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:06.145 Malloc0 00:29:06.145 05:45:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:06.145 05:45:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:06.145 05:45:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:06.145 05:45:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:29:06.145 05:45:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:06.145 05:45:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:06.145 05:45:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:06.145 05:45:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:06.145 05:45:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:06.145 05:45:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:06.145 05:45:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:06.145 05:45:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:06.145 [2024-12-13 05:45:05.561877] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:06.145 05:45:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:06.145 05:45:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:29:06.145 05:45:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:06.145 05:45:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:06.145 [2024-12-13 05:45:05.573822] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:06.145 05:45:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:06.145 05:45:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:29:06.145 05:45:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:06.145 05:45:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:06.145 Malloc1 00:29:06.145 05:45:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:06.145 05:45:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:29:06.145 05:45:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:06.145 05:45:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:06.145 05:45:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:06.145 05:45:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:29:06.145 05:45:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:06.145 05:45:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:06.145 05:45:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:06.145 05:45:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:29:06.145 05:45:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:06.145 05:45:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:06.145 05:45:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:06.145 05:45:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:29:06.145 05:45:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:06.145 05:45:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:06.146 05:45:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:06.146 05:45:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=438367 00:29:06.146 05:45:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:29:06.146 05:45:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:06.146 05:45:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 438367 /var/tmp/bdevperf.sock 00:29:06.146 05:45:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 438367 ']' 00:29:06.146 05:45:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:06.146 05:45:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:06.146 05:45:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:06.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
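The trace to this point shows the multicontroller target bootstrap: one TCP transport, two subsystems (cnode1 and cnode2) each backed by a 64 MiB malloc bdev and listening on both ports 4420 and 4421, and finally a bdevperf process launched with its own RPC socket so it can be driven independently of the target. A condensed replay of that sequence, driven directly with SPDK's rpc.py, might look like the sketch below; the RPC= path is an assumption, but every RPC name and flag is taken verbatim from the xtrace above:

  RPC=./scripts/rpc.py                                   # assumed location in the SPDK tree
  $RPC nvmf_create_transport -t tcp -o -u 8192           # TCP transport, flags exactly as traced
  $RPC bdev_malloc_create 64 512 -b Malloc0              # 64 MiB malloc bdev, 512-byte blocks
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  # cnode2 is set up the same way with Malloc1; bdevperf then gets a private socket
  # (-z makes it wait for a perform_tests RPC instead of starting I/O immediately):
  ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f &

Pointing bdevperf at /var/tmp/bdevperf.sock keeps its RPC traffic separate from the target's default /var/tmp/spdk.sock, which is why the subsequent rpc_cmd invocations carry an explicit -s.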
00:29:06.146 05:45:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:06.146 05:45:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:06.146 05:45:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:06.146 05:45:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:29:06.146 05:45:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:29:06.146 05:45:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:06.146 05:45:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:06.146 NVMe0n1 00:29:06.146 05:45:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:06.146 05:45:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:06.146 05:45:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:29:06.146 05:45:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:06.146 05:45:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:06.146 05:45:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:06.146 1 00:29:06.146 05:45:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:29:06.146 05:45:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:29:06.146 05:45:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:29:06.146 05:45:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:06.146 05:45:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:06.146 05:45:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:06.146 05:45:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:06.146 05:45:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:29:06.146 05:45:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:06.146 05:45:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:06.146 request: 00:29:06.146 { 00:29:06.146 "name": "NVMe0", 00:29:06.146 "trtype": "tcp", 00:29:06.146 "traddr": "10.0.0.2", 00:29:06.146 "adrfam": "ipv4", 00:29:06.146 "trsvcid": "4420", 00:29:06.146 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:29:06.146 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:29:06.146 "hostaddr": "10.0.0.1", 00:29:06.146 "prchk_reftag": false, 00:29:06.146 "prchk_guard": false, 00:29:06.146 "hdgst": false, 00:29:06.146 "ddgst": false, 00:29:06.146 "allow_unrecognized_csi": false, 00:29:06.146 "method": "bdev_nvme_attach_controller", 00:29:06.146 "req_id": 1 00:29:06.146 } 00:29:06.146 Got JSON-RPC error response 00:29:06.146 response: 00:29:06.146 { 00:29:06.146 "code": -114, 00:29:06.146 "message": "A controller named NVMe0 already exists with the specified network path" 00:29:06.146 } 00:29:06.146 05:45:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:06.146 05:45:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:29:06.146 05:45:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:06.146 05:45:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:06.146 05:45:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:06.146 05:45:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:29:06.146 05:45:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:29:06.146 05:45:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:29:06.146 05:45:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:06.146 05:45:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:06.146 05:45:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:06.146 05:45:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:06.146 05:45:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:29:06.146 05:45:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:06.146 05:45:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:06.146 request: 00:29:06.146 { 00:29:06.146 "name": "NVMe0", 00:29:06.146 "trtype": "tcp", 00:29:06.146 "traddr": "10.0.0.2", 00:29:06.146 "adrfam": "ipv4", 00:29:06.146 "trsvcid": "4420", 00:29:06.146 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:06.146 "hostaddr": "10.0.0.1", 00:29:06.146 "prchk_reftag": false, 00:29:06.146 "prchk_guard": false, 00:29:06.146 "hdgst": false, 00:29:06.146 "ddgst": false, 00:29:06.146 "allow_unrecognized_csi": false, 00:29:06.146 "method": "bdev_nvme_attach_controller", 00:29:06.146 "req_id": 1 00:29:06.146 } 00:29:06.146 Got JSON-RPC error response 00:29:06.146 response: 00:29:06.146 { 00:29:06.146 "code": -114, 00:29:06.146 "message": "A controller named NVMe0 already exists with the specified network path" 00:29:06.146 } 00:29:06.146 05:45:06 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:06.146 05:45:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:29:06.146 05:45:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:06.146 05:45:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:06.146 05:45:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:06.146 05:45:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:29:06.146 05:45:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:29:06.146 05:45:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:29:06.146 05:45:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:06.146 05:45:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:06.146 05:45:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:06.146 05:45:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:06.146 05:45:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:29:06.146 05:45:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:06.146 05:45:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:06.146 request: 00:29:06.146 { 00:29:06.146 "name": "NVMe0", 00:29:06.146 "trtype": "tcp", 00:29:06.146 "traddr": "10.0.0.2", 00:29:06.146 "adrfam": "ipv4", 00:29:06.146 "trsvcid": "4420", 00:29:06.146 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:06.146 "hostaddr": "10.0.0.1", 00:29:06.146 "prchk_reftag": false, 00:29:06.146 "prchk_guard": false, 00:29:06.146 "hdgst": false, 00:29:06.146 "ddgst": false, 00:29:06.146 "multipath": "disable", 00:29:06.146 "allow_unrecognized_csi": false, 00:29:06.146 "method": "bdev_nvme_attach_controller", 00:29:06.146 "req_id": 1 00:29:06.146 } 00:29:06.146 Got JSON-RPC error response 00:29:06.146 response: 00:29:06.146 { 00:29:06.146 "code": -114, 00:29:06.147 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:29:06.147 } 00:29:06.147 05:45:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:06.147 05:45:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:29:06.147 05:45:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:06.147 05:45:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:06.147 05:45:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:06.147 05:45:06 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:29:06.147 05:45:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:29:06.147 05:45:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:29:06.147 05:45:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:06.147 05:45:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:06.147 05:45:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:06.147 05:45:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:06.147 05:45:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:29:06.147 05:45:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:06.147 05:45:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:06.147 request: 00:29:06.147 { 00:29:06.147 "name": "NVMe0", 00:29:06.147 "trtype": "tcp", 00:29:06.147 "traddr": "10.0.0.2", 00:29:06.147 "adrfam": "ipv4", 00:29:06.147 "trsvcid": "4420", 00:29:06.147 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:06.147 "hostaddr": "10.0.0.1", 00:29:06.147 "prchk_reftag": false, 00:29:06.147 "prchk_guard": false, 00:29:06.147 "hdgst": false, 00:29:06.147 "ddgst": false, 00:29:06.147 "multipath": "failover", 00:29:06.147 "allow_unrecognized_csi": false, 00:29:06.147 "method": "bdev_nvme_attach_controller", 00:29:06.147 "req_id": 1 00:29:06.147 } 00:29:06.147 Got JSON-RPC error response 00:29:06.147 response: 00:29:06.147 { 00:29:06.147 "code": -114, 00:29:06.147 "message": "A controller named NVMe0 already exists with the specified network path" 00:29:06.147 } 00:29:06.147 05:45:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:06.147 05:45:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:29:06.147 05:45:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:06.147 05:45:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:06.147 05:45:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:06.147 05:45:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:06.147 05:45:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:06.147 05:45:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:06.405 NVMe0n1 00:29:06.405 05:45:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
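The NOT-wrapped attach attempts above all return JSON-RPC error -114 and exercise the duplicate-controller guard in bdev_nvme: reusing the name NVMe0 with a different hostnqn, with a different subsystem (cnode2), with multipath explicitly disabled, and with multipath=failover but an unchanged network path are all rejected. Only the final attach to the second listener on port 4421 succeeds, adding a second path to the existing controller rather than creating a new one. A hedged sketch of the same probe sequence against the bdevperf socket, with command lines copied from the trace and the || guards added only for illustration:

  RPC=./scripts/rpc.py                                   # assumed path, as above
  B="$RPC -s /var/tmp/bdevperf.sock"
  # First attach succeeds and creates bdev NVMe0n1:
  $B bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1
  # Each of these must fail with -114 ("A controller named NVMe0 already exists ..."):
  $B bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 \
      || echo 'rejected (-114), as expected'
  $B bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 || echo 'rejected (-114), as expected'
  $B bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable || echo 'rejected (-114), as expected'
  $B bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover || echo 'rejected (-114), as expected'
  # Attaching the same name to the other listener is allowed: it adds a path.
  $B bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1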
00:29:06.405 05:45:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:06.405 05:45:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:06.405 05:45:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:06.405 05:45:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:06.405 05:45:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:29:06.405 05:45:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:06.405 05:45:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:06.405 00:29:06.405 05:45:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:06.405 05:45:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:06.405 05:45:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:29:06.405 05:45:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:06.405 05:45:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:06.405 05:45:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:06.405 05:45:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:29:06.405 05:45:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:07.780 { 00:29:07.780 "results": [ 00:29:07.780 { 00:29:07.781 "job": "NVMe0n1", 00:29:07.781 "core_mask": "0x1", 00:29:07.781 "workload": "write", 00:29:07.781 "status": "finished", 00:29:07.781 "queue_depth": 128, 00:29:07.781 "io_size": 4096, 00:29:07.781 "runtime": 1.003054, 00:29:07.781 "iops": 25204.026901841775, 00:29:07.781 "mibps": 98.45323008531943, 00:29:07.781 "io_failed": 0, 00:29:07.781 "io_timeout": 0, 00:29:07.781 "avg_latency_us": 5071.93709109608, 00:29:07.781 "min_latency_us": 1466.7580952380952, 00:29:07.781 "max_latency_us": 8738.133333333333 00:29:07.781 } 00:29:07.781 ], 00:29:07.781 "core_count": 1 00:29:07.781 } 00:29:07.781 05:45:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:29:07.781 05:45:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.781 05:45:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:07.781 05:45:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.781 05:45:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:29:07.781 05:45:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 438367 00:29:07.781 05:45:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@954 -- # '[' -z 438367 ']' 00:29:07.781 05:45:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 438367 00:29:07.781 05:45:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:29:07.781 05:45:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:07.781 05:45:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 438367 00:29:07.781 05:45:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:07.781 05:45:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:07.781 05:45:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 438367' 00:29:07.781 killing process with pid 438367 00:29:07.781 05:45:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 438367 00:29:07.781 05:45:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 438367 00:29:07.781 05:45:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:07.781 05:45:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.781 05:45:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:07.781 05:45:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.781 05:45:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:29:07.781 05:45:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.781 05:45:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:07.781 05:45:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.781 05:45:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:29:07.781 05:45:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:07.781 05:45:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:29:07.781 05:45:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:29:07.781 05:45:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:29:07.781 05:45:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:29:07.781 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:29:07.781 [2024-12-13 05:45:05.677959] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:29:07.781 [2024-12-13 05:45:05.678003] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid438367 ] 00:29:07.781 [2024-12-13 05:45:05.752348] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:07.781 [2024-12-13 05:45:05.774595] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:29:07.781 [2024-12-13 05:45:06.309247] bdev.c:4957:bdev_name_add: *ERROR*: Bdev name cbb270ef-edbb-466c-b059-bfd31cb7251a already exists 00:29:07.781 [2024-12-13 05:45:06.309272] bdev.c:8177:bdev_register: *ERROR*: Unable to add uuid:cbb270ef-edbb-466c-b059-bfd31cb7251a alias for bdev NVMe1n1 00:29:07.781 [2024-12-13 05:45:06.309280] bdev_nvme.c:4666:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:29:07.781 Running I/O for 1 seconds... 00:29:07.781 25153.00 IOPS, 98.25 MiB/s 00:29:07.781 Latency(us) 00:29:07.781 [2024-12-13T04:45:07.796Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:07.781 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:29:07.781 NVMe0n1 : 1.00 25204.03 98.45 0.00 0.00 5071.94 1466.76 8738.13 00:29:07.781 [2024-12-13T04:45:07.796Z] =================================================================================================================== 00:29:07.781 [2024-12-13T04:45:07.796Z] Total : 25204.03 98.45 0.00 0.00 5071.94 1466.76 8738.13 00:29:07.781 Received shutdown signal, test time was about 1.000000 seconds 00:29:07.781 00:29:07.781 Latency(us) 00:29:07.781 [2024-12-13T04:45:07.796Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:07.781 [2024-12-13T04:45:07.796Z] =================================================================================================================== 00:29:07.781 [2024-12-13T04:45:07.796Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:07.781 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:29:07.781 05:45:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:07.781 05:45:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:29:07.781 05:45:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:29:07.781 05:45:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:07.781 05:45:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:29:07.781 05:45:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:07.781 05:45:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:29:07.781 05:45:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:07.781 05:45:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:07.781 rmmod nvme_tcp 00:29:07.781 rmmod nvme_fabrics 00:29:07.781 rmmod nvme_keyring 00:29:07.781 05:45:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:07.781 05:45:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:29:07.781 05:45:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:29:07.781 
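The perform_tests pass above reports 25204 IOPS of 4 KiB writes, which is self-consistent: at a queue depth of 128, Little's law gives 128 / 25204 s, or about 5.08 ms per I/O, matching the reported 5071.94 us average latency. Teardown then proceeds in the order traced: detach the aliased NVMe1 controller, kill bdevperf, delete both subsystems, clear the error-path trap, print and purge try.txt, and unload the NVMe kernel modules; the remaining lines stop the target itself. A hedged sketch of that order (the PIDs are the ones from this particular run):

  RPC=./scripts/rpc.py                           # assumed path, as above
  B="$RPC -s /var/tmp/bdevperf.sock"
  $B bdev_nvme_detach_controller NVMe1           # drop the second controller first
  kill -9 438367                                 # bdevperf pid from this run
  $RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  $RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2
  trap - SIGINT SIGTERM EXIT                     # error path no longer needed
  cat try.txt && rm -f try.txt                   # roughly what the pap helper does
  modprobe -v -r nvme-tcp                        # also drops nvme_fabrics/nvme_keyring
                                                 # once unused, per the rmmod lines above
  kill 438273                                    # finally stop the nvmf_tgt process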
05:45:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 438273 ']' 00:29:07.781 05:45:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 438273 00:29:07.781 05:45:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 438273 ']' 00:29:07.781 05:45:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 438273 00:29:07.781 05:45:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:29:07.781 05:45:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:07.781 05:45:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 438273 00:29:08.040 05:45:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:08.040 05:45:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:08.040 05:45:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 438273' 00:29:08.040 killing process with pid 438273 00:29:08.040 05:45:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 438273 00:29:08.040 05:45:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 438273 00:29:08.040 05:45:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:08.040 05:45:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:08.040 05:45:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:08.040 05:45:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:29:08.040 05:45:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:29:08.040 05:45:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:08.040 05:45:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:29:08.040 05:45:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:08.040 05:45:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:08.040 05:45:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:08.040 05:45:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:08.040 05:45:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:10.576 05:45:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:10.576 00:29:10.576 real 0m10.939s 00:29:10.576 user 0m11.812s 00:29:10.576 sys 0m5.154s 00:29:10.576 05:45:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:10.576 05:45:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:10.576 ************************************ 00:29:10.576 END TEST nvmf_multicontroller 00:29:10.576 ************************************ 00:29:10.576 05:45:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh 
--transport=tcp 00:29:10.576 05:45:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:10.576 05:45:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:10.576 05:45:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.576 ************************************ 00:29:10.576 START TEST nvmf_aer 00:29:10.576 ************************************ 00:29:10.576 05:45:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:29:10.576 * Looking for test storage... 00:29:10.576 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:10.576 05:45:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:10.576 05:45:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:10.576 05:45:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lcov --version 00:29:10.576 05:45:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:10.576 05:45:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:10.576 05:45:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:10.576 05:45:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:10.576 05:45:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:29:10.576 05:45:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:29:10.576 05:45:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:29:10.576 05:45:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:29:10.576 05:45:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:29:10.576 05:45:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:29:10.576 05:45:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:29:10.576 05:45:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:10.576 05:45:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:29:10.576 05:45:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:29:10.576 05:45:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:10.576 05:45:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:10.576 05:45:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:29:10.576 05:45:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:29:10.576 05:45:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:10.576 05:45:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:29:10.576 05:45:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:29:10.576 05:45:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:29:10.576 05:45:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:29:10.576 05:45:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:10.576 05:45:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:29:10.576 05:45:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:29:10.576 05:45:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:10.576 05:45:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:10.576 05:45:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:29:10.576 05:45:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:10.576 05:45:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:10.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:10.576 --rc genhtml_branch_coverage=1 00:29:10.576 --rc genhtml_function_coverage=1 00:29:10.576 --rc genhtml_legend=1 00:29:10.576 --rc geninfo_all_blocks=1 00:29:10.576 --rc geninfo_unexecuted_blocks=1 00:29:10.576 00:29:10.576 ' 00:29:10.576 05:45:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:10.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:10.576 --rc genhtml_branch_coverage=1 00:29:10.576 --rc genhtml_function_coverage=1 00:29:10.576 --rc genhtml_legend=1 00:29:10.576 --rc geninfo_all_blocks=1 00:29:10.576 --rc geninfo_unexecuted_blocks=1 00:29:10.576 00:29:10.576 ' 00:29:10.576 05:45:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:10.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:10.576 --rc genhtml_branch_coverage=1 00:29:10.576 --rc genhtml_function_coverage=1 00:29:10.576 --rc genhtml_legend=1 00:29:10.576 --rc geninfo_all_blocks=1 00:29:10.576 --rc geninfo_unexecuted_blocks=1 00:29:10.576 00:29:10.576 ' 00:29:10.576 05:45:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:10.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:10.576 --rc genhtml_branch_coverage=1 00:29:10.576 --rc genhtml_function_coverage=1 00:29:10.576 --rc genhtml_legend=1 00:29:10.576 --rc geninfo_all_blocks=1 00:29:10.576 --rc geninfo_unexecuted_blocks=1 00:29:10.576 00:29:10.576 ' 00:29:10.576 05:45:10 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:10.576 05:45:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:29:10.576 05:45:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:10.576 05:45:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:10.576 05:45:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:29:10.576 05:45:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:10.576 05:45:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:10.576 05:45:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:10.576 05:45:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:10.576 05:45:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:10.576 05:45:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:10.576 05:45:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:10.576 05:45:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:29:10.576 05:45:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:29:10.576 05:45:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:10.576 05:45:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:10.576 05:45:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:10.576 05:45:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:10.576 05:45:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:10.576 05:45:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:29:10.576 05:45:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:10.576 05:45:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:10.576 05:45:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:10.576 05:45:10 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:10.577 05:45:10 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:10.577 05:45:10 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:10.577 05:45:10 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:29:10.577 05:45:10 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:10.577 05:45:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:29:10.577 05:45:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:10.577 05:45:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:10.577 05:45:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:10.577 05:45:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:10.577 05:45:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:10.577 05:45:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:10.577 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:10.577 05:45:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:10.577 05:45:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:10.577 05:45:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:10.577 05:45:10 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:29:10.577 05:45:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:10.577 05:45:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:10.577 05:45:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:10.577 05:45:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:10.577 05:45:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:10.577 05:45:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:10.577 05:45:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:10.577 05:45:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:10.577 05:45:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:10.577 05:45:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:29:10.577 05:45:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:29:10.577 05:45:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:17.149 05:45:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:17.149 05:45:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:29:17.149 05:45:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:17.149 05:45:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:17.149 05:45:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:17.149 05:45:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:17.149 05:45:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:17.149 05:45:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:29:17.149 05:45:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:17.149 05:45:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:29:17.149 05:45:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:29:17.149 05:45:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:29:17.149 05:45:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:29:17.149 05:45:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:29:17.149 05:45:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:29:17.149 05:45:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:17.149 05:45:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:17.149 05:45:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:17.149 05:45:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:17.149 05:45:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:17.149 05:45:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:17.149 05:45:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:17.149 05:45:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:17.149 05:45:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:17.149 05:45:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:17.149 05:45:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:17.149 05:45:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:17.149 05:45:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:17.149 05:45:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:17.149 05:45:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:17.149 05:45:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:17.149 05:45:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # 
pci_devs=("${e810[@]}") 00:29:17.149 05:45:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:17.149 05:45:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:17.149 05:45:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:17.149 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:17.149 05:45:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:17.149 05:45:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:17.149 05:45:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:17.149 05:45:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:17.149 05:45:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:17.149 05:45:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:17.149 05:45:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:17.149 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:17.149 05:45:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:17.149 05:45:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:17.149 05:45:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:17.149 05:45:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:17.149 05:45:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:17.149 05:45:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:17.149 05:45:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:17.149 05:45:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:17.149 05:45:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:17.149 05:45:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:17.149 05:45:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:17.149 05:45:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:17.149 05:45:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:17.149 05:45:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:17.149 05:45:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:17.149 05:45:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:17.149 Found net devices under 0000:af:00.0: cvl_0_0 00:29:17.149 05:45:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:17.149 05:45:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:17.149 05:45:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:17.149 05:45:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:17.149 05:45:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:17.149 05:45:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:17.149 05:45:15 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:17.149 05:45:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:17.149 05:45:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:17.149 Found net devices under 0000:af:00.1: cvl_0_1 00:29:17.149 05:45:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:17.149 05:45:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:17.149 05:45:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:29:17.149 05:45:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:17.149 05:45:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:17.149 05:45:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:17.149 05:45:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:17.149 05:45:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:17.149 05:45:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:17.149 05:45:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:17.149 05:45:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:17.149 05:45:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:17.149 05:45:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:17.149 05:45:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:17.149 05:45:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:17.149 05:45:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:17.149 05:45:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:17.149 05:45:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:17.149 05:45:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:17.149 05:45:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:17.149 05:45:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:17.149 05:45:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:17.149 05:45:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:17.149 05:45:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:17.149 05:45:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:17.149 05:45:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:17.149 05:45:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:17.149 05:45:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:17.149 
05:45:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:17.149 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:17.149 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.305 ms 00:29:17.149 00:29:17.149 --- 10.0.0.2 ping statistics --- 00:29:17.149 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:17.149 rtt min/avg/max/mdev = 0.305/0.305/0.305/0.000 ms 00:29:17.149 05:45:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:17.149 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:17.149 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.185 ms 00:29:17.149 00:29:17.149 --- 10.0.0.1 ping statistics --- 00:29:17.149 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:17.149 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:29:17.149 05:45:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:17.149 05:45:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:29:17.149 05:45:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:17.149 05:45:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:17.150 05:45:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:17.150 05:45:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:17.150 05:45:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:17.150 05:45:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:17.150 05:45:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:17.150 05:45:16 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:29:17.150 05:45:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:17.150 05:45:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:17.150 05:45:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:17.150 05:45:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=442604 00:29:17.150 05:45:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 442604 00:29:17.150 05:45:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:17.150 05:45:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 442604 ']' 00:29:17.150 05:45:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:17.150 05:45:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:17.150 05:45:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:17.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:17.150 05:45:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:17.150 05:45:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:17.150 [2024-12-13 05:45:16.298916] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:29:17.150 [2024-12-13 05:45:16.298965] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:17.150 [2024-12-13 05:45:16.368718] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:17.150 [2024-12-13 05:45:16.393430] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:17.150 [2024-12-13 05:45:16.393474] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:17.150 [2024-12-13 05:45:16.393482] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:17.150 [2024-12-13 05:45:16.393488] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:17.150 [2024-12-13 05:45:16.393493] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:17.150 [2024-12-13 05:45:16.394968] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:29:17.150 [2024-12-13 05:45:16.395076] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:29:17.150 [2024-12-13 05:45:16.395182] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:29:17.150 [2024-12-13 05:45:16.395184] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:29:17.150 05:45:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:17.150 05:45:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:29:17.150 05:45:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:17.150 05:45:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:17.150 05:45:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:17.150 05:45:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:17.150 05:45:16 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:17.150 05:45:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.150 05:45:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:17.150 [2024-12-13 05:45:16.535580] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:17.150 05:45:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:17.150 05:45:16 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:29:17.150 05:45:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.150 05:45:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:17.150 Malloc0 00:29:17.150 05:45:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:17.150 05:45:16 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:29:17.150 05:45:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.150 05:45:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:17.150 05:45:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:29:17.150 05:45:16 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:17.150 05:45:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.150 05:45:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:17.150 05:45:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:17.150 05:45:16 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:17.150 05:45:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.150 05:45:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:17.150 [2024-12-13 05:45:16.607583] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:17.150 05:45:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:17.150 05:45:16 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:29:17.150 05:45:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.150 05:45:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:17.150 [ 00:29:17.150 { 00:29:17.150 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:17.150 "subtype": "Discovery", 00:29:17.150 "listen_addresses": [], 00:29:17.150 "allow_any_host": true, 00:29:17.150 "hosts": [] 00:29:17.150 }, 00:29:17.150 { 00:29:17.150 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:17.150 "subtype": "NVMe", 00:29:17.150 "listen_addresses": [ 00:29:17.150 { 00:29:17.150 "trtype": "TCP", 00:29:17.150 "adrfam": "IPv4", 00:29:17.150 "traddr": "10.0.0.2", 00:29:17.150 "trsvcid": "4420" 00:29:17.150 } 00:29:17.150 ], 00:29:17.150 "allow_any_host": true, 00:29:17.150 "hosts": [], 00:29:17.150 "serial_number": "SPDK00000000000001", 00:29:17.150 "model_number": "SPDK bdev Controller", 00:29:17.150 "max_namespaces": 2, 00:29:17.150 "min_cntlid": 1, 00:29:17.150 "max_cntlid": 65519, 00:29:17.150 "namespaces": [ 00:29:17.150 { 00:29:17.150 "nsid": 1, 00:29:17.150 "bdev_name": "Malloc0", 00:29:17.150 "name": "Malloc0", 00:29:17.150 "nguid": "76B697B9E9CB4DB099480E553768A6FC", 00:29:17.150 "uuid": "76b697b9-e9cb-4db0-9948-0e553768a6fc" 00:29:17.150 } 00:29:17.150 ] 00:29:17.150 } 00:29:17.150 ] 00:29:17.150 05:45:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:17.150 05:45:16 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:29:17.150 05:45:16 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:29:17.150 05:45:16 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=442632 00:29:17.150 05:45:16 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:29:17.150 05:45:16 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:29:17.150 05:45:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:29:17.150 05:45:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:29:17.150 05:45:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:29:17.150 05:45:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:29:17.150 05:45:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:29:17.150 05:45:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:17.150 05:45:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:29:17.150 05:45:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:29:17.150 05:45:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:29:17.150 05:45:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:17.150 05:45:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:17.150 05:45:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:29:17.150 05:45:16 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:29:17.150 05:45:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.150 05:45:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:17.150 Malloc1 00:29:17.150 05:45:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:17.150 05:45:16 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:29:17.150 05:45:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.150 05:45:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:17.150 05:45:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:17.150 05:45:16 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:29:17.150 05:45:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.150 05:45:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:17.150 Asynchronous Event Request test 00:29:17.150 Attaching to 10.0.0.2 00:29:17.150 Attached to 10.0.0.2 00:29:17.150 Registering asynchronous event callbacks... 00:29:17.150 Starting namespace attribute notice tests for all controllers... 00:29:17.150 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:29:17.150 aer_cb - Changed Namespace 00:29:17.150 Cleaning up... 
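The AER exercise above reduces to the RPC sequence below. This is a sketch: rpc_cmd in the harness is assumed to wrap scripts/rpc.py against the target's /var/tmp/spdk.sock, and the flags are copied verbatim from the trace (-n 2 and -t on the aer tool are taken as-is from the invocation above).

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 --name Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # Start the AER listener in the background; the harness waits for the -t
  # touch file before proceeding, so callbacks are registered first.
  test/nvme/aer/aer -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
      -n 2 -t /tmp/aer_touch_file &
  # Hot-adding a second namespace is what fires the namespace-attribute AER.
  rpc.py bdev_malloc_create 64 4096 --name Malloc1
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2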
00:29:17.150 [ 00:29:17.150 { 00:29:17.150 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:17.150 "subtype": "Discovery", 00:29:17.150 "listen_addresses": [], 00:29:17.150 "allow_any_host": true, 00:29:17.150 "hosts": [] 00:29:17.150 }, 00:29:17.150 { 00:29:17.150 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:17.150 "subtype": "NVMe", 00:29:17.150 "listen_addresses": [ 00:29:17.150 { 00:29:17.150 "trtype": "TCP", 00:29:17.150 "adrfam": "IPv4", 00:29:17.150 "traddr": "10.0.0.2", 00:29:17.150 "trsvcid": "4420" 00:29:17.150 } 00:29:17.150 ], 00:29:17.150 "allow_any_host": true, 00:29:17.150 "hosts": [], 00:29:17.150 "serial_number": "SPDK00000000000001", 00:29:17.150 "model_number": "SPDK bdev Controller", 00:29:17.150 "max_namespaces": 2, 00:29:17.150 "min_cntlid": 1, 00:29:17.150 "max_cntlid": 65519, 00:29:17.150 "namespaces": [ 00:29:17.150 { 00:29:17.150 "nsid": 1, 00:29:17.150 "bdev_name": "Malloc0", 00:29:17.150 "name": "Malloc0", 00:29:17.150 "nguid": "76B697B9E9CB4DB099480E553768A6FC", 00:29:17.150 "uuid": "76b697b9-e9cb-4db0-9948-0e553768a6fc" 00:29:17.150 }, 00:29:17.150 { 00:29:17.150 "nsid": 2, 00:29:17.150 "bdev_name": "Malloc1", 00:29:17.150 "name": "Malloc1", 00:29:17.150 "nguid": "25C4DC3F904640DF82B5E283DDAE7752", 00:29:17.151 "uuid": "25c4dc3f-9046-40df-82b5-e283ddae7752" 00:29:17.151 } 00:29:17.151 ] 00:29:17.151 } 00:29:17.151 ] 00:29:17.151 05:45:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:17.151 05:45:16 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 442632 00:29:17.151 05:45:16 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:29:17.151 05:45:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.151 05:45:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:17.151 05:45:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:17.151 05:45:16 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:29:17.151 05:45:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.151 05:45:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:17.151 05:45:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:17.151 05:45:16 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:17.151 05:45:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.151 05:45:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:17.151 05:45:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:17.151 05:45:16 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:29:17.151 05:45:16 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:29:17.151 05:45:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:17.151 05:45:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:29:17.151 05:45:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:17.151 05:45:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:29:17.151 05:45:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:17.151 05:45:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:17.151 rmmod 
nvme_tcp 00:29:17.151 rmmod nvme_fabrics 00:29:17.151 rmmod nvme_keyring 00:29:17.151 05:45:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:17.151 05:45:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:29:17.151 05:45:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:29:17.151 05:45:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 442604 ']' 00:29:17.151 05:45:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 442604 00:29:17.151 05:45:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 442604 ']' 00:29:17.151 05:45:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 442604 00:29:17.151 05:45:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:29:17.151 05:45:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:17.151 05:45:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 442604 00:29:17.151 05:45:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:17.151 05:45:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:17.151 05:45:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 442604' 00:29:17.151 killing process with pid 442604 00:29:17.151 05:45:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 442604 00:29:17.151 05:45:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 442604 00:29:17.410 05:45:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:17.410 05:45:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:17.410 05:45:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:17.410 05:45:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:29:17.410 05:45:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:17.410 05:45:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:29:17.410 05:45:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:29:17.410 05:45:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:17.410 05:45:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:17.410 05:45:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:17.410 05:45:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:17.410 05:45:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:19.314 05:45:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:19.574 00:29:19.574 real 0m9.174s 00:29:19.574 user 0m5.043s 00:29:19.574 sys 0m4.878s 00:29:19.574 05:45:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:19.574 05:45:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:19.574 ************************************ 00:29:19.574 END TEST nvmf_aer 00:29:19.574 ************************************ 00:29:19.574 05:45:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:29:19.574 05:45:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:19.574 05:45:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:19.574 05:45:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.574 ************************************ 00:29:19.574 START TEST nvmf_async_init 00:29:19.574 ************************************ 00:29:19.574 05:45:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:29:19.574 * Looking for test storage... 00:29:19.574 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:19.574 05:45:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:19.574 05:45:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lcov --version 00:29:19.574 05:45:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:19.574 05:45:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:19.574 05:45:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:19.574 05:45:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:19.574 05:45:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:19.574 05:45:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:29:19.574 05:45:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:29:19.574 05:45:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:29:19.574 05:45:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:29:19.574 05:45:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:29:19.574 05:45:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:29:19.574 05:45:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:29:19.574 05:45:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:19.574 05:45:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:29:19.574 05:45:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:29:19.574 05:45:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:19.574 05:45:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:19.574 05:45:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:29:19.574 05:45:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:29:19.574 05:45:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:19.574 05:45:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:29:19.574 05:45:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:29:19.834 05:45:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:29:19.834 05:45:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:29:19.834 05:45:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:19.834 05:45:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:29:19.834 05:45:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:29:19.834 05:45:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:19.834 05:45:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:19.834 05:45:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:29:19.834 05:45:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:19.834 05:45:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:19.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:19.834 --rc genhtml_branch_coverage=1 00:29:19.834 --rc genhtml_function_coverage=1 00:29:19.834 --rc genhtml_legend=1 00:29:19.834 --rc geninfo_all_blocks=1 00:29:19.834 --rc geninfo_unexecuted_blocks=1 00:29:19.834 00:29:19.834 ' 00:29:19.834 05:45:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:19.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:19.834 --rc genhtml_branch_coverage=1 00:29:19.834 --rc genhtml_function_coverage=1 00:29:19.834 --rc genhtml_legend=1 00:29:19.834 --rc geninfo_all_blocks=1 00:29:19.834 --rc geninfo_unexecuted_blocks=1 00:29:19.834 00:29:19.834 ' 00:29:19.834 05:45:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:19.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:19.834 --rc genhtml_branch_coverage=1 00:29:19.834 --rc genhtml_function_coverage=1 00:29:19.834 --rc genhtml_legend=1 00:29:19.834 --rc geninfo_all_blocks=1 00:29:19.834 --rc geninfo_unexecuted_blocks=1 00:29:19.834 00:29:19.834 ' 00:29:19.834 05:45:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:19.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:19.834 --rc genhtml_branch_coverage=1 00:29:19.834 --rc genhtml_function_coverage=1 00:29:19.834 --rc genhtml_legend=1 00:29:19.834 --rc geninfo_all_blocks=1 00:29:19.834 --rc geninfo_unexecuted_blocks=1 00:29:19.834 00:29:19.834 ' 00:29:19.834 05:45:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:19.834 05:45:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:29:19.834 05:45:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:19.834 05:45:19 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:19.834 05:45:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:19.834 05:45:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:19.834 05:45:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:19.834 05:45:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:19.834 05:45:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:19.834 05:45:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:19.834 05:45:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:19.834 05:45:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:19.834 05:45:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:29:19.834 05:45:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:29:19.834 05:45:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:19.834 05:45:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:19.834 05:45:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:19.834 05:45:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:19.834 05:45:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:19.834 05:45:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:29:19.834 05:45:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:19.834 05:45:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:19.834 05:45:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:19.834 05:45:19 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:19.834 05:45:19 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:19.834 05:45:19 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:19.834 05:45:19 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:29:19.834 05:45:19 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:19.834 05:45:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:29:19.834 05:45:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:19.834 05:45:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:19.834 05:45:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:19.834 05:45:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:19.834 05:45:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:19.834 05:45:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:19.834 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:19.834 05:45:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:19.834 05:45:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:19.834 05:45:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:19.834 05:45:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:29:19.834 05:45:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:29:19.834 05:45:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:29:19.834 05:45:19 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:29:19.834 05:45:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:29:19.834 05:45:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:29:19.834 05:45:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=59a0b12b1aa241fcbe3fbcc95cebc140 00:29:19.834 05:45:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:29:19.834 05:45:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:19.834 05:45:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:19.834 05:45:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:19.834 05:45:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:19.834 05:45:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:19.834 05:45:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:19.835 05:45:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:19.835 05:45:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:19.835 05:45:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:19.835 05:45:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:19.835 05:45:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:29:19.835 05:45:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:26.408 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:26.408 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:29:26.408 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:26.408 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:26.408 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:26.408 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:26.408 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:26.408 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:29:26.408 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:26.408 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:29:26.408 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:29:26.408 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:29:26.408 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:29:26.408 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:29:26.408 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:29:26.408 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:26.408 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:26.408 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:26.408 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:26.408 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:26.408 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:26.408 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:26.408 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:26.408 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:26.408 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:26.408 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:26.408 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:26.408 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:26.408 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:26.408 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:26.408 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:26.408 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:26.408 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:26.408 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:26.408 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:26.408 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:26.408 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:26.408 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:26.408 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:26.408 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:26.408 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:26.408 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:26.408 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:26.408 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:26.408 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:26.408 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:26.408 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:26.408 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:26.408 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:26.408 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:26.408 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:26.408 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:26.408 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:26.408 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:26.408 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:26.408 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:26.408 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:26.408 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:26.409 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:26.409 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:26.409 Found net devices under 0000:af:00.0: cvl_0_0 00:29:26.409 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:26.409 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:26.409 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:26.409 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:26.409 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:26.409 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:26.409 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:26.409 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:26.409 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:26.409 Found net devices under 0000:af:00.1: cvl_0_1 00:29:26.409 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:26.409 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:26.409 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:29:26.409 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:26.409 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:26.409 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:26.409 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:26.409 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:26.409 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:26.409 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:26.409 05:45:25 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:26.409 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:26.409 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:26.409 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:26.409 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:26.409 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:26.409 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:26.409 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:26.409 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:26.409 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:26.409 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:26.409 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:26.409 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:26.409 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:26.409 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:26.409 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:26.409 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:26.409 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:26.409 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:26.409 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:26.409 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.368 ms 00:29:26.409 00:29:26.409 --- 10.0.0.2 ping statistics --- 00:29:26.409 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:26.409 rtt min/avg/max/mdev = 0.368/0.368/0.368/0.000 ms 00:29:26.409 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:26.409 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:26.409 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.175 ms 00:29:26.409 00:29:26.409 --- 10.0.0.1 ping statistics --- 00:29:26.409 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:26.409 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:29:26.409 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:26.409 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:29:26.409 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:26.409 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:26.409 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:26.409 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:26.409 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:26.409 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:26.409 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:26.409 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:29:26.409 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:26.409 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:26.409 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:26.409 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=446148 00:29:26.409 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:29:26.409 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 446148 00:29:26.409 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 446148 ']' 00:29:26.409 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:26.409 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:26.409 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:26.409 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:26.409 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:26.409 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:26.409 [2024-12-13 05:45:25.665907] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:29:26.409 [2024-12-13 05:45:25.665954] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:26.409 [2024-12-13 05:45:25.743350] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:26.409 [2024-12-13 05:45:25.765727] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:26.409 [2024-12-13 05:45:25.765763] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:26.409 [2024-12-13 05:45:25.765770] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:26.409 [2024-12-13 05:45:25.765776] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:26.409 [2024-12-13 05:45:25.765781] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:26.409 [2024-12-13 05:45:25.766284] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:29:26.409 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:26.409 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:29:26.409 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:26.409 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:26.409 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:26.409 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:26.409 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:29:26.409 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.409 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:26.409 [2024-12-13 05:45:25.898138] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:26.409 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.409 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:29:26.409 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.409 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:26.409 null0 00:29:26.409 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.409 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:29:26.409 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.409 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:26.409 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.409 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:29:26.409 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:29:26.409 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:26.409 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.409 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 59a0b12b1aa241fcbe3fbcc95cebc140 00:29:26.409 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.409 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:26.409 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.409 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:26.409 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.409 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:26.410 [2024-12-13 05:45:25.946385] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:26.410 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.410 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:29:26.410 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.410 05:45:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:26.410 nvme0n1 00:29:26.410 05:45:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.410 05:45:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:29:26.410 05:45:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.410 05:45:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:26.410 [ 00:29:26.410 { 00:29:26.410 "name": "nvme0n1", 00:29:26.410 "aliases": [ 00:29:26.410 "59a0b12b-1aa2-41fc-be3f-bcc95cebc140" 00:29:26.410 ], 00:29:26.410 "product_name": "NVMe disk", 00:29:26.410 "block_size": 512, 00:29:26.410 "num_blocks": 2097152, 00:29:26.410 "uuid": "59a0b12b-1aa2-41fc-be3f-bcc95cebc140", 00:29:26.410 "numa_id": 1, 00:29:26.410 "assigned_rate_limits": { 00:29:26.410 "rw_ios_per_sec": 0, 00:29:26.410 "rw_mbytes_per_sec": 0, 00:29:26.410 "r_mbytes_per_sec": 0, 00:29:26.410 "w_mbytes_per_sec": 0 00:29:26.410 }, 00:29:26.410 "claimed": false, 00:29:26.410 "zoned": false, 00:29:26.410 "supported_io_types": { 00:29:26.410 "read": true, 00:29:26.410 "write": true, 00:29:26.410 "unmap": false, 00:29:26.410 "flush": true, 00:29:26.410 "reset": true, 00:29:26.410 "nvme_admin": true, 00:29:26.410 "nvme_io": true, 00:29:26.410 "nvme_io_md": false, 00:29:26.410 "write_zeroes": true, 00:29:26.410 "zcopy": false, 00:29:26.410 "get_zone_info": false, 00:29:26.410 "zone_management": false, 00:29:26.410 "zone_append": false, 00:29:26.410 "compare": true, 00:29:26.410 "compare_and_write": true, 00:29:26.410 "abort": true, 00:29:26.410 "seek_hole": false, 00:29:26.410 "seek_data": false, 00:29:26.410 "copy": true, 00:29:26.410 "nvme_iov_md": false 00:29:26.410 }, 00:29:26.410 
"memory_domains": [ 00:29:26.410 { 00:29:26.410 "dma_device_id": "system", 00:29:26.410 "dma_device_type": 1 00:29:26.410 } 00:29:26.410 ], 00:29:26.410 "driver_specific": { 00:29:26.410 "nvme": [ 00:29:26.410 { 00:29:26.410 "trid": { 00:29:26.410 "trtype": "TCP", 00:29:26.410 "adrfam": "IPv4", 00:29:26.410 "traddr": "10.0.0.2", 00:29:26.410 "trsvcid": "4420", 00:29:26.410 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:26.410 }, 00:29:26.410 "ctrlr_data": { 00:29:26.410 "cntlid": 1, 00:29:26.410 "vendor_id": "0x8086", 00:29:26.410 "model_number": "SPDK bdev Controller", 00:29:26.410 "serial_number": "00000000000000000000", 00:29:26.410 "firmware_revision": "25.01", 00:29:26.410 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:26.410 "oacs": { 00:29:26.410 "security": 0, 00:29:26.410 "format": 0, 00:29:26.410 "firmware": 0, 00:29:26.410 "ns_manage": 0 00:29:26.410 }, 00:29:26.410 "multi_ctrlr": true, 00:29:26.410 "ana_reporting": false 00:29:26.410 }, 00:29:26.410 "vs": { 00:29:26.410 "nvme_version": "1.3" 00:29:26.410 }, 00:29:26.410 "ns_data": { 00:29:26.410 "id": 1, 00:29:26.410 "can_share": true 00:29:26.410 } 00:29:26.410 } 00:29:26.410 ], 00:29:26.410 "mp_policy": "active_passive" 00:29:26.410 } 00:29:26.410 } 00:29:26.410 ] 00:29:26.410 05:45:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.410 05:45:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:29:26.410 05:45:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.410 05:45:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:26.410 [2024-12-13 05:45:26.206946] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:29:26.410 [2024-12-13 05:45:26.207001] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16a4230 (9): Bad file descriptor 00:29:26.410 [2024-12-13 05:45:26.340528] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:29:26.410 05:45:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.410 05:45:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:29:26.410 05:45:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.410 05:45:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:26.410 [ 00:29:26.410 { 00:29:26.410 "name": "nvme0n1", 00:29:26.410 "aliases": [ 00:29:26.410 "59a0b12b-1aa2-41fc-be3f-bcc95cebc140" 00:29:26.410 ], 00:29:26.410 "product_name": "NVMe disk", 00:29:26.410 "block_size": 512, 00:29:26.410 "num_blocks": 2097152, 00:29:26.410 "uuid": "59a0b12b-1aa2-41fc-be3f-bcc95cebc140", 00:29:26.410 "numa_id": 1, 00:29:26.410 "assigned_rate_limits": { 00:29:26.410 "rw_ios_per_sec": 0, 00:29:26.410 "rw_mbytes_per_sec": 0, 00:29:26.410 "r_mbytes_per_sec": 0, 00:29:26.410 "w_mbytes_per_sec": 0 00:29:26.410 }, 00:29:26.410 "claimed": false, 00:29:26.410 "zoned": false, 00:29:26.410 "supported_io_types": { 00:29:26.410 "read": true, 00:29:26.410 "write": true, 00:29:26.410 "unmap": false, 00:29:26.410 "flush": true, 00:29:26.410 "reset": true, 00:29:26.410 "nvme_admin": true, 00:29:26.410 "nvme_io": true, 00:29:26.410 "nvme_io_md": false, 00:29:26.410 "write_zeroes": true, 00:29:26.410 "zcopy": false, 00:29:26.410 "get_zone_info": false, 00:29:26.410 "zone_management": false, 00:29:26.410 "zone_append": false, 00:29:26.410 "compare": true, 00:29:26.410 "compare_and_write": true, 00:29:26.410 "abort": true, 00:29:26.410 "seek_hole": false, 00:29:26.410 "seek_data": false, 00:29:26.410 "copy": true, 00:29:26.410 "nvme_iov_md": false 00:29:26.410 }, 00:29:26.410 "memory_domains": [ 00:29:26.410 { 00:29:26.410 "dma_device_id": "system", 00:29:26.410 "dma_device_type": 1 00:29:26.410 } 00:29:26.410 ], 00:29:26.410 "driver_specific": { 00:29:26.410 "nvme": [ 00:29:26.410 { 00:29:26.410 "trid": { 00:29:26.410 "trtype": "TCP", 00:29:26.410 "adrfam": "IPv4", 00:29:26.410 "traddr": "10.0.0.2", 00:29:26.410 "trsvcid": "4420", 00:29:26.410 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:26.410 }, 00:29:26.410 "ctrlr_data": { 00:29:26.410 "cntlid": 2, 00:29:26.410 "vendor_id": "0x8086", 00:29:26.410 "model_number": "SPDK bdev Controller", 00:29:26.410 "serial_number": "00000000000000000000", 00:29:26.410 "firmware_revision": "25.01", 00:29:26.410 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:26.410 "oacs": { 00:29:26.410 "security": 0, 00:29:26.410 "format": 0, 00:29:26.410 "firmware": 0, 00:29:26.410 "ns_manage": 0 00:29:26.410 }, 00:29:26.410 "multi_ctrlr": true, 00:29:26.410 "ana_reporting": false 00:29:26.410 }, 00:29:26.410 "vs": { 00:29:26.410 "nvme_version": "1.3" 00:29:26.410 }, 00:29:26.410 "ns_data": { 00:29:26.410 "id": 1, 00:29:26.410 "can_share": true 00:29:26.410 } 00:29:26.410 } 00:29:26.410 ], 00:29:26.410 "mp_policy": "active_passive" 00:29:26.410 } 00:29:26.410 } 00:29:26.410 ] 00:29:26.410 05:45:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.410 05:45:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:26.410 05:45:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.410 05:45:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:26.410 05:45:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
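The 59a0b12b... value threading through these dumps is the -g argument handed to nvmf_subsystem_add_ns earlier, itself a uuidgen output with hyphens stripped; the attached bdev reports it back as both nguid and hyphenated uuid. A round-trip check, sketched under the same rpc.py/jq assumptions and with a controller still attached:

  guid=$(uuidgen | tr -d -)                 # e.g. 59a0b12b1aa241fcbe3fbcc95cebc140
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g "$guid"
  # After attaching, the bdev's uuid is the same value with hyphens re-inserted.
  rpc.py bdev_get_bdevs -b nvme0n1 | jq -r '.[0].uuid | gsub("-"; "")'   # prints $guid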
00:29:26.410 05:45:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:29:26.410 05:45:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.LTRvMJspDu 00:29:26.410 05:45:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:29:26.410 05:45:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.LTRvMJspDu 00:29:26.410 05:45:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.LTRvMJspDu 00:29:26.410 05:45:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.410 05:45:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:26.410 05:45:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.410 05:45:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:29:26.410 05:45:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.410 05:45:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:26.410 05:45:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.410 05:45:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:29:26.410 05:45:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.410 05:45:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:26.410 [2024-12-13 05:45:26.415566] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:29:26.410 [2024-12-13 05:45:26.415680] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:26.410 05:45:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.410 05:45:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:29:26.410 05:45:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.410 05:45:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:26.670 05:45:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.670 05:45:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:29:26.670 05:45:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.670 05:45:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:26.670 [2024-12-13 05:45:26.435629] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:26.670 nvme0n1 00:29:26.670 05:45:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.670 05:45:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 
00:29:26.670 05:45:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.670 05:45:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:26.670 [ 00:29:26.670 { 00:29:26.670 "name": "nvme0n1", 00:29:26.670 "aliases": [ 00:29:26.670 "59a0b12b-1aa2-41fc-be3f-bcc95cebc140" 00:29:26.670 ], 00:29:26.670 "product_name": "NVMe disk", 00:29:26.670 "block_size": 512, 00:29:26.670 "num_blocks": 2097152, 00:29:26.670 "uuid": "59a0b12b-1aa2-41fc-be3f-bcc95cebc140", 00:29:26.670 "numa_id": 1, 00:29:26.670 "assigned_rate_limits": { 00:29:26.670 "rw_ios_per_sec": 0, 00:29:26.670 "rw_mbytes_per_sec": 0, 00:29:26.670 "r_mbytes_per_sec": 0, 00:29:26.670 "w_mbytes_per_sec": 0 00:29:26.670 }, 00:29:26.670 "claimed": false, 00:29:26.670 "zoned": false, 00:29:26.670 "supported_io_types": { 00:29:26.670 "read": true, 00:29:26.670 "write": true, 00:29:26.670 "unmap": false, 00:29:26.670 "flush": true, 00:29:26.670 "reset": true, 00:29:26.670 "nvme_admin": true, 00:29:26.670 "nvme_io": true, 00:29:26.670 "nvme_io_md": false, 00:29:26.670 "write_zeroes": true, 00:29:26.670 "zcopy": false, 00:29:26.670 "get_zone_info": false, 00:29:26.670 "zone_management": false, 00:29:26.670 "zone_append": false, 00:29:26.670 "compare": true, 00:29:26.670 "compare_and_write": true, 00:29:26.670 "abort": true, 00:29:26.670 "seek_hole": false, 00:29:26.670 "seek_data": false, 00:29:26.670 "copy": true, 00:29:26.670 "nvme_iov_md": false 00:29:26.670 }, 00:29:26.670 "memory_domains": [ 00:29:26.670 { 00:29:26.670 "dma_device_id": "system", 00:29:26.670 "dma_device_type": 1 00:29:26.670 } 00:29:26.670 ], 00:29:26.670 "driver_specific": { 00:29:26.670 "nvme": [ 00:29:26.670 { 00:29:26.670 "trid": { 00:29:26.670 "trtype": "TCP", 00:29:26.670 "adrfam": "IPv4", 00:29:26.670 "traddr": "10.0.0.2", 00:29:26.670 "trsvcid": "4421", 00:29:26.670 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:26.670 }, 00:29:26.670 "ctrlr_data": { 00:29:26.670 "cntlid": 3, 00:29:26.670 "vendor_id": "0x8086", 00:29:26.670 "model_number": "SPDK bdev Controller", 00:29:26.670 "serial_number": "00000000000000000000", 00:29:26.670 "firmware_revision": "25.01", 00:29:26.670 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:26.670 "oacs": { 00:29:26.670 "security": 0, 00:29:26.670 "format": 0, 00:29:26.670 "firmware": 0, 00:29:26.670 "ns_manage": 0 00:29:26.670 }, 00:29:26.670 "multi_ctrlr": true, 00:29:26.670 "ana_reporting": false 00:29:26.670 }, 00:29:26.670 "vs": { 00:29:26.670 "nvme_version": "1.3" 00:29:26.670 }, 00:29:26.670 "ns_data": { 00:29:26.670 "id": 1, 00:29:26.670 "can_share": true 00:29:26.670 } 00:29:26.670 } 00:29:26.670 ], 00:29:26.670 "mp_policy": "active_passive" 00:29:26.670 } 00:29:26.670 } 00:29:26.670 ] 00:29:26.670 05:45:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.670 05:45:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:26.670 05:45:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.670 05:45:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:26.670 05:45:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.670 05:45:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.LTRvMJspDu 00:29:26.670 05:45:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 
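The async_init.sh@53-@76 trace above is the TLS leg of the test: write a PSK interchange string to a temp file, register it as keyring entry key0, pin cnode0 to a single allowed host, open a --secure-channel listener on 4421, reattach with --psk, and confirm via the second bdev dump that the new connection carries cntlid 3 on trsvcid 4421 (both listener and initiator log that TLS support is experimental). Condensed, with the redirect into the key file made explicit since xtrace does not show redirections:

  key_path=$(mktemp)
  echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > "$key_path"
  chmod 0600 "$key_path"
  rpc_cmd keyring_file_add_key key0 "$key_path"
  rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
  rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
      -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0
  rm -f "$key_path"   # @76, after the second detach; the key is the documentation example, not a secret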
00:29:26.670 05:45:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:29:26.670 05:45:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:26.670 05:45:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:29:26.670 05:45:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:26.670 05:45:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:29:26.670 05:45:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:26.670 05:45:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:26.670 rmmod nvme_tcp 00:29:26.670 rmmod nvme_fabrics 00:29:26.670 rmmod nvme_keyring 00:29:26.670 05:45:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:26.670 05:45:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:29:26.670 05:45:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:29:26.670 05:45:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 446148 ']' 00:29:26.670 05:45:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 446148 00:29:26.670 05:45:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 446148 ']' 00:29:26.670 05:45:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 446148 00:29:26.670 05:45:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:29:26.670 05:45:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:26.670 05:45:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 446148 00:29:26.670 05:45:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:26.670 05:45:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:26.670 05:45:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 446148' 00:29:26.670 killing process with pid 446148 00:29:26.670 05:45:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 446148 00:29:26.670 05:45:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 446148 00:29:26.930 05:45:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:26.930 05:45:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:26.930 05:45:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:26.930 05:45:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:29:26.930 05:45:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:29:26.930 05:45:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:26.930 05:45:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:29:26.930 05:45:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:26.930 05:45:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:26.930 05:45:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:26.930 
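nvmftestfini above is the standard tcp teardown: sync, unload the initiator-side modules (the bare rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines are modprobe's verbose output), kill the target by pid after checking it is not sudo, restore iptables minus the SPDK-tagged rules, and remove the namespace, which completes a few lines below. A condensed rendering; the ip netns delete line is an assumption, since the trace hides _remove_spdk_ns behind xtrace_disable:

  sync
  modprobe -v -r nvme-tcp                                # also drags nvme_fabrics/nvme_keyring out
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid" && wait "$nvmfpid"                     # 446148 in this run
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # strip only the comment-tagged rules
  ip netns delete cvl_0_0_ns_spdk 2> /dev/null || true   # assumed body of _remove_spdk_ns
  ip -4 addr flush cvl_0_1                               # @303, visible just below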
05:45:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:26.930 05:45:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:29.467 05:45:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:29.467 00:29:29.467 real 0m9.451s 00:29:29.467 user 0m3.004s 00:29:29.467 sys 0m4.778s 00:29:29.467 05:45:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:29.467 05:45:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:29.467 ************************************ 00:29:29.467 END TEST nvmf_async_init 00:29:29.467 ************************************ 00:29:29.467 05:45:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:29:29.467 05:45:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:29.467 05:45:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:29.467 05:45:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:29.467 ************************************ 00:29:29.467 START TEST dma 00:29:29.467 ************************************ 00:29:29.467 05:45:28 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:29:29.467 * Looking for test storage... 00:29:29.467 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:29.467 05:45:29 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:29.467 05:45:29 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lcov --version 00:29:29.467 05:45:29 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:29.467 05:45:29 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:29.467 05:45:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:29.467 05:45:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:29.467 05:45:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:29.467 05:45:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:29:29.467 05:45:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:29:29.467 05:45:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:29:29.467 05:45:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:29:29.467 05:45:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:29:29.467 05:45:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:29:29.467 05:45:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:29:29.467 05:45:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:29.467 05:45:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:29:29.467 05:45:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:29:29.467 05:45:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:29.467 05:45:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:29.467 05:45:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:29:29.467 05:45:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:29:29.467 05:45:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:29.467 05:45:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:29:29.467 05:45:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:29:29.467 05:45:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:29:29.467 05:45:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:29:29.467 05:45:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:29.467 05:45:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:29:29.467 05:45:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:29:29.467 05:45:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:29.467 05:45:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:29.467 05:45:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:29:29.467 05:45:29 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:29.467 05:45:29 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:29.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:29.467 --rc genhtml_branch_coverage=1 00:29:29.467 --rc genhtml_function_coverage=1 00:29:29.467 --rc genhtml_legend=1 00:29:29.467 --rc geninfo_all_blocks=1 00:29:29.467 --rc geninfo_unexecuted_blocks=1 00:29:29.467 00:29:29.467 ' 00:29:29.467 05:45:29 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:29.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:29.467 --rc genhtml_branch_coverage=1 00:29:29.467 --rc genhtml_function_coverage=1 00:29:29.467 --rc genhtml_legend=1 00:29:29.467 --rc geninfo_all_blocks=1 00:29:29.467 --rc geninfo_unexecuted_blocks=1 00:29:29.467 00:29:29.467 ' 00:29:29.467 05:45:29 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:29.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:29.467 --rc genhtml_branch_coverage=1 00:29:29.468 --rc genhtml_function_coverage=1 00:29:29.468 --rc genhtml_legend=1 00:29:29.468 --rc geninfo_all_blocks=1 00:29:29.468 --rc geninfo_unexecuted_blocks=1 00:29:29.468 00:29:29.468 ' 00:29:29.468 05:45:29 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:29.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:29.468 --rc genhtml_branch_coverage=1 00:29:29.468 --rc genhtml_function_coverage=1 00:29:29.468 --rc genhtml_legend=1 00:29:29.468 --rc geninfo_all_blocks=1 00:29:29.468 --rc geninfo_unexecuted_blocks=1 00:29:29.468 00:29:29.468 ' 00:29:29.468 05:45:29 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:29.468 05:45:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:29:29.468 05:45:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:29.468 05:45:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:29.468 05:45:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:29.468 05:45:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:29.468 
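The lt 1.15 2 walk above (scripts/common.sh@333-368) is how the suite picks lcov option spellings: split both version strings on any of . - :, compare component-wise as integers, and treat the first strictly smaller component as less-than; for lcov 1.x that selects the legacy --rc lcov_* names. A minimal standalone rendering of the same walk, assuming purely numeric components as in this run:

  version_lt() {                          # exit 0 when $1 < $2, as in cmp_versions ... '<' ...
      local IFS=.-: i
      local -a ver1 ver2
      read -ra ver1 <<< "$1"
      read -ra ver2 <<< "$2"
      for (( i = 0; i < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); i++ )); do
          (( ${ver1[i]:-0} < ${ver2[i]:-0} )) && return 0
          (( ${ver1[i]:-0} > ${ver2[i]:-0} )) && return 1
      done
      return 1                            # equal is not less-than
  }
  version_lt "$(lcov --version | awk '{print $NF}')" 2 \
      && lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'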
05:45:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:29.468 05:45:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:29.468 05:45:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:29.468 05:45:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:29.468 05:45:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:29.468 05:45:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:29.468 05:45:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:29:29.468 05:45:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:29:29.468 05:45:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:29.468 05:45:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:29.468 05:45:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:29.468 05:45:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:29.468 05:45:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:29.468 05:45:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:29:29.468 05:45:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:29.468 05:45:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:29.468 05:45:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:29.468 05:45:29 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:29.468 05:45:29 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:29.468 05:45:29 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:29.468 05:45:29 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:29:29.468 05:45:29 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:29.468 05:45:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:29:29.468 05:45:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:29.468 05:45:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:29.468 05:45:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:29.468 05:45:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:29.468 05:45:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:29.468 05:45:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:29.468 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:29.468 05:45:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:29.468 05:45:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:29.468 05:45:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:29.468 05:45:29 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:29:29.468 05:45:29 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:29:29.468 00:29:29.468 real 0m0.207s 00:29:29.468 user 0m0.125s 00:29:29.468 sys 0m0.096s 00:29:29.468 05:45:29 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:29.468 05:45:29 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:29:29.468 ************************************ 00:29:29.468 END TEST dma 00:29:29.468 ************************************ 00:29:29.468 05:45:29 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:29:29.468 05:45:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:29.468 05:45:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:29.468 05:45:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:29.468 ************************************ 00:29:29.468 START TEST nvmf_identify 00:29:29.468 
************************************ 00:29:29.468 05:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:29:29.468 * Looking for test storage... 00:29:29.468 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:29.468 05:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:29.468 05:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lcov --version 00:29:29.468 05:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:29.468 05:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:29.468 05:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:29.468 05:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:29.468 05:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:29.468 05:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:29:29.468 05:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:29:29.468 05:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:29:29.468 05:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:29:29.468 05:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:29:29.468 05:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:29:29.468 05:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:29:29.468 05:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:29.468 05:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:29:29.468 05:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:29:29.468 05:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:29.468 05:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:29.468 05:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:29:29.468 05:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:29:29.468 05:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:29.468 05:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:29:29.468 05:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:29:29.468 05:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:29:29.468 05:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:29:29.468 05:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:29.468 05:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:29:29.468 05:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:29:29.468 05:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:29.468 05:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:29.468 05:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:29:29.468 05:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:29.468 05:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:29.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:29.468 --rc genhtml_branch_coverage=1 00:29:29.468 --rc genhtml_function_coverage=1 00:29:29.468 --rc genhtml_legend=1 00:29:29.468 --rc geninfo_all_blocks=1 00:29:29.468 --rc geninfo_unexecuted_blocks=1 00:29:29.468 00:29:29.468 ' 00:29:29.468 05:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:29.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:29.468 --rc genhtml_branch_coverage=1 00:29:29.468 --rc genhtml_function_coverage=1 00:29:29.468 --rc genhtml_legend=1 00:29:29.468 --rc geninfo_all_blocks=1 00:29:29.468 --rc geninfo_unexecuted_blocks=1 00:29:29.469 00:29:29.469 ' 00:29:29.469 05:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:29.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:29.469 --rc genhtml_branch_coverage=1 00:29:29.469 --rc genhtml_function_coverage=1 00:29:29.469 --rc genhtml_legend=1 00:29:29.469 --rc geninfo_all_blocks=1 00:29:29.469 --rc geninfo_unexecuted_blocks=1 00:29:29.469 00:29:29.469 ' 00:29:29.469 05:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:29.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:29.469 --rc genhtml_branch_coverage=1 00:29:29.469 --rc genhtml_function_coverage=1 00:29:29.469 --rc genhtml_legend=1 00:29:29.469 --rc geninfo_all_blocks=1 00:29:29.469 --rc geninfo_unexecuted_blocks=1 00:29:29.469 00:29:29.469 ' 00:29:29.469 05:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:29.469 05:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:29:29.469 05:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:29.469 05:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:29.469 05:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:29.469 05:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:29.469 05:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:29.469 05:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:29.469 05:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:29.469 05:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:29.469 05:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:29.469 05:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:29.469 05:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:29:29.469 05:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:29:29.469 05:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:29.469 05:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:29.469 05:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:29.469 05:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:29.469 05:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:29.469 05:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:29:29.469 05:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:29.469 05:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:29.469 05:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:29.469 05:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:29.469 05:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:29.469 05:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:29.469 05:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:29:29.469 05:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:29.469 05:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:29:29.469 05:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:29.469 05:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:29.469 05:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:29.469 05:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:29.469 05:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:29.469 05:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:29.469 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:29.469 05:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:29.469 05:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:29.469 05:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:29.469 05:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:29.469 05:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:29.469 05:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:29:29.469 05:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:29.469 05:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:29.469 05:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:29.469 05:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:29.469 05:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:29.469 05:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:29.469 05:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:29.469 05:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:29.469 05:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:29.469 05:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:29.469 05:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:29:29.469 05:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:36.045 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:36.045 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:29:36.045 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:36.045 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:36.045 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:36.045 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:36.045 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:36.045 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:29:36.045 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:36.045 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:29:36.045 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:29:36.045 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:29:36.045 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:29:36.045 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:29:36.045 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:29:36.045 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:36.045 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:36.045 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:36.045 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:36.045 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:36.045 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:36.045 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:36.045 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:36.045 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:36.045 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:36.045 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:36.045 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:36.045 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:36.045 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:36.046 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:36.046 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:36.046 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:36.046 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:36.046 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:36.046 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:36.046 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:36.046 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:36.046 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:36.046 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:36.046 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:36.046 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:36.046 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:36.046 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:36.046 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:36.046 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:36.046 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:36.046 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:36.046 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:36.046 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:36.046 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:36.046 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:36.046 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:36.046 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:36.046 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:36.046 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
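The loop unrolling around this point (nvmf/common.sh@366-429) is the NIC-discovery pass: the e810/x722/mlx tables above map vendor:device pairs to candidate lists, both ice ports at 0000:af:00.0/.1 (0x8086 - 0x159b) survive the filters, and for each one a sysfs glob plus a link-up check yields the kernel interface names reported just below, cvl_0_0 and cvl_0_1. Roughly, with the up-check spelled via operstate (the trace only shows the already-resolved [[ up == up ]] comparison):

  net_devs=()
  for pci in 0000:af:00.0 0000:af:00.1; do              # the two ice ports found above
      for path in "/sys/bus/pci/devices/$pci/net/"*; do
          [[ $(< "$path/operstate") == up ]] || continue
          net_devs+=("${path##*/}")                     # -> cvl_0_0, cvl_0_1
      done
  done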
00:29:36.046 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:36.046 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:36.046 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:36.046 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:36.046 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:36.046 Found net devices under 0000:af:00.0: cvl_0_0 00:29:36.046 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:36.046 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:36.046 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:36.046 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:36.046 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:36.046 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:36.046 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:36.046 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:36.046 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:36.046 Found net devices under 0000:af:00.1: cvl_0_1 00:29:36.046 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:36.046 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:36.046 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:29:36.046 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:36.046 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:36.046 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:36.046 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:36.046 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:36.046 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:36.046 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:36.046 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:36.046 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:36.046 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:36.046 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:36.046 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:36.046 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:36.046 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:29:36.046 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:36.046 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:36.046 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:36.046 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:36.046 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:36.046 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:36.046 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:36.046 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:36.046 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:36.046 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:36.046 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:36.046 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:36.046 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:36.046 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.382 ms 00:29:36.046 00:29:36.046 --- 10.0.0.2 ping statistics --- 00:29:36.046 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:36.046 rtt min/avg/max/mdev = 0.382/0.382/0.382/0.000 ms 00:29:36.046 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:36.046 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:36.046 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.192 ms 00:29:36.046 00:29:36.046 --- 10.0.0.1 ping statistics --- 00:29:36.046 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:36.046 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:29:36.046 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:36.046 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:29:36.046 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:36.046 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:36.046 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:36.046 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:36.046 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:36.046 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:36.046 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:36.046 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:29:36.046 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:36.046 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:36.046 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=449858 00:29:36.046 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:36.046 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:36.046 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 449858 00:29:36.046 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 449858 ']' 00:29:36.046 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:36.046 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:36.046 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:36.046 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:36.046 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:36.046 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:36.046 [2024-12-13 05:45:35.395865] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
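nvmf_tcp_init above splits the two ports across a network namespace so target and initiator traffic really cross the link: cvl_0_0 moves into cvl_0_0_ns_spdk as 10.0.0.2, cvl_0_1 stays in the root namespace as 10.0.0.1, a comment-tagged iptables rule opens port 4420, and one ping in each direction proves the path. In command form:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                      # root ns -> target, 0.382 ms here
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1        # target ns -> initiator, 0.192 ms

nvmf_tgt itself is then launched inside the namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0xF), which is why the DPDK startup banner that follows runs against cvl_0_0.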
00:29:36.046 [2024-12-13 05:45:35.395906] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:36.046 [2024-12-13 05:45:35.474460] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:36.046 [2024-12-13 05:45:35.498626] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:36.046 [2024-12-13 05:45:35.498663] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:36.046 [2024-12-13 05:45:35.498670] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:36.046 [2024-12-13 05:45:35.498676] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:36.046 [2024-12-13 05:45:35.498681] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:36.046 [2024-12-13 05:45:35.500141] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:29:36.046 [2024-12-13 05:45:35.500254] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:29:36.046 [2024-12-13 05:45:35.500358] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:29:36.046 [2024-12-13 05:45:35.500360] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:29:36.046 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:36.046 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:29:36.047 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:36.047 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.047 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:36.047 [2024-12-13 05:45:35.597306] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:36.047 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.047 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:29:36.047 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:36.047 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:36.047 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:36.047 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.047 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:36.047 Malloc0 00:29:36.047 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.047 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:36.047 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.047 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:36.047 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.047 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:29:36.047 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.047 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:36.047 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.047 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:36.047 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.047 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:36.047 [2024-12-13 05:45:35.699391] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:36.047 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.047 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:36.047 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.047 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:36.047 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.047 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:29:36.047 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.047 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:36.047 [ 00:29:36.047 { 00:29:36.047 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:36.047 "subtype": "Discovery", 00:29:36.047 "listen_addresses": [ 00:29:36.047 { 00:29:36.047 "trtype": "TCP", 00:29:36.047 "adrfam": "IPv4", 00:29:36.047 "traddr": "10.0.0.2", 00:29:36.047 "trsvcid": "4420" 00:29:36.047 } 00:29:36.047 ], 00:29:36.047 "allow_any_host": true, 00:29:36.047 "hosts": [] 00:29:36.047 }, 00:29:36.047 { 00:29:36.047 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:36.047 "subtype": "NVMe", 00:29:36.047 "listen_addresses": [ 00:29:36.047 { 00:29:36.047 "trtype": "TCP", 00:29:36.047 "adrfam": "IPv4", 00:29:36.047 "traddr": "10.0.0.2", 00:29:36.047 "trsvcid": "4420" 00:29:36.047 } 00:29:36.047 ], 00:29:36.047 "allow_any_host": true, 00:29:36.047 "hosts": [], 00:29:36.047 "serial_number": "SPDK00000000000001", 00:29:36.047 "model_number": "SPDK bdev Controller", 00:29:36.047 "max_namespaces": 32, 00:29:36.047 "min_cntlid": 1, 00:29:36.047 "max_cntlid": 65519, 00:29:36.047 "namespaces": [ 00:29:36.047 { 00:29:36.047 "nsid": 1, 00:29:36.047 "bdev_name": "Malloc0", 00:29:36.047 "name": "Malloc0", 00:29:36.047 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:29:36.047 "eui64": "ABCDEF0123456789", 00:29:36.047 "uuid": "13bf53aa-efa5-408a-8b1a-e830200b06d2" 00:29:36.047 } 00:29:36.047 ] 00:29:36.047 } 00:29:36.047 ] 00:29:36.047 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.047 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:29:36.047 [2024-12-13 05:45:35.753174] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:29:36.047 [2024-12-13 05:45:35.753208] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid449971 ] 00:29:36.047 [2024-12-13 05:45:35.790819] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:29:36.047 [2024-12-13 05:45:35.790861] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:29:36.047 [2024-12-13 05:45:35.790865] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:29:36.047 [2024-12-13 05:45:35.790879] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:29:36.047 [2024-12-13 05:45:35.790889] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:29:36.047 [2024-12-13 05:45:35.794668] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:29:36.047 [2024-12-13 05:45:35.794704] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1d4ede0 0 00:29:36.047 [2024-12-13 05:45:35.794889] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:29:36.047 [2024-12-13 05:45:35.794897] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:29:36.047 [2024-12-13 05:45:35.794901] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:29:36.047 [2024-12-13 05:45:35.794903] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:29:36.047 [2024-12-13 05:45:35.794925] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:36.047 [2024-12-13 05:45:35.794930] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:36.047 [2024-12-13 05:45:35.794934] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d4ede0) 00:29:36.047 [2024-12-13 05:45:35.794945] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:29:36.047 [2024-12-13 05:45:35.794957] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1da9f40, cid 0, qid 0 00:29:36.047 [2024-12-13 05:45:35.802463] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:36.047 [2024-12-13 05:45:35.802472] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:36.047 [2024-12-13 05:45:35.802475] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:36.047 [2024-12-13 05:45:35.802480] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1da9f40) on tqpair=0x1d4ede0 00:29:36.047 [2024-12-13 05:45:35.802491] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:29:36.047 [2024-12-13 05:45:35.802497] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:29:36.047 [2024-12-13 05:45:35.802502] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:29:36.047 [2024-12-13 05:45:35.802513] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:36.047 [2024-12-13 05:45:35.802516] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:36.047 [2024-12-13 05:45:35.802520] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d4ede0) 00:29:36.047 [2024-12-13 05:45:35.802527] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.047 [2024-12-13 05:45:35.802539] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1da9f40, cid 0, qid 0 00:29:36.047 [2024-12-13 05:45:35.802697] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:36.047 [2024-12-13 05:45:35.802703] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:36.047 [2024-12-13 05:45:35.802706] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:36.047 [2024-12-13 05:45:35.802709] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1da9f40) on tqpair=0x1d4ede0 00:29:36.047 [2024-12-13 05:45:35.802714] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:29:36.047 [2024-12-13 05:45:35.802721] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:29:36.047 [2024-12-13 05:45:35.802727] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:36.047 [2024-12-13 05:45:35.802730] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:36.047 [2024-12-13 05:45:35.802733] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d4ede0) 00:29:36.047 [2024-12-13 05:45:35.802739] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.047 [2024-12-13 05:45:35.802752] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1da9f40, cid 0, qid 0 00:29:36.047 [2024-12-13 05:45:35.802811] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:36.047 [2024-12-13 05:45:35.802817] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:36.047 [2024-12-13 05:45:35.802819] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:36.047 [2024-12-13 05:45:35.802823] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1da9f40) on tqpair=0x1d4ede0 00:29:36.047 [2024-12-13 05:45:35.802827] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:29:36.047 [2024-12-13 05:45:35.802834] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:29:36.047 [2024-12-13 05:45:35.802840] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:36.047 [2024-12-13 05:45:35.802843] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:36.047 [2024-12-13 05:45:35.802846] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d4ede0) 00:29:36.047 [2024-12-13 05:45:35.802851] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.047 [2024-12-13 05:45:35.802860] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1da9f40, cid 0, qid 0 
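
The rpc_cmd xtrace lines above provision the target end to end: a TCP transport, a 64 MiB malloc bdev, a subsystem with one namespace, and data plus discovery listeners on 10.0.0.2:4420. Outside the test harness, the same bring-up can be sketched with SPDK's scripts/rpc.py, assuming a running nvmf_tgt on the default RPC socket (every flag below is taken verbatim from the log):

  # Target side: transport, backing bdev, subsystem, namespace, listeners
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0    # 64 MiB bdev, 512-byte blocks
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_get_subsystems                     # returns the JSON dump shown above
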
00:29:36.047 [2024-12-13 05:45:35.802921] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:36.047 [2024-12-13 05:45:35.802927] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:36.047 [2024-12-13 05:45:35.802929] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:36.048 [2024-12-13 05:45:35.802933] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1da9f40) on tqpair=0x1d4ede0 00:29:36.048 [2024-12-13 05:45:35.802937] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:29:36.048 [2024-12-13 05:45:35.802945] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:36.048 [2024-12-13 05:45:35.802948] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:36.048 [2024-12-13 05:45:35.802952] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d4ede0) 00:29:36.048 [2024-12-13 05:45:35.802957] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.048 [2024-12-13 05:45:35.802966] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1da9f40, cid 0, qid 0 00:29:36.048 [2024-12-13 05:45:35.803029] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:36.048 [2024-12-13 05:45:35.803035] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:36.048 [2024-12-13 05:45:35.803038] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:36.048 [2024-12-13 05:45:35.803041] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1da9f40) on tqpair=0x1d4ede0 00:29:36.048 [2024-12-13 05:45:35.803045] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:29:36.048 [2024-12-13 05:45:35.803049] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:29:36.048 [2024-12-13 05:45:35.803056] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:29:36.048 [2024-12-13 05:45:35.803164] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:29:36.048 [2024-12-13 05:45:35.803168] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:29:36.048 [2024-12-13 05:45:35.803175] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:36.048 [2024-12-13 05:45:35.803178] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:36.048 [2024-12-13 05:45:35.803183] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d4ede0) 00:29:36.048 [2024-12-13 05:45:35.803188] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.048 [2024-12-13 05:45:35.803198] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1da9f40, cid 0, qid 0 00:29:36.048 [2024-12-13 05:45:35.803260] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:36.048 [2024-12-13 05:45:35.803265] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:36.048 [2024-12-13 05:45:35.803268] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:36.048 [2024-12-13 05:45:35.803271] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1da9f40) on tqpair=0x1d4ede0 00:29:36.048 [2024-12-13 05:45:35.803275] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:29:36.048 [2024-12-13 05:45:35.803283] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:36.048 [2024-12-13 05:45:35.803287] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:36.048 [2024-12-13 05:45:35.803290] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d4ede0) 00:29:36.048 [2024-12-13 05:45:35.803295] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.048 [2024-12-13 05:45:35.803305] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1da9f40, cid 0, qid 0 00:29:36.048 [2024-12-13 05:45:35.803380] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:36.048 [2024-12-13 05:45:35.803386] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:36.048 [2024-12-13 05:45:35.803389] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:36.048 [2024-12-13 05:45:35.803392] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1da9f40) on tqpair=0x1d4ede0 00:29:36.048 [2024-12-13 05:45:35.803396] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:29:36.048 [2024-12-13 05:45:35.803400] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:29:36.048 [2024-12-13 05:45:35.803407] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:29:36.048 [2024-12-13 05:45:35.803415] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:29:36.048 [2024-12-13 05:45:35.803422] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:36.048 [2024-12-13 05:45:35.803425] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d4ede0) 00:29:36.048 [2024-12-13 05:45:35.803431] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.048 [2024-12-13 05:45:35.803441] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1da9f40, cid 0, qid 0 00:29:36.048 [2024-12-13 05:45:35.803536] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:36.048 [2024-12-13 05:45:35.803543] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:36.048 [2024-12-13 05:45:35.803546] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:36.048 [2024-12-13 05:45:35.803550] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d4ede0): datao=0, datal=4096, cccid=0 00:29:36.048 [2024-12-13 05:45:35.803554] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: 
*DEBUG*: tcp_req(0x1da9f40) on tqpair(0x1d4ede0): expected_datao=0, payload_size=4096 00:29:36.048 [2024-12-13 05:45:35.803558] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:36.048 [2024-12-13 05:45:35.803564] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:36.048 [2024-12-13 05:45:35.803568] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:36.048 [2024-12-13 05:45:35.803587] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:36.048 [2024-12-13 05:45:35.803592] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:36.048 [2024-12-13 05:45:35.803595] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:36.048 [2024-12-13 05:45:35.803598] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1da9f40) on tqpair=0x1d4ede0 00:29:36.048 [2024-12-13 05:45:35.803605] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:29:36.048 [2024-12-13 05:45:35.803609] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:29:36.048 [2024-12-13 05:45:35.803613] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:29:36.048 [2024-12-13 05:45:35.803618] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:29:36.048 [2024-12-13 05:45:35.803621] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:29:36.048 [2024-12-13 05:45:35.803626] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:29:36.048 [2024-12-13 05:45:35.803636] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:29:36.048 [2024-12-13 05:45:35.803644] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:36.048 [2024-12-13 05:45:35.803648] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:36.048 [2024-12-13 05:45:35.803651] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d4ede0) 00:29:36.048 [2024-12-13 05:45:35.803657] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:36.048 [2024-12-13 05:45:35.803668] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1da9f40, cid 0, qid 0 00:29:36.048 [2024-12-13 05:45:35.803735] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:36.048 [2024-12-13 05:45:35.803741] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:36.048 [2024-12-13 05:45:35.803744] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:36.048 [2024-12-13 05:45:35.803747] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1da9f40) on tqpair=0x1d4ede0 00:29:36.048 [2024-12-13 05:45:35.803753] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:36.048 [2024-12-13 05:45:35.803756] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:36.048 [2024-12-13 05:45:35.803759] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d4ede0) 00:29:36.048 
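
The PDU and capsule *DEBUG* traces in this block are produced by the spdk_nvme_identify invocation shown earlier, which enables every debug log flag with -L all. A minimal standalone rerun against the same discovery service, using only options that appear in this log (run from the SPDK build tree):

  # Host side: query the discovery controller over TCP with full debug logging
  ./build/bin/spdk_nvme_identify \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' \
      -L all
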
[2024-12-13 05:45:35.803764] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:36.048 [2024-12-13 05:45:35.803769] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:36.048 [2024-12-13 05:45:35.803773] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:36.048 [2024-12-13 05:45:35.803776] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1d4ede0) 00:29:36.048 [2024-12-13 05:45:35.803780] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:36.048 [2024-12-13 05:45:35.803785] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:36.048 [2024-12-13 05:45:35.803788] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:36.048 [2024-12-13 05:45:35.803791] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1d4ede0) 00:29:36.048 [2024-12-13 05:45:35.803796] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:36.048 [2024-12-13 05:45:35.803801] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:36.048 [2024-12-13 05:45:35.803808] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:36.048 [2024-12-13 05:45:35.803811] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d4ede0) 00:29:36.048 [2024-12-13 05:45:35.803815] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:36.048 [2024-12-13 05:45:35.803820] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:29:36.048 [2024-12-13 05:45:35.803830] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:29:36.048 [2024-12-13 05:45:35.803836] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:36.048 [2024-12-13 05:45:35.803839] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d4ede0) 00:29:36.048 [2024-12-13 05:45:35.803845] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.048 [2024-12-13 05:45:35.803855] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1da9f40, cid 0, qid 0 00:29:36.048 [2024-12-13 05:45:35.803860] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1daa0c0, cid 1, qid 0 00:29:36.048 [2024-12-13 05:45:35.803864] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1daa240, cid 2, qid 0 00:29:36.048 [2024-12-13 05:45:35.803868] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1daa3c0, cid 3, qid 0 00:29:36.048 [2024-12-13 05:45:35.803872] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1daa540, cid 4, qid 0 00:29:36.048 [2024-12-13 05:45:35.803969] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:36.048 [2024-12-13 05:45:35.803975] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:36.049 [2024-12-13 05:45:35.803978] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: 
enter 00:29:36.049 [2024-12-13 05:45:35.803981] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1daa540) on tqpair=0x1d4ede0 00:29:36.049 [2024-12-13 05:45:35.803985] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:29:36.049 [2024-12-13 05:45:35.803990] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:29:36.049 [2024-12-13 05:45:35.803999] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:36.049 [2024-12-13 05:45:35.804002] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d4ede0) 00:29:36.049 [2024-12-13 05:45:35.804008] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.049 [2024-12-13 05:45:35.804017] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1daa540, cid 4, qid 0 00:29:36.049 [2024-12-13 05:45:35.804088] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:36.049 [2024-12-13 05:45:35.804094] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:36.049 [2024-12-13 05:45:35.804097] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:36.049 [2024-12-13 05:45:35.804100] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d4ede0): datao=0, datal=4096, cccid=4 00:29:36.049 [2024-12-13 05:45:35.804104] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1daa540) on tqpair(0x1d4ede0): expected_datao=0, payload_size=4096 00:29:36.049 [2024-12-13 05:45:35.804107] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:36.049 [2024-12-13 05:45:35.804119] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:36.049 [2024-12-13 05:45:35.804123] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:36.049 [2024-12-13 05:45:35.844611] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:36.049 [2024-12-13 05:45:35.844622] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:36.049 [2024-12-13 05:45:35.844629] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:36.049 [2024-12-13 05:45:35.844633] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1daa540) on tqpair=0x1d4ede0 00:29:36.049 [2024-12-13 05:45:35.844645] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:29:36.049 [2024-12-13 05:45:35.844667] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:36.049 [2024-12-13 05:45:35.844672] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d4ede0) 00:29:36.049 [2024-12-13 05:45:35.844679] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.049 [2024-12-13 05:45:35.844685] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:36.049 [2024-12-13 05:45:35.844689] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:36.049 [2024-12-13 05:45:35.844692] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1d4ede0) 00:29:36.049 [2024-12-13 05:45:35.844697] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:29:36.049 [2024-12-13 05:45:35.844712] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1daa540, cid 4, qid 0 00:29:36.049 [2024-12-13 05:45:35.844717] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1daa6c0, cid 5, qid 0 00:29:36.049 [2024-12-13 05:45:35.844821] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:36.049 [2024-12-13 05:45:35.844827] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:36.049 [2024-12-13 05:45:35.844830] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:36.049 [2024-12-13 05:45:35.844833] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d4ede0): datao=0, datal=1024, cccid=4 00:29:36.049 [2024-12-13 05:45:35.844837] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1daa540) on tqpair(0x1d4ede0): expected_datao=0, payload_size=1024 00:29:36.049 [2024-12-13 05:45:35.844841] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:36.049 [2024-12-13 05:45:35.844847] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:36.049 [2024-12-13 05:45:35.844850] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:36.049 [2024-12-13 05:45:35.844855] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:36.049 [2024-12-13 05:45:35.844860] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:36.049 [2024-12-13 05:45:35.844863] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:36.049 [2024-12-13 05:45:35.844866] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1daa6c0) on tqpair=0x1d4ede0 00:29:36.049 [2024-12-13 05:45:35.889461] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:36.049 [2024-12-13 05:45:35.889472] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:36.049 [2024-12-13 05:45:35.889475] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:36.049 [2024-12-13 05:45:35.889478] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1daa540) on tqpair=0x1d4ede0 00:29:36.049 [2024-12-13 05:45:35.889489] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:36.049 [2024-12-13 05:45:35.889493] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d4ede0) 00:29:36.049 [2024-12-13 05:45:35.889500] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.049 [2024-12-13 05:45:35.889516] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1daa540, cid 4, qid 0 00:29:36.049 [2024-12-13 05:45:35.889661] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:36.049 [2024-12-13 05:45:35.889667] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:36.049 [2024-12-13 05:45:35.889670] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:36.049 [2024-12-13 05:45:35.889673] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d4ede0): datao=0, datal=3072, cccid=4 00:29:36.049 [2024-12-13 05:45:35.889681] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1daa540) on tqpair(0x1d4ede0): expected_datao=0, payload_size=3072 00:29:36.049 [2024-12-13 05:45:35.889685] 
nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:36.049 [2024-12-13 05:45:35.889699] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:36.049 [2024-12-13 05:45:35.889703] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:36.049 [2024-12-13 05:45:35.889750] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:36.049 [2024-12-13 05:45:35.889755] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:36.049 [2024-12-13 05:45:35.889758] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:36.049 [2024-12-13 05:45:35.889762] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1daa540) on tqpair=0x1d4ede0 00:29:36.049 [2024-12-13 05:45:35.889768] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:36.049 [2024-12-13 05:45:35.889772] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d4ede0) 00:29:36.049 [2024-12-13 05:45:35.889777] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.049 [2024-12-13 05:45:35.889791] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1daa540, cid 4, qid 0 00:29:36.049 [2024-12-13 05:45:35.889870] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:36.049 [2024-12-13 05:45:35.889875] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:36.049 [2024-12-13 05:45:35.889878] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:36.049 [2024-12-13 05:45:35.889881] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d4ede0): datao=0, datal=8, cccid=4 00:29:36.049 [2024-12-13 05:45:35.889884] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1daa540) on tqpair(0x1d4ede0): expected_datao=0, payload_size=8 00:29:36.049 [2024-12-13 05:45:35.889888] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:36.049 [2024-12-13 05:45:35.889893] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:36.049 [2024-12-13 05:45:35.889897] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:36.049 [2024-12-13 05:45:35.930594] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:36.049 [2024-12-13 05:45:35.930604] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:36.049 [2024-12-13 05:45:35.930607] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:36.049 [2024-12-13 05:45:35.930610] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1daa540) on tqpair=0x1d4ede0
00:29:36.049 =====================================================
00:29:36.049 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery
00:29:36.049 =====================================================
00:29:36.049 Controller Capabilities/Features
00:29:36.049 ================================
00:29:36.049 Vendor ID: 0000
00:29:36.049 Subsystem Vendor ID: 0000
00:29:36.049 Serial Number: ....................
00:29:36.049 Model Number: ........................................
00:29:36.049 Firmware Version: 25.01
00:29:36.049 Recommended Arb Burst: 0
00:29:36.049 IEEE OUI Identifier: 00 00 00
00:29:36.049 Multi-path I/O
00:29:36.049 May have multiple subsystem ports: No
00:29:36.049 May have multiple controllers: No
00:29:36.049 Associated with SR-IOV VF: No
00:29:36.049 Max Data Transfer Size: 131072
00:29:36.049 Max Number of Namespaces: 0
00:29:36.049 Max Number of I/O Queues: 1024
00:29:36.049 NVMe Specification Version (VS): 1.3
00:29:36.049 NVMe Specification Version (Identify): 1.3
00:29:36.049 Maximum Queue Entries: 128
00:29:36.049 Contiguous Queues Required: Yes
00:29:36.049 Arbitration Mechanisms Supported
00:29:36.049 Weighted Round Robin: Not Supported
00:29:36.049 Vendor Specific: Not Supported
00:29:36.049 Reset Timeout: 15000 ms
00:29:36.049 Doorbell Stride: 4 bytes
00:29:36.049 NVM Subsystem Reset: Not Supported
00:29:36.049 Command Sets Supported
00:29:36.049 NVM Command Set: Supported
00:29:36.049 Boot Partition: Not Supported
00:29:36.049 Memory Page Size Minimum: 4096 bytes
00:29:36.049 Memory Page Size Maximum: 4096 bytes
00:29:36.049 Persistent Memory Region: Not Supported
00:29:36.049 Optional Asynchronous Events Supported
00:29:36.049 Namespace Attribute Notices: Not Supported
00:29:36.049 Firmware Activation Notices: Not Supported
00:29:36.049 ANA Change Notices: Not Supported
00:29:36.049 PLE Aggregate Log Change Notices: Not Supported
00:29:36.049 LBA Status Info Alert Notices: Not Supported
00:29:36.049 EGE Aggregate Log Change Notices: Not Supported
00:29:36.049 Normal NVM Subsystem Shutdown event: Not Supported
00:29:36.049 Zone Descriptor Change Notices: Not Supported
00:29:36.049 Discovery Log Change Notices: Supported
00:29:36.049 Controller Attributes
00:29:36.049 128-bit Host Identifier: Not Supported
00:29:36.049 Non-Operational Permissive Mode: Not Supported
00:29:36.049 NVM Sets: Not Supported
00:29:36.049 Read Recovery Levels: Not Supported
00:29:36.049 Endurance Groups: Not Supported
00:29:36.049 Predictable Latency Mode: Not Supported
00:29:36.050 Traffic Based Keep Alive: Not Supported
00:29:36.050 Namespace Granularity: Not Supported
00:29:36.050 SQ Associations: Not Supported
00:29:36.050 UUID List: Not Supported
00:29:36.050 Multi-Domain Subsystem: Not Supported
00:29:36.050 Fixed Capacity Management: Not Supported
00:29:36.050 Variable Capacity Management: Not Supported
00:29:36.050 Delete Endurance Group: Not Supported
00:29:36.050 Delete NVM Set: Not Supported
00:29:36.050 Extended LBA Formats Supported: Not Supported
00:29:36.050 Flexible Data Placement Supported: Not Supported
00:29:36.050
00:29:36.050 Controller Memory Buffer Support
00:29:36.050 ================================
00:29:36.050 Supported: No
00:29:36.050
00:29:36.050 Persistent Memory Region Support
00:29:36.050 ================================
00:29:36.050 Supported: No
00:29:36.050
00:29:36.050 Admin Command Set Attributes
00:29:36.050 ============================
00:29:36.050 Security Send/Receive: Not Supported
00:29:36.050 Format NVM: Not Supported
00:29:36.050 Firmware Activate/Download: Not Supported
00:29:36.050 Namespace Management: Not Supported
00:29:36.050 Device Self-Test: Not Supported
00:29:36.050 Directives: Not Supported
00:29:36.050 NVMe-MI: Not Supported
00:29:36.050 Virtualization Management: Not Supported
00:29:36.050 Doorbell Buffer Config: Not Supported
00:29:36.050 Get LBA Status Capability: Not Supported
00:29:36.050 Command & Feature Lockdown Capability: Not Supported
00:29:36.050 Abort Command Limit: 1
00:29:36.050 Async Event Request Limit: 4
00:29:36.050 Number of Firmware Slots: N/A
00:29:36.050 Firmware Slot 1 Read-Only: N/A
00:29:36.050 Firmware Activation Without Reset: N/A
00:29:36.050 Multiple Update Detection Support: N/A
00:29:36.050 Firmware Update Granularity: No Information Provided
00:29:36.050 Per-Namespace SMART Log: No
00:29:36.050 Asymmetric Namespace Access Log Page: Not Supported
00:29:36.050 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:29:36.050 Command Effects Log Page: Not Supported
00:29:36.050 Get Log Page Extended Data: Supported
00:29:36.050 Telemetry Log Pages: Not Supported
00:29:36.050 Persistent Event Log Pages: Not Supported
00:29:36.050 Supported Log Pages Log Page: May Support
00:29:36.050 Commands Supported & Effects Log Page: Not Supported
00:29:36.050 Feature Identifiers & Effects Log Page: May Support
00:29:36.050 NVMe-MI Commands & Effects Log Page: May Support
00:29:36.050 Data Area 4 for Telemetry Log: Not Supported
00:29:36.050 Error Log Page Entries Supported: 128
00:29:36.050 Keep Alive: Not Supported
00:29:36.050
00:29:36.050 NVM Command Set Attributes
00:29:36.050 ==========================
00:29:36.050 Submission Queue Entry Size
00:29:36.050 Max: 1
00:29:36.050 Min: 1
00:29:36.050 Completion Queue Entry Size
00:29:36.050 Max: 1
00:29:36.050 Min: 1
00:29:36.050 Number of Namespaces: 0
00:29:36.050 Compare Command: Not Supported
00:29:36.050 Write Uncorrectable Command: Not Supported
00:29:36.050 Dataset Management Command: Not Supported
00:29:36.050 Write Zeroes Command: Not Supported
00:29:36.050 Set Features Save Field: Not Supported
00:29:36.050 Reservations: Not Supported
00:29:36.050 Timestamp: Not Supported
00:29:36.050 Copy: Not Supported
00:29:36.050 Volatile Write Cache: Not Present
00:29:36.050 Atomic Write Unit (Normal): 1
00:29:36.050 Atomic Write Unit (PFail): 1
00:29:36.050 Atomic Compare & Write Unit: 1
00:29:36.050 Fused Compare & Write: Supported
00:29:36.050 Scatter-Gather List
00:29:36.050 SGL Command Set: Supported
00:29:36.050 SGL Keyed: Supported
00:29:36.050 SGL Bit Bucket Descriptor: Not Supported
00:29:36.050 SGL Metadata Pointer: Not Supported
00:29:36.050 Oversized SGL: Not Supported
00:29:36.050 SGL Metadata Address: Not Supported
00:29:36.050 SGL Offset: Supported
00:29:36.050 Transport SGL Data Block: Not Supported
00:29:36.050 Replay Protected Memory Block: Not Supported
00:29:36.050
00:29:36.050 Firmware Slot Information
00:29:36.050 =========================
00:29:36.050 Active slot: 0
00:29:36.050
00:29:36.050
00:29:36.050 Error Log
00:29:36.050 =========
00:29:36.050
00:29:36.050 Active Namespaces
00:29:36.050 =================
00:29:36.050 Discovery Log Page
00:29:36.050 ==================
00:29:36.050 Generation Counter: 2
00:29:36.050 Number of Records: 2
00:29:36.050 Record Format: 0
00:29:36.050
00:29:36.050 Discovery Log Entry 0
00:29:36.050 ----------------------
00:29:36.050 Transport Type: 3 (TCP)
00:29:36.050 Address Family: 1 (IPv4)
00:29:36.050 Subsystem Type: 3 (Current Discovery Subsystem)
00:29:36.050 Entry Flags:
00:29:36.050 Duplicate Returned Information: 1
00:29:36.050 Explicit Persistent Connection Support for Discovery: 1
00:29:36.050 Transport Requirements:
00:29:36.050 Secure Channel: Not Required
00:29:36.050 Port ID: 0 (0x0000)
00:29:36.050 Controller ID: 65535 (0xffff)
00:29:36.050 Admin Max SQ Size: 128
00:29:36.050 Transport Service Identifier: 4420
00:29:36.050 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:29:36.050 Transport Address: 10.0.0.2
00:29:36.050
00:29:36.050 Discovery Log Entry 1
00:29:36.050 ----------------------
00:29:36.050 Transport Type: 3 (TCP)
00:29:36.050 Address Family: 1 (IPv4)
00:29:36.050 Subsystem Type: 2 (NVM Subsystem)
00:29:36.050 Entry Flags:
00:29:36.050 Duplicate Returned Information: 0
00:29:36.050 Explicit Persistent Connection Support for Discovery: 0
00:29:36.050 Transport Requirements:
00:29:36.050 Secure Channel: Not Required
00:29:36.050 Port ID: 0 (0x0000)
00:29:36.050 Controller ID: 65535 (0xffff)
00:29:36.050 Admin Max SQ Size: 128
00:29:36.050 Transport Service Identifier: 4420
00:29:36.050 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:29:36.050 Transport Address: 10.0.0.2
[2024-12-13 05:45:35.930687] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:29:36.050 [2024-12-13 05:45:35.930698] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1da9f40) on tqpair=0x1d4ede0 00:29:36.050 [2024-12-13 05:45:35.930704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.050 [2024-12-13 05:45:35.930709] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1daa0c0) on tqpair=0x1d4ede0 00:29:36.050 [2024-12-13 05:45:35.930713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.050 [2024-12-13 05:45:35.930717] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1daa240) on tqpair=0x1d4ede0 00:29:36.050 [2024-12-13 05:45:35.930721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.050 [2024-12-13 05:45:35.930725] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1daa3c0) on tqpair=0x1d4ede0 00:29:36.050 [2024-12-13 05:45:35.930729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.050 [2024-12-13 05:45:35.930737] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:36.050 [2024-12-13 05:45:35.930740] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:36.050 [2024-12-13 05:45:35.930745] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d4ede0) 00:29:36.050 [2024-12-13 05:45:35.930752] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.051 [2024-12-13 05:45:35.930764] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1daa3c0, cid 3, qid 0 00:29:36.051 [2024-12-13 05:45:35.930825] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:36.051 [2024-12-13 05:45:35.930831] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:36.051 [2024-12-13 05:45:35.930834] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:36.051 [2024-12-13 05:45:35.930837] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1daa3c0) on tqpair=0x1d4ede0 00:29:36.051 [2024-12-13 05:45:35.930843] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:36.051 [2024-12-13 05:45:35.930846] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:36.051 [2024-12-13 05:45:35.930849] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d4ede0) [2024-12-13
05:45:35.930855] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.051 [2024-12-13 05:45:35.930867] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1daa3c0, cid 3, qid 0 00:29:36.051 [2024-12-13 05:45:35.930935] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:36.051 [2024-12-13 05:45:35.930941] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:36.051 [2024-12-13 05:45:35.930944] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:36.051 [2024-12-13 05:45:35.930947] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1daa3c0) on tqpair=0x1d4ede0 00:29:36.051 [2024-12-13 05:45:35.930951] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:29:36.051 [2024-12-13 05:45:35.930955] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:29:36.051 [2024-12-13 05:45:35.930962] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:36.051 [2024-12-13 05:45:35.930966] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:36.051 [2024-12-13 05:45:35.930969] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d4ede0) 00:29:36.051 [2024-12-13 05:45:35.930974] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.051 [2024-12-13 05:45:35.930983] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1daa3c0, cid 3, qid 0 00:29:36.051 [2024-12-13 05:45:35.931057] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:36.051 [2024-12-13 05:45:35.931062] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:36.051 [2024-12-13 05:45:35.931065] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:36.051 [2024-12-13 05:45:35.931068] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1daa3c0) on tqpair=0x1d4ede0 00:29:36.051 [2024-12-13 05:45:35.931077] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:36.051 [2024-12-13 05:45:35.931080] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:36.051 [2024-12-13 05:45:35.931083] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d4ede0) 00:29:36.051 [2024-12-13 05:45:35.931089] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.051 [2024-12-13 05:45:35.931098] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1daa3c0, cid 3, qid 0 00:29:36.051 [2024-12-13 05:45:35.931192] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:36.051 [2024-12-13 05:45:35.931198] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:36.051 [2024-12-13 05:45:35.931200] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:36.051 [2024-12-13 05:45:35.931203] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1daa3c0) on tqpair=0x1d4ede0 00:29:36.051 [2024-12-13 05:45:35.931214] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:36.051 [2024-12-13 05:45:35.931218] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:36.051 [2024-12-13 05:45:35.931220] 
nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d4ede0) 00:29:36.051 [2024-12-13 05:45:35.931226] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.051 [2024-12-13 05:45:35.931236] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1daa3c0, cid 3, qid 0 00:29:36.051 [2024-12-13 05:45:35.931295] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:36.051 [2024-12-13 05:45:35.931300] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:36.051 [2024-12-13 05:45:35.931303] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:36.051 [2024-12-13 05:45:35.931306] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1daa3c0) on tqpair=0x1d4ede0 00:29:36.051 [2024-12-13 05:45:35.931314] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:36.051 [2024-12-13 05:45:35.931318] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:36.051 [2024-12-13 05:45:35.931321] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d4ede0) 00:29:36.051 [2024-12-13 05:45:35.931326] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.051 [2024-12-13 05:45:35.931335] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1daa3c0, cid 3, qid 0 00:29:36.051 [2024-12-13 05:45:35.931397] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:36.051 [2024-12-13 05:45:35.931403] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:36.051 [2024-12-13 05:45:35.931405] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:36.051 [2024-12-13 05:45:35.931409] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1daa3c0) on tqpair=0x1d4ede0 00:29:36.051 [2024-12-13 05:45:35.931417] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:36.051 [2024-12-13 05:45:35.931421] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:36.051 [2024-12-13 05:45:35.931424] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d4ede0) 00:29:36.051 [2024-12-13 05:45:35.931429] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.051 [2024-12-13 05:45:35.931438] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1daa3c0, cid 3, qid 0 00:29:36.051 [2024-12-13 05:45:35.931507] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:36.051 [2024-12-13 05:45:35.931513] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:36.051 [2024-12-13 05:45:35.931516] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:36.051 [2024-12-13 05:45:35.931520] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1daa3c0) on tqpair=0x1d4ede0 00:29:36.051 [2024-12-13 05:45:35.931527] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:36.051 [2024-12-13 05:45:35.931531] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:36.051 [2024-12-13 05:45:35.931534] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d4ede0) 00:29:36.051 [2024-12-13 05:45:35.931539] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.051 [2024-12-13 05:45:35.931549] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1daa3c0, cid 3, qid 0 00:29:36.051 [2024-12-13 05:45:35.931608] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:36.051 [2024-12-13 05:45:35.931613] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:36.051 [2024-12-13 05:45:35.931616] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:36.051 [2024-12-13 05:45:35.931619] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1daa3c0) on tqpair=0x1d4ede0 00:29:36.051 [2024-12-13 05:45:35.931627] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:36.051 [2024-12-13 05:45:35.931632] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:36.051 [2024-12-13 05:45:35.931635] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d4ede0) 00:29:36.051 [2024-12-13 05:45:35.931641] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.051 [2024-12-13 05:45:35.931650] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1daa3c0, cid 3, qid 0 00:29:36.051 [2024-12-13 05:45:35.931724] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:36.051 [2024-12-13 05:45:35.931729] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:36.051 [2024-12-13 05:45:35.931732] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:36.051 [2024-12-13 05:45:35.931736] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1daa3c0) on tqpair=0x1d4ede0 00:29:36.051 [2024-12-13 05:45:35.931743] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:36.051 [2024-12-13 05:45:35.931747] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:36.051 [2024-12-13 05:45:35.931750] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d4ede0) 00:29:36.051 [2024-12-13 05:45:35.931755] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.051 [2024-12-13 05:45:35.931764] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1daa3c0, cid 3, qid 0 00:29:36.051 [2024-12-13 05:45:35.931826] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:36.051 [2024-12-13 05:45:35.931832] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:36.051 [2024-12-13 05:45:35.931835] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:36.051 [2024-12-13 05:45:35.931838] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1daa3c0) on tqpair=0x1d4ede0 00:29:36.051 [2024-12-13 05:45:35.931846] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:36.051 [2024-12-13 05:45:35.931849] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:36.051 [2024-12-13 05:45:35.931852] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d4ede0) 00:29:36.051 [2024-12-13 05:45:35.931858] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.051 [2024-12-13 05:45:35.931867] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1daa3c0, cid 3, qid 0 00:29:36.051 
[2024-12-13 05:45:35.931926] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:36.051 [2024-12-13 05:45:35.931931] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:36.051 [2024-12-13 05:45:35.931934] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:36.051 [2024-12-13 05:45:35.931937] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1daa3c0) on tqpair=0x1d4ede0 00:29:36.051 [2024-12-13 05:45:35.931945] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:36.051 [2024-12-13 05:45:35.931949] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:36.051 [2024-12-13 05:45:35.931952] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d4ede0) 00:29:36.051 [2024-12-13 05:45:35.931957] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.051 [2024-12-13 05:45:35.931966] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1daa3c0, cid 3, qid 0 00:29:36.051 [2024-12-13 05:45:35.932043] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:36.051 [2024-12-13 05:45:35.932049] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:36.051 [2024-12-13 05:45:35.932051] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:36.051 [2024-12-13 05:45:35.932055] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1daa3c0) on tqpair=0x1d4ede0 00:29:36.051 [2024-12-13 05:45:35.932063] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:36.051 [2024-12-13 05:45:35.932066] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:36.051 [2024-12-13 05:45:35.932070] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d4ede0) 00:29:36.052 [2024-12-13 05:45:35.932076] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.052 [2024-12-13 05:45:35.932085] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1daa3c0, cid 3, qid 0 00:29:36.052 [2024-12-13 05:45:35.932160] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:36.052 [2024-12-13 05:45:35.932165] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:36.052 [2024-12-13 05:45:35.932168] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:36.052 [2024-12-13 05:45:35.932171] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1daa3c0) on tqpair=0x1d4ede0 00:29:36.052 [2024-12-13 05:45:35.932179] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:36.052 [2024-12-13 05:45:35.932182] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:36.052 [2024-12-13 05:45:35.932185] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d4ede0) 00:29:36.052 [2024-12-13 05:45:35.932191] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.052 [2024-12-13 05:45:35.932200] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1daa3c0, cid 3, qid 0 00:29:36.052 [2024-12-13 05:45:35.932267] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:36.052 [2024-12-13 05:45:35.932272] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
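
The two discovery log entries printed above (the discovery subsystem itself plus nqn.2016-06.io.spdk:cnode1) are what a Linux initiator would retrieve with nvme-cli; a hypothetical companion check, not executed in this run and assuming nvme-cli with NVMe/TCP support:

  # Initiator side (hypothetical, outside this test): fetch the discovery log, then connect
  nvme discover -t tcp -a 10.0.0.2 -s 4420
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
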
00:29:36.052 [2024-12-13 05:45:35.932275] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:36.052 [2024-12-13 05:45:35.932278] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1daa3c0) on tqpair=0x1d4ede0 00:29:36.052 [2024-12-13 05:45:35.932287] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:36.052 [2024-12-13 05:45:35.932290] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:36.052 [2024-12-13 05:45:35.932293] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d4ede0) 00:29:36.052 [2024-12-13 05:45:35.932299] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.052 [2024-12-13 05:45:35.932308] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1daa3c0, cid 3, qid 0 00:29:36.052 [2024-12-13 05:45:35.932368] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:36.052 [2024-12-13 05:45:35.932374] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:36.052 [2024-12-13 05:45:35.932377] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:36.052 [2024-12-13 05:45:35.932380] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1daa3c0) on tqpair=0x1d4ede0 00:29:36.052 [2024-12-13 05:45:35.932387] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:36.052 [2024-12-13 05:45:35.932391] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:36.052 [2024-12-13 05:45:35.932394] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d4ede0) 00:29:36.052 [2024-12-13 05:45:35.932399] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.052 [2024-12-13 05:45:35.932408] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1daa3c0, cid 3, qid 0 00:29:36.052 [2024-12-13 05:45:35.932487] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:36.052 [2024-12-13 05:45:35.932493] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:36.052 [2024-12-13 05:45:35.932496] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:36.052 [2024-12-13 05:45:35.932499] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1daa3c0) on tqpair=0x1d4ede0 00:29:36.052 [2024-12-13 05:45:35.932507] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:36.052 [2024-12-13 05:45:35.932511] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:36.052 [2024-12-13 05:45:35.932514] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d4ede0) 00:29:36.052 [2024-12-13 05:45:35.932521] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.052 [2024-12-13 05:45:35.932541] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1daa3c0, cid 3, qid 0 00:29:36.052 [2024-12-13 05:45:35.932612] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:36.052 [2024-12-13 05:45:35.932618] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:36.052 [2024-12-13 05:45:35.932621] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:36.052 [2024-12-13 05:45:35.932624] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x1daa3c0) on tqpair=0x1d4ede0 00:29:36.052 [2024-12-13 05:45:35.932632] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:36.052 [2024-12-13 05:45:35.932635] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:36.052 [2024-12-13 05:45:35.932638] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d4ede0) 00:29:36.052 [2024-12-13 05:45:35.932644] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.052 [2024-12-13 05:45:35.932653] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1daa3c0, cid 3, qid 0 00:29:36.052 [2024-12-13 05:45:35.932712] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:36.052 [2024-12-13 05:45:35.932718] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:36.052 [2024-12-13 05:45:35.932721] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:36.052 [2024-12-13 05:45:35.932724] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1daa3c0) on tqpair=0x1d4ede0 00:29:36.052 [2024-12-13 05:45:35.932732] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:36.052 [2024-12-13 05:45:35.932736] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:36.052 [2024-12-13 05:45:35.932739] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d4ede0) 00:29:36.052 [2024-12-13 05:45:35.932744] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.052 [2024-12-13 05:45:35.932753] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1daa3c0, cid 3, qid 0 00:29:36.052 [2024-12-13 05:45:35.932810] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:36.052 [2024-12-13 05:45:35.932816] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:36.052 [2024-12-13 05:45:35.932819] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:36.052 [2024-12-13 05:45:35.932822] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1daa3c0) on tqpair=0x1d4ede0 00:29:36.052 [2024-12-13 05:45:35.932829] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:36.052 [2024-12-13 05:45:35.932833] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:36.052 [2024-12-13 05:45:35.932836] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d4ede0) 00:29:36.052 [2024-12-13 05:45:35.932841] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.052 [2024-12-13 05:45:35.932850] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1daa3c0, cid 3, qid 0 00:29:36.052 [2024-12-13 05:45:35.932909] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:36.052 [2024-12-13 05:45:35.932915] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:36.052 [2024-12-13 05:45:35.932918] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:36.052 [2024-12-13 05:45:35.932921] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1daa3c0) on tqpair=0x1d4ede0 00:29:36.052 [2024-12-13 05:45:35.932929] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:36.052 [2024-12-13 05:45:35.932932] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:36.052 [2024-12-13 05:45:35.932935] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d4ede0) 00:29:36.052 [2024-12-13 05:45:35.932940] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.052 [2024-12-13 05:45:35.932952] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1daa3c0, cid 3, qid 0 00:29:36.052 [2024-12-13 05:45:35.933008] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:36.052 [2024-12-13 05:45:35.933014] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:36.052 [2024-12-13 05:45:35.933017] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:36.052 [2024-12-13 05:45:35.933020] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1daa3c0) on tqpair=0x1d4ede0 00:29:36.052 [2024-12-13 05:45:35.933028] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:36.052 [2024-12-13 05:45:35.933031] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:36.052 [2024-12-13 05:45:35.933034] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d4ede0) 00:29:36.052 [2024-12-13 05:45:35.933039] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.052 [2024-12-13 05:45:35.933048] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1daa3c0, cid 3, qid 0 00:29:36.052 [2024-12-13 05:45:35.933107] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:36.052 [2024-12-13 05:45:35.933113] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:36.052 [2024-12-13 05:45:35.933116] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:36.052 [2024-12-13 05:45:35.933119] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1daa3c0) on tqpair=0x1d4ede0 00:29:36.052 [2024-12-13 05:45:35.933127] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:36.052 [2024-12-13 05:45:35.933130] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:36.052 [2024-12-13 05:45:35.933133] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d4ede0) 00:29:36.052 [2024-12-13 05:45:35.933138] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.052 [2024-12-13 05:45:35.933147] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1daa3c0, cid 3, qid 0 00:29:36.052 [2024-12-13 05:45:35.933209] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:36.052 [2024-12-13 05:45:35.933215] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:36.052 [2024-12-13 05:45:35.933218] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:36.052 [2024-12-13 05:45:35.933221] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1daa3c0) on tqpair=0x1d4ede0 00:29:36.052 [2024-12-13 05:45:35.933229] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:36.052 [2024-12-13 05:45:35.933232] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:36.052 [2024-12-13 05:45:35.933235] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d4ede0) 00:29:36.052 
[2024-12-13 05:45:35.933240] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.052 [2024-12-13 05:45:35.933249] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1daa3c0, cid 3, qid 0 00:29:36.052 [2024-12-13 05:45:35.933311] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:36.052 [2024-12-13 05:45:35.933316] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:36.052 [2024-12-13 05:45:35.933319] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:36.052 [2024-12-13 05:45:35.933323] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1daa3c0) on tqpair=0x1d4ede0 00:29:36.052 [2024-12-13 05:45:35.933330] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:36.052 [2024-12-13 05:45:35.933334] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:36.052 [2024-12-13 05:45:35.933337] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d4ede0) 00:29:36.052 [2024-12-13 05:45:35.933342] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.053 [2024-12-13 05:45:35.933353] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1daa3c0, cid 3, qid 0 00:29:36.053 [2024-12-13 05:45:35.933428] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:36.053 [2024-12-13 05:45:35.933433] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:36.053 [2024-12-13 05:45:35.933436] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:36.053 [2024-12-13 05:45:35.933439] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1daa3c0) on tqpair=0x1d4ede0 00:29:36.053 [2024-12-13 05:45:35.937453] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:36.053 [2024-12-13 05:45:35.937459] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:36.053 [2024-12-13 05:45:35.937463] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d4ede0) 00:29:36.053 [2024-12-13 05:45:35.937468] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.053 [2024-12-13 05:45:35.937479] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1daa3c0, cid 3, qid 0 00:29:36.053 [2024-12-13 05:45:35.937640] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:36.053 [2024-12-13 05:45:35.937646] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:36.053 [2024-12-13 05:45:35.937649] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:36.053 [2024-12-13 05:45:35.937652] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1daa3c0) on tqpair=0x1d4ede0 00:29:36.053 [2024-12-13 05:45:35.937659] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 6 milliseconds 00:29:36.053 00:29:36.053 05:45:35 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:29:36.053 [2024-12-13 05:45:35.974628] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:29:36.053 [2024-12-13 05:45:35.974677] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid450089 ] 00:29:36.053 [2024-12-13 05:45:36.011101] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:29:36.053 [2024-12-13 05:45:36.011141] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:29:36.053 [2024-12-13 05:45:36.011146] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:29:36.053 [2024-12-13 05:45:36.011157] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:29:36.053 [2024-12-13 05:45:36.011164] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:29:36.053 [2024-12-13 05:45:36.018592] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:29:36.053 [2024-12-13 05:45:36.018621] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x857de0 0 00:29:36.053 [2024-12-13 05:45:36.018782] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:29:36.053 [2024-12-13 05:45:36.018789] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:29:36.053 [2024-12-13 05:45:36.018793] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:29:36.053 [2024-12-13 05:45:36.018795] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:29:36.053 [2024-12-13 05:45:36.018812] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:36.053 [2024-12-13 05:45:36.018817] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:36.053 [2024-12-13 05:45:36.018820] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x857de0) 00:29:36.053 [2024-12-13 05:45:36.018831] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:29:36.053 [2024-12-13 05:45:36.018843] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8b2f40, cid 0, qid 0 00:29:36.053 [2024-12-13 05:45:36.026460] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:36.053 [2024-12-13 05:45:36.026469] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:36.053 [2024-12-13 05:45:36.026472] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:36.053 [2024-12-13 05:45:36.026476] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8b2f40) on tqpair=0x857de0 00:29:36.053 [2024-12-13 05:45:36.026484] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:29:36.053 [2024-12-13 05:45:36.026489] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:29:36.053 [2024-12-13 05:45:36.026494] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:29:36.053 [2024-12-13 05:45:36.026503] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:36.053 [2024-12-13 05:45:36.026506] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:36.053 [2024-12-13 05:45:36.026509] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x857de0) 00:29:36.053 [2024-12-13 05:45:36.026515] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.053 [2024-12-13 05:45:36.026528] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8b2f40, cid 0, qid 0 00:29:36.053 [2024-12-13 05:45:36.026682] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:36.053 [2024-12-13 05:45:36.026688] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:36.053 [2024-12-13 05:45:36.026691] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:36.053 [2024-12-13 05:45:36.026695] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8b2f40) on tqpair=0x857de0 00:29:36.053 [2024-12-13 05:45:36.026699] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:29:36.053 [2024-12-13 05:45:36.026705] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:29:36.053 [2024-12-13 05:45:36.026711] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:36.053 [2024-12-13 05:45:36.026715] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:36.053 [2024-12-13 05:45:36.026718] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x857de0) 00:29:36.053 [2024-12-13 05:45:36.026723] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.053 [2024-12-13 05:45:36.026732] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8b2f40, cid 0, qid 0 00:29:36.053 [2024-12-13 05:45:36.026793] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:36.053 [2024-12-13 05:45:36.026799] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:36.053 [2024-12-13 05:45:36.026801] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:36.053 [2024-12-13 05:45:36.026804] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8b2f40) on tqpair=0x857de0 00:29:36.053 [2024-12-13 05:45:36.026808] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:29:36.053 [2024-12-13 05:45:36.026815] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:29:36.053 [2024-12-13 05:45:36.026821] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:36.053 [2024-12-13 05:45:36.026824] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:36.053 [2024-12-13 05:45:36.026827] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x857de0) 00:29:36.053 [2024-12-13 05:45:36.026835] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.053 [2024-12-13 05:45:36.026844] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8b2f40, cid 0, qid 0 00:29:36.053 [2024-12-13 05:45:36.026904] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:36.053 [2024-12-13 05:45:36.026909] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:36.053 [2024-12-13 05:45:36.026913] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:36.053 [2024-12-13 05:45:36.026916] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8b2f40) on tqpair=0x857de0 00:29:36.053 [2024-12-13 05:45:36.026920] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:29:36.053 [2024-12-13 05:45:36.026927] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:36.053 [2024-12-13 05:45:36.026931] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:36.053 [2024-12-13 05:45:36.026934] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x857de0) 00:29:36.053 [2024-12-13 05:45:36.026939] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.053 [2024-12-13 05:45:36.026949] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8b2f40, cid 0, qid 0 00:29:36.053 [2024-12-13 05:45:36.027013] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:36.053 [2024-12-13 05:45:36.027018] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:36.053 [2024-12-13 05:45:36.027021] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:36.053 [2024-12-13 05:45:36.027024] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8b2f40) on tqpair=0x857de0 00:29:36.053 [2024-12-13 05:45:36.027028] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:29:36.053 [2024-12-13 05:45:36.027032] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:29:36.053 [2024-12-13 05:45:36.027039] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:29:36.053 [2024-12-13 05:45:36.027146] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:29:36.053 [2024-12-13 05:45:36.027150] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:29:36.053 [2024-12-13 05:45:36.027156] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:36.053 [2024-12-13 05:45:36.027160] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:36.053 [2024-12-13 05:45:36.027163] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x857de0) 00:29:36.053 [2024-12-13 05:45:36.027168] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.053 [2024-12-13 05:45:36.027178] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8b2f40, cid 0, qid 0 00:29:36.053 [2024-12-13 05:45:36.027239] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:36.053 [2024-12-13 05:45:36.027244] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:36.053 [2024-12-13 05:45:36.027247] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:36.053 [2024-12-13 05:45:36.027250] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8b2f40) on tqpair=0x857de0 00:29:36.053 [2024-12-13 
05:45:36.027254] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:29:36.053 [2024-12-13 05:45:36.027262] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:36.053 [2024-12-13 05:45:36.027265] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:36.053 [2024-12-13 05:45:36.027269] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x857de0) 00:29:36.053 [2024-12-13 05:45:36.027276] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.054 [2024-12-13 05:45:36.027285] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8b2f40, cid 0, qid 0 00:29:36.054 [2024-12-13 05:45:36.027345] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:36.054 [2024-12-13 05:45:36.027351] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:36.054 [2024-12-13 05:45:36.027354] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:36.054 [2024-12-13 05:45:36.027357] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8b2f40) on tqpair=0x857de0 00:29:36.054 [2024-12-13 05:45:36.027360] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:29:36.054 [2024-12-13 05:45:36.027364] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:29:36.054 [2024-12-13 05:45:36.027371] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:29:36.054 [2024-12-13 05:45:36.027380] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:29:36.054 [2024-12-13 05:45:36.027387] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:36.054 [2024-12-13 05:45:36.027391] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x857de0) 00:29:36.054 [2024-12-13 05:45:36.027396] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.054 [2024-12-13 05:45:36.027406] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8b2f40, cid 0, qid 0 00:29:36.054 [2024-12-13 05:45:36.027500] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:36.054 [2024-12-13 05:45:36.027506] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:36.054 [2024-12-13 05:45:36.027509] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:36.054 [2024-12-13 05:45:36.027512] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x857de0): datao=0, datal=4096, cccid=0 00:29:36.054 [2024-12-13 05:45:36.027516] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8b2f40) on tqpair(0x857de0): expected_datao=0, payload_size=4096 00:29:36.054 [2024-12-13 05:45:36.027520] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:36.054 [2024-12-13 05:45:36.027526] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:36.054 [2024-12-13 05:45:36.027529] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 
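
The records above walk the standard NVMe-oF enable sequence: icreq/icresp exchange, FABRIC CONNECT on the admin queue, VS and CAP property reads, CC.EN = 1, then polling until CSTS.RDY = 1, followed by the IDENTIFY controller command. This is the state machine that spdk_nvme_connect() drives internally against the same descriptor string the test passes to spdk_nvme_identify -r. A minimal host-side sketch using SPDK's public API; the program name and error handling here are illustrative, not taken from the test:

#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
    struct spdk_env_opts env_opts;
    struct spdk_nvme_transport_id trid = {};
    struct spdk_nvme_ctrlr *ctrlr;

    spdk_env_opts_init(&env_opts);
    env_opts.name = "identify_sketch"; /* illustrative app name */
    if (spdk_env_init(&env_opts) < 0) {
        return 1;
    }

    /* Same descriptor string the test passes via spdk_nvme_identify -r. */
    if (spdk_nvme_transport_id_parse(&trid,
        "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
        "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
        return 1;
    }

    /* Drives the sequence traced above: socket connect, icreq/icresp,
     * FABRIC CONNECT, VS/CAP property reads, CC.EN = 1, then waiting
     * for CSTS.RDY = 1. */
    ctrlr = spdk_nvme_connect(&trid, NULL, 0);
    if (ctrlr == NULL) {
        fprintf(stderr, "connect failed\n");
        return 1;
    }

    printf("connected to %s\n", trid.subnqn);
    spdk_nvme_detach(ctrlr);
    return 0;
}
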
00:29:36.054 [2024-12-13 05:45:36.027546] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:36.054 [2024-12-13 05:45:36.027551] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:36.054 [2024-12-13 05:45:36.027554] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:36.054 [2024-12-13 05:45:36.027557] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8b2f40) on tqpair=0x857de0 00:29:36.054 [2024-12-13 05:45:36.027563] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:29:36.054 [2024-12-13 05:45:36.027567] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:29:36.054 [2024-12-13 05:45:36.027571] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:29:36.054 [2024-12-13 05:45:36.027575] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:29:36.054 [2024-12-13 05:45:36.027579] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:29:36.054 [2024-12-13 05:45:36.027583] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:29:36.054 [2024-12-13 05:45:36.027594] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:29:36.054 [2024-12-13 05:45:36.027601] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:36.054 [2024-12-13 05:45:36.027605] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:36.054 [2024-12-13 05:45:36.027608] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x857de0) 00:29:36.054 [2024-12-13 05:45:36.027614] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:36.054 [2024-12-13 05:45:36.027625] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8b2f40, cid 0, qid 0 00:29:36.054 [2024-12-13 05:45:36.027691] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:36.054 [2024-12-13 05:45:36.027697] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:36.054 [2024-12-13 05:45:36.027700] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:36.054 [2024-12-13 05:45:36.027703] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8b2f40) on tqpair=0x857de0 00:29:36.054 [2024-12-13 05:45:36.027708] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:36.054 [2024-12-13 05:45:36.027711] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:36.054 [2024-12-13 05:45:36.027714] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x857de0) 00:29:36.054 [2024-12-13 05:45:36.027719] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:36.054 [2024-12-13 05:45:36.027724] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:36.054 [2024-12-13 05:45:36.027728] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:36.054 [2024-12-13 05:45:36.027731] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd 
cid=1 on tqpair(0x857de0) 00:29:36.054 [2024-12-13 05:45:36.027735] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:36.054 [2024-12-13 05:45:36.027740] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:36.054 [2024-12-13 05:45:36.027743] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:36.054 [2024-12-13 05:45:36.027746] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x857de0) 00:29:36.054 [2024-12-13 05:45:36.027751] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:36.054 [2024-12-13 05:45:36.027756] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:36.054 [2024-12-13 05:45:36.027759] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:36.054 [2024-12-13 05:45:36.027762] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x857de0) 00:29:36.054 [2024-12-13 05:45:36.027767] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:36.054 [2024-12-13 05:45:36.027771] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:29:36.054 [2024-12-13 05:45:36.027780] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:29:36.054 [2024-12-13 05:45:36.027786] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:36.054 [2024-12-13 05:45:36.027789] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x857de0) 00:29:36.054 [2024-12-13 05:45:36.027794] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.054 [2024-12-13 05:45:36.027805] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8b2f40, cid 0, qid 0 00:29:36.054 [2024-12-13 05:45:36.027809] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8b30c0, cid 1, qid 0 00:29:36.054 [2024-12-13 05:45:36.027813] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8b3240, cid 2, qid 0 00:29:36.054 [2024-12-13 05:45:36.027820] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8b33c0, cid 3, qid 0 00:29:36.054 [2024-12-13 05:45:36.027825] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8b3540, cid 4, qid 0 00:29:36.054 [2024-12-13 05:45:36.027921] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:36.054 [2024-12-13 05:45:36.027926] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:36.054 [2024-12-13 05:45:36.027929] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:36.054 [2024-12-13 05:45:36.027933] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8b3540) on tqpair=0x857de0 00:29:36.054 [2024-12-13 05:45:36.027937] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:29:36.054 [2024-12-13 05:45:36.027941] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 
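
The identify_done records earlier in this trace cached the transport limits (transport max_xfer_size, MDTS max_xfer_size 131072, CNTLID 0x0001, max_sges, fused compare and write), and the AER and keep-alive records above (Sending keep alive every 5000000 us) come from the same setup pass. A rough sketch of reading those cached fields back through the public API, assuming a ctrlr obtained as in the previous sketch; the helper name is made up:

#include <stdio.h>
#include "spdk/nvme.h"

/* Illustrative helper: assumes `ctrlr` came from spdk_nvme_connect(). */
static void print_ctrlr_limits(struct spdk_nvme_ctrlr *ctrlr)
{
    const struct spdk_nvme_ctrlr_data *cdata = spdk_nvme_ctrlr_get_data(ctrlr);

    printf("CNTLID:              0x%04x\n", cdata->cntlid);
    /* MDTS is a power-of-two multiplier of CAP.MPSMIN; the log's
     * "MDTS max_xfer_size 131072" is the already-resolved byte value. */
    printf("MDTS (raw field):    %u\n", (unsigned)cdata->mdts);
    printf("fused compare+write: %u\n",
           (unsigned)cdata->fuses.compare_and_write);
    printf("max xfer size:       %u bytes\n",
           spdk_nvme_ctrlr_get_max_xfer_size(ctrlr));
}
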
00:29:36.054 [2024-12-13 05:45:36.027950] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:29:36.054 [2024-12-13 05:45:36.027956] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:29:36.054 [2024-12-13 05:45:36.027961] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:36.054 [2024-12-13 05:45:36.027965] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:36.054 [2024-12-13 05:45:36.027967] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x857de0) 00:29:36.054 [2024-12-13 05:45:36.027973] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:36.054 [2024-12-13 05:45:36.027982] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8b3540, cid 4, qid 0 00:29:36.054 [2024-12-13 05:45:36.028047] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:36.054 [2024-12-13 05:45:36.028052] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:36.054 [2024-12-13 05:45:36.028055] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:36.054 [2024-12-13 05:45:36.028058] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8b3540) on tqpair=0x857de0 00:29:36.054 [2024-12-13 05:45:36.028106] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:29:36.054 [2024-12-13 05:45:36.028115] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:29:36.054 [2024-12-13 05:45:36.028121] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:36.054 [2024-12-13 05:45:36.028124] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x857de0) 00:29:36.055 [2024-12-13 05:45:36.028129] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.055 [2024-12-13 05:45:36.028138] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8b3540, cid 4, qid 0 00:29:36.055 [2024-12-13 05:45:36.028215] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:36.055 [2024-12-13 05:45:36.028221] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:36.055 [2024-12-13 05:45:36.028224] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:36.055 [2024-12-13 05:45:36.028227] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x857de0): datao=0, datal=4096, cccid=4 00:29:36.055 [2024-12-13 05:45:36.028231] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8b3540) on tqpair(0x857de0): expected_datao=0, payload_size=4096 00:29:36.055 [2024-12-13 05:45:36.028235] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:36.055 [2024-12-13 05:45:36.028240] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:36.055 [2024-12-13 05:45:36.028244] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:36.055 [2024-12-13 05:45:36.028257] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:36.055 [2024-12-13 05:45:36.028262] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:36.055 [2024-12-13 05:45:36.028265] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:36.055 [2024-12-13 05:45:36.028268] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8b3540) on tqpair=0x857de0 00:29:36.055 [2024-12-13 05:45:36.028278] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:29:36.055 [2024-12-13 05:45:36.028286] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:29:36.055 [2024-12-13 05:45:36.028294] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:29:36.055 [2024-12-13 05:45:36.028300] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:36.055 [2024-12-13 05:45:36.028303] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x857de0) 00:29:36.055 [2024-12-13 05:45:36.028308] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.055 [2024-12-13 05:45:36.028319] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8b3540, cid 4, qid 0 00:29:36.055 [2024-12-13 05:45:36.028428] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:36.055 [2024-12-13 05:45:36.028433] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:36.055 [2024-12-13 05:45:36.028436] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:36.055 [2024-12-13 05:45:36.028439] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x857de0): datao=0, datal=4096, cccid=4 00:29:36.055 [2024-12-13 05:45:36.028443] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8b3540) on tqpair(0x857de0): expected_datao=0, payload_size=4096 00:29:36.055 [2024-12-13 05:45:36.028446] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:36.055 [2024-12-13 05:45:36.028458] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:36.055 [2024-12-13 05:45:36.028461] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:36.055 [2024-12-13 05:45:36.028475] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:36.055 [2024-12-13 05:45:36.028480] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:36.055 [2024-12-13 05:45:36.028483] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:36.055 [2024-12-13 05:45:36.028486] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8b3540) on tqpair=0x857de0 00:29:36.055 [2024-12-13 05:45:36.028495] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:29:36.055 [2024-12-13 05:45:36.028504] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:29:36.055 [2024-12-13 05:45:36.028510] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:36.055 [2024-12-13 05:45:36.028513] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x857de0) 00:29:36.055 [2024-12-13 05:45:36.028518] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.055 [2024-12-13 05:45:36.028529] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8b3540, cid 4, qid 0 00:29:36.055 [2024-12-13 05:45:36.028600] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:36.055 [2024-12-13 05:45:36.028606] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:36.055 [2024-12-13 05:45:36.028609] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:36.055 [2024-12-13 05:45:36.028612] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x857de0): datao=0, datal=4096, cccid=4 00:29:36.055 [2024-12-13 05:45:36.028616] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8b3540) on tqpair(0x857de0): expected_datao=0, payload_size=4096 00:29:36.055 [2024-12-13 05:45:36.028621] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:36.055 [2024-12-13 05:45:36.028626] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:36.055 [2024-12-13 05:45:36.028630] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:36.055 [2024-12-13 05:45:36.028639] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:36.055 [2024-12-13 05:45:36.028644] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:36.055 [2024-12-13 05:45:36.028647] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:36.055 [2024-12-13 05:45:36.028650] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8b3540) on tqpair=0x857de0 00:29:36.055 [2024-12-13 05:45:36.028656] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:29:36.055 [2024-12-13 05:45:36.028663] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:29:36.055 [2024-12-13 05:45:36.028670] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:29:36.055 [2024-12-13 05:45:36.028675] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:29:36.055 [2024-12-13 05:45:36.028679] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:29:36.055 [2024-12-13 05:45:36.028684] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:29:36.055 [2024-12-13 05:45:36.028688] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:29:36.055 [2024-12-13 05:45:36.028692] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:29:36.055 [2024-12-13 05:45:36.028697] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:29:36.055 [2024-12-13 05:45:36.028710] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:36.055 [2024-12-13 05:45:36.028714] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x857de0) 00:29:36.055 
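
At this point the trace has reached "setting state to ready (no timeout)": the number of queues is set, active namespaces were identified (Namespace 1 was added), and the per-namespace IDENTIFY and id-descriptor reads are done. Once ready, the active namespace list can be walked; a short sketch under the same assumptions as above (connected ctrlr, illustrative helper name):

#include <inttypes.h>
#include <stdio.h>
#include "spdk/nvme.h"

/* Illustrative helper: walk the active namespace list built up by the
 * IDENTIFY sequence traced above. */
static void list_active_namespaces(struct spdk_nvme_ctrlr *ctrlr)
{
    for (uint32_t nsid = spdk_nvme_ctrlr_get_first_active_ns(ctrlr);
         nsid != 0;
         nsid = spdk_nvme_ctrlr_get_next_active_ns(ctrlr, nsid)) {
        struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(ctrlr, nsid);

        printf("nsid %" PRIu32 ": %" PRIu64 " bytes\n",
               nsid, spdk_nvme_ns_get_size(ns));
    }
}
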
[2024-12-13 05:45:36.028719] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.055 [2024-12-13 05:45:36.028724] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:36.055 [2024-12-13 05:45:36.028728] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:36.055 [2024-12-13 05:45:36.028730] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x857de0) 00:29:36.055 [2024-12-13 05:45:36.028735] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:29:36.055 [2024-12-13 05:45:36.028747] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8b3540, cid 4, qid 0 00:29:36.055 [2024-12-13 05:45:36.028752] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8b36c0, cid 5, qid 0 00:29:36.055 [2024-12-13 05:45:36.028830] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:36.055 [2024-12-13 05:45:36.028835] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:36.055 [2024-12-13 05:45:36.028838] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:36.055 [2024-12-13 05:45:36.028841] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8b3540) on tqpair=0x857de0 00:29:36.055 [2024-12-13 05:45:36.028846] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:36.055 [2024-12-13 05:45:36.028851] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:36.055 [2024-12-13 05:45:36.028854] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:36.055 [2024-12-13 05:45:36.028857] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8b36c0) on tqpair=0x857de0 00:29:36.055 [2024-12-13 05:45:36.028867] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:36.055 [2024-12-13 05:45:36.028871] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x857de0) 00:29:36.055 [2024-12-13 05:45:36.028876] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.055 [2024-12-13 05:45:36.028886] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8b36c0, cid 5, qid 0 00:29:36.055 [2024-12-13 05:45:36.028950] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:36.055 [2024-12-13 05:45:36.028955] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:36.055 [2024-12-13 05:45:36.028958] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:36.055 [2024-12-13 05:45:36.028962] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8b36c0) on tqpair=0x857de0 00:29:36.055 [2024-12-13 05:45:36.028969] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:36.055 [2024-12-13 05:45:36.028972] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x857de0) 00:29:36.055 [2024-12-13 05:45:36.028977] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.055 [2024-12-13 05:45:36.028987] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8b36c0, cid 5, qid 0 00:29:36.055 [2024-12-13 05:45:36.029049] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: 
*DEBUG*: pdu type = 5 00:29:36.055 [2024-12-13 05:45:36.029055] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:36.055 [2024-12-13 05:45:36.029058] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:36.055 [2024-12-13 05:45:36.029061] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8b36c0) on tqpair=0x857de0 00:29:36.055 [2024-12-13 05:45:36.029068] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:36.055 [2024-12-13 05:45:36.029071] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x857de0) 00:29:36.055 [2024-12-13 05:45:36.029077] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.055 [2024-12-13 05:45:36.029085] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8b36c0, cid 5, qid 0 00:29:36.055 [2024-12-13 05:45:36.029148] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:36.055 [2024-12-13 05:45:36.029154] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:36.055 [2024-12-13 05:45:36.029157] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:36.055 [2024-12-13 05:45:36.029160] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8b36c0) on tqpair=0x857de0 00:29:36.055 [2024-12-13 05:45:36.029171] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:36.056 [2024-12-13 05:45:36.029175] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x857de0) 00:29:36.056 [2024-12-13 05:45:36.029180] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.056 [2024-12-13 05:45:36.029186] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:36.056 [2024-12-13 05:45:36.029189] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x857de0) 00:29:36.056 [2024-12-13 05:45:36.029194] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.056 [2024-12-13 05:45:36.029200] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:36.056 [2024-12-13 05:45:36.029204] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x857de0) 00:29:36.056 [2024-12-13 05:45:36.029209] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.056 [2024-12-13 05:45:36.029216] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:36.056 [2024-12-13 05:45:36.029219] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x857de0) 00:29:36.056 [2024-12-13 05:45:36.029224] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.056 [2024-12-13 05:45:36.029235] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8b36c0, cid 5, qid 0 00:29:36.056 [2024-12-13 05:45:36.029239] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8b3540, cid 4, qid 0 00:29:36.056 [2024-12-13 05:45:36.029243] 
nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8b3840, cid 6, qid 0 00:29:36.056 [2024-12-13 05:45:36.029247] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8b39c0, cid 7, qid 0 00:29:36.056 [2024-12-13 05:45:36.029392] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:36.056 [2024-12-13 05:45:36.029397] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:36.056 [2024-12-13 05:45:36.029400] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:36.056 [2024-12-13 05:45:36.029403] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x857de0): datao=0, datal=8192, cccid=5 00:29:36.056 [2024-12-13 05:45:36.029407] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8b36c0) on tqpair(0x857de0): expected_datao=0, payload_size=8192 00:29:36.056 [2024-12-13 05:45:36.029410] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:36.056 [2024-12-13 05:45:36.029424] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:36.056 [2024-12-13 05:45:36.029427] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:36.056 [2024-12-13 05:45:36.029432] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:36.056 [2024-12-13 05:45:36.029437] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:36.056 [2024-12-13 05:45:36.029440] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:36.056 [2024-12-13 05:45:36.029443] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x857de0): datao=0, datal=512, cccid=4 00:29:36.056 [2024-12-13 05:45:36.029446] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8b3540) on tqpair(0x857de0): expected_datao=0, payload_size=512 00:29:36.056 [2024-12-13 05:45:36.029457] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:36.056 [2024-12-13 05:45:36.029462] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:36.056 [2024-12-13 05:45:36.029465] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:36.056 [2024-12-13 05:45:36.029470] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:36.056 [2024-12-13 05:45:36.029474] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:36.056 [2024-12-13 05:45:36.029477] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:36.056 [2024-12-13 05:45:36.029480] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x857de0): datao=0, datal=512, cccid=6 00:29:36.056 [2024-12-13 05:45:36.029484] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8b3840) on tqpair(0x857de0): expected_datao=0, payload_size=512 00:29:36.056 [2024-12-13 05:45:36.029488] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:36.056 [2024-12-13 05:45:36.029493] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:36.056 [2024-12-13 05:45:36.029496] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:36.056 [2024-12-13 05:45:36.029500] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:36.056 [2024-12-13 05:45:36.029505] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:36.056 [2024-12-13 05:45:36.029508] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:36.056 [2024-12-13 05:45:36.029511] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on 
tqpair(0x857de0): datao=0, datal=4096, cccid=7 00:29:36.056 [2024-12-13 05:45:36.029515] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8b39c0) on tqpair(0x857de0): expected_datao=0, payload_size=4096 [2024-12-13 05:45:36.029520] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:36.056 [2024-12-13 05:45:36.029525] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:36.056 [2024-12-13 05:45:36.029528] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:36.056 [2024-12-13 05:45:36.029535] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:36.056 [2024-12-13 05:45:36.029540] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:36.056 [2024-12-13 05:45:36.029543] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:36.056 [2024-12-13 05:45:36.029546] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8b36c0) on tqpair=0x857de0 00:29:36.056 [2024-12-13 05:45:36.029557] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:36.056 [2024-12-13 05:45:36.029562] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:36.056 [2024-12-13 05:45:36.029565] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:36.056 [2024-12-13 05:45:36.029569] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8b3540) on tqpair=0x857de0 00:29:36.056 [2024-12-13 05:45:36.029576] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:36.056 [2024-12-13 05:45:36.029581] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:36.056 [2024-12-13 05:45:36.029584] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:36.056 [2024-12-13 05:45:36.029587] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8b3840) on tqpair=0x857de0 00:29:36.056 [2024-12-13 05:45:36.029593] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:36.056 [2024-12-13 05:45:36.029598] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:36.056 [2024-12-13 05:45:36.029601] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:36.056 [2024-12-13 05:45:36.029604] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8b39c0) on tqpair=0x857de0
00:29:36.056 =====================================================
00:29:36.056 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:29:36.056 =====================================================
00:29:36.056 Controller Capabilities/Features
00:29:36.056 ================================
00:29:36.056 Vendor ID: 8086
00:29:36.056 Subsystem Vendor ID: 8086
00:29:36.056 Serial Number: SPDK00000000000001
00:29:36.056 Model Number: SPDK bdev Controller
00:29:36.056 Firmware Version: 25.01
00:29:36.056 Recommended Arb Burst: 6
00:29:36.056 IEEE OUI Identifier: e4 d2 5c
00:29:36.056 Multi-path I/O
00:29:36.056 May have multiple subsystem ports: Yes
00:29:36.056 May have multiple controllers: Yes
00:29:36.056 Associated with SR-IOV VF: No
00:29:36.056 Max Data Transfer Size: 131072
00:29:36.056 Max Number of Namespaces: 32
00:29:36.056 Max Number of I/O Queues: 127
00:29:36.056 NVMe Specification Version (VS): 1.3
00:29:36.056 NVMe Specification Version (Identify): 1.3
00:29:36.056 Maximum Queue Entries: 128
00:29:36.056 Contiguous Queues Required: Yes
00:29:36.056 Arbitration Mechanisms Supported
00:29:36.056 Weighted Round Robin: Not Supported
00:29:36.056 Vendor Specific: Not Supported
00:29:36.056 Reset Timeout: 15000 ms
00:29:36.056 Doorbell Stride: 4 bytes
00:29:36.056 NVM Subsystem Reset: Not Supported
00:29:36.056 Command Sets Supported
00:29:36.056 NVM Command Set: Supported
00:29:36.056 Boot Partition: Not Supported
00:29:36.056 Memory Page Size Minimum: 4096 bytes
00:29:36.056 Memory Page Size Maximum: 4096 bytes
00:29:36.056 Persistent Memory Region: Not Supported
00:29:36.056 Optional Asynchronous Events Supported
00:29:36.056 Namespace Attribute Notices: Supported
00:29:36.056 Firmware Activation Notices: Not Supported
00:29:36.056 ANA Change Notices: Not Supported
00:29:36.056 PLE Aggregate Log Change Notices: Not Supported
00:29:36.056 LBA Status Info Alert Notices: Not Supported
00:29:36.056 EGE Aggregate Log Change Notices: Not Supported
00:29:36.056 Normal NVM Subsystem Shutdown event: Not Supported
00:29:36.056 Zone Descriptor Change Notices: Not Supported
00:29:36.056 Discovery Log Change Notices: Not Supported
00:29:36.056 Controller Attributes
00:29:36.056 128-bit Host Identifier: Supported
00:29:36.056 Non-Operational Permissive Mode: Not Supported
00:29:36.056 NVM Sets: Not Supported
00:29:36.056 Read Recovery Levels: Not Supported
00:29:36.056 Endurance Groups: Not Supported
00:29:36.056 Predictable Latency Mode: Not Supported
00:29:36.056 Traffic Based Keep ALive: Not Supported
00:29:36.056 Namespace Granularity: Not Supported
00:29:36.056 SQ Associations: Not Supported
00:29:36.056 UUID List: Not Supported
00:29:36.056 Multi-Domain Subsystem: Not Supported
00:29:36.056 Fixed Capacity Management: Not Supported
00:29:36.056 Variable Capacity Management: Not Supported
00:29:36.056 Delete Endurance Group: Not Supported
00:29:36.056 Delete NVM Set: Not Supported
00:29:36.056 Extended LBA Formats Supported: Not Supported
00:29:36.056 Flexible Data Placement Supported: Not Supported
00:29:36.056
00:29:36.056 Controller Memory Buffer Support
00:29:36.056 ================================
00:29:36.056 Supported: No
00:29:36.056
00:29:36.056 Persistent Memory Region Support
00:29:36.056 ================================
00:29:36.056 Supported: No
00:29:36.056
00:29:36.056 Admin Command Set Attributes
00:29:36.056 ============================
00:29:36.056 Security Send/Receive: Not Supported
00:29:36.056 Format NVM: Not Supported
00:29:36.056 Firmware Activate/Download: Not Supported
00:29:36.056 Namespace Management: Not Supported
00:29:36.056 Device Self-Test: Not Supported
00:29:36.056 Directives: Not Supported
00:29:36.056 NVMe-MI: Not Supported
00:29:36.056 Virtualization Management: Not Supported
00:29:36.056 Doorbell Buffer Config: Not Supported
00:29:36.057 Get LBA Status Capability: Not Supported
00:29:36.057 Command & Feature Lockdown Capability: Not Supported
00:29:36.057 Abort Command Limit: 4
00:29:36.057 Async Event Request Limit: 4
00:29:36.057 Number of Firmware Slots: N/A
00:29:36.057 Firmware Slot 1 Read-Only: N/A
00:29:36.057 Firmware Activation Without Reset: N/A
00:29:36.057 Multiple Update Detection Support: N/A
00:29:36.057 Firmware Update Granularity: No Information Provided
00:29:36.057 Per-Namespace SMART Log: No
00:29:36.057 Asymmetric Namespace Access Log Page: Not Supported
00:29:36.057 Subsystem NQN: nqn.2016-06.io.spdk:cnode1
00:29:36.057 Command Effects Log Page: Supported
00:29:36.057 Get Log Page Extended Data: Supported
00:29:36.057 Telemetry Log Pages: Not Supported
00:29:36.057 Persistent Event Log Pages: Not Supported
00:29:36.057 Supported Log Pages Log Page: May Support
00:29:36.057 Commands Supported & Effects Log Page: Not Supported
00:29:36.057 Feature Identifiers & Effects Log Page:May Support
00:29:36.057 NVMe-MI Commands & Effects Log Page: May Support
00:29:36.057 Data Area 4 for Telemetry Log: Not Supported
00:29:36.057 Error Log Page Entries Supported: 128
00:29:36.057 Keep Alive: Supported
00:29:36.057 Keep Alive Granularity: 10000 ms
00:29:36.057
00:29:36.057 NVM Command Set Attributes
00:29:36.057 ==========================
00:29:36.057 Submission Queue Entry Size
00:29:36.057 Max: 64
00:29:36.057 Min: 64
00:29:36.057 Completion Queue Entry Size
00:29:36.057 Max: 16
00:29:36.057 Min: 16
00:29:36.057 Number of Namespaces: 32
00:29:36.057 Compare Command: Supported
00:29:36.057 Write Uncorrectable Command: Not Supported
00:29:36.057 Dataset Management Command: Supported
00:29:36.057 Write Zeroes Command: Supported
00:29:36.057 Set Features Save Field: Not Supported
00:29:36.057 Reservations: Supported
00:29:36.057 Timestamp: Not Supported
00:29:36.057 Copy: Supported
00:29:36.057 Volatile Write Cache: Present
00:29:36.057 Atomic Write Unit (Normal): 1
00:29:36.057 Atomic Write Unit (PFail): 1
00:29:36.057 Atomic Compare & Write Unit: 1
00:29:36.057 Fused Compare & Write: Supported
00:29:36.057 Scatter-Gather List
00:29:36.057 SGL Command Set: Supported
00:29:36.057 SGL Keyed: Supported
00:29:36.057 SGL Bit Bucket Descriptor: Not Supported
00:29:36.057 SGL Metadata Pointer: Not Supported
00:29:36.057 Oversized SGL: Not Supported
00:29:36.057 SGL Metadata Address: Not Supported
00:29:36.057 SGL Offset: Supported
00:29:36.057 Transport SGL Data Block: Not Supported
00:29:36.057 Replay Protected Memory Block: Not Supported
00:29:36.057
00:29:36.057 Firmware Slot Information
00:29:36.057 =========================
00:29:36.057 Active slot: 1
00:29:36.057 Slot 1 Firmware Revision: 25.01
00:29:36.057
00:29:36.057
00:29:36.057 Commands Supported and Effects
00:29:36.057 ==============================
00:29:36.057 Admin Commands
00:29:36.057 --------------
00:29:36.057 Get Log Page (02h): Supported
00:29:36.057 Identify (06h): Supported
00:29:36.057 Abort (08h): Supported
00:29:36.057 Set Features (09h): Supported
00:29:36.057 Get Features (0Ah): Supported
00:29:36.057 Asynchronous Event Request (0Ch): Supported
00:29:36.057 Keep Alive (18h): Supported
00:29:36.057 I/O Commands
00:29:36.057 ------------
00:29:36.057 Flush (00h): Supported LBA-Change
00:29:36.057 Write (01h): Supported LBA-Change
00:29:36.057 Read (02h): Supported
00:29:36.057 Compare (05h): Supported
00:29:36.057 Write Zeroes (08h): Supported LBA-Change
00:29:36.057 Dataset Management (09h): Supported LBA-Change
00:29:36.057 Copy (19h): Supported LBA-Change
00:29:36.057
00:29:36.057 Error Log
00:29:36.057 =========
00:29:36.057
00:29:36.057 Arbitration
00:29:36.057 ===========
00:29:36.057 Arbitration Burst: 1
00:29:36.057
00:29:36.057 Power Management
00:29:36.057 ================
00:29:36.057 Number of Power States: 1
00:29:36.057 Current Power State: Power State #0
00:29:36.057 Power State #0:
00:29:36.057 Max Power: 0.00 W
00:29:36.057 Non-Operational State: Operational
00:29:36.057 Entry Latency: Not Reported
00:29:36.057 Exit Latency: Not Reported
00:29:36.057 Relative Read Throughput: 0
00:29:36.057 Relative Read Latency: 0
00:29:36.057 Relative Write Throughput: 0
00:29:36.057 Relative Write Latency: 0
00:29:36.057 Idle Power: Not Reported
00:29:36.057 Active Power: Not Reported
00:29:36.057 Non-Operational Permissive Mode: Not Supported
00:29:36.057
00:29:36.057 Health Information
00:29:36.057 ==================
00:29:36.057 Critical Warnings:
00:29:36.057 Available Spare Space: OK
00:29:36.057 Temperature: OK
00:29:36.057 Device Reliability: OK
00:29:36.057 Read Only: No
00:29:36.057 Volatile Memory Backup: OK
00:29:36.057 Current Temperature: 0 Kelvin (-273 Celsius)
00:29:36.057 Temperature Threshold: 0 Kelvin (-273 Celsius)
00:29:36.057 Available Spare: 0%
00:29:36.057 Available Spare Threshold: 0%
00:29:36.057 Life Percentage Used:[2024-12-13 05:45:36.029681] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:29:36.057 [2024-12-13 05:45:36.029686] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x857de0) 00:29:36.057 [2024-12-13 05:45:36.029691] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.057 [2024-12-13 05:45:36.029703] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8b39c0, cid 7, qid 0 00:29:36.057 [2024-12-13 05:45:36.029784] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:36.057 [2024-12-13 05:45:36.029789] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:36.057 [2024-12-13 05:45:36.029792] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:36.057 [2024-12-13 05:45:36.029795] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8b39c0) on tqpair=0x857de0 00:29:36.057 [2024-12-13 05:45:36.029822] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:29:36.057 [2024-12-13 05:45:36.029831] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8b2f40) on tqpair=0x857de0 00:29:36.057 [2024-12-13 05:45:36.029836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.057 [2024-12-13 05:45:36.029840] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8b30c0) on tqpair=0x857de0 00:29:36.057 [2024-12-13 05:45:36.029844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.057 [2024-12-13 05:45:36.029849] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8b3240) on tqpair=0x857de0 00:29:36.057 [2024-12-13 05:45:36.029853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.057 [2024-12-13 05:45:36.029857] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8b33c0) on tqpair=0x857de0 00:29:36.057 [2024-12-13 05:45:36.029861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.057 [2024-12-13 05:45:36.029868] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:36.057 [2024-12-13 05:45:36.029872] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:36.057 [2024-12-13 05:45:36.029874] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x857de0) 00:29:36.057 [2024-12-13 05:45:36.029880] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.057 [2024-12-13 05:45:36.029892] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8b33c0, cid 3, qid 0 00:29:36.057 [2024-12-13
05:45:36.029957] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:36.057 [2024-12-13 05:45:36.029962] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:36.057 [2024-12-13 05:45:36.029965] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:36.057 [2024-12-13 05:45:36.029968] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8b33c0) on tqpair=0x857de0 00:29:36.057 [2024-12-13 05:45:36.029974] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:36.057 [2024-12-13 05:45:36.029977] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:36.057 [2024-12-13 05:45:36.029980] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x857de0) 00:29:36.057 [2024-12-13 05:45:36.029985] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.057 [2024-12-13 05:45:36.029997] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8b33c0, cid 3, qid 0 00:29:36.057 [2024-12-13 05:45:36.030073] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:36.057 [2024-12-13 05:45:36.030079] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:36.057 [2024-12-13 05:45:36.030082] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:36.057 [2024-12-13 05:45:36.030085] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8b33c0) on tqpair=0x857de0 00:29:36.057 [2024-12-13 05:45:36.030089] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:29:36.057 [2024-12-13 05:45:36.030093] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:29:36.057 [2024-12-13 05:45:36.030100] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:36.057 [2024-12-13 05:45:36.030104] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:36.057 [2024-12-13 05:45:36.030107] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x857de0) 00:29:36.057 [2024-12-13 05:45:36.030112] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.057 [2024-12-13 05:45:36.030121] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8b33c0, cid 3, qid 0 00:29:36.057 [2024-12-13 05:45:36.030184] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:36.057 [2024-12-13 05:45:36.030189] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:36.057 [2024-12-13 05:45:36.030192] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:36.057 [2024-12-13 05:45:36.030195] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8b33c0) on tqpair=0x857de0 00:29:36.057 [2024-12-13 05:45:36.030203] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:36.058 [2024-12-13 05:45:36.030207] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:36.058 [2024-12-13 05:45:36.030210] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x857de0) 00:29:36.058 [2024-12-13 05:45:36.030215] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.058 [2024-12-13 05:45:36.030224] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8b33c0, cid 3, qid 0 00:29:36.058 [2024-12-13 05:45:36.030284] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:36.058 [2024-12-13 05:45:36.030289] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:36.058 [2024-12-13 05:45:36.030292] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:36.058 [2024-12-13 05:45:36.030297] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8b33c0) on tqpair=0x857de0 00:29:36.058 [2024-12-13 05:45:36.030305] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:36.058 [2024-12-13 05:45:36.030308] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:36.058 [2024-12-13 05:45:36.030311] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x857de0) 00:29:36.058 [2024-12-13 05:45:36.030317] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.058 [2024-12-13 05:45:36.030326] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8b33c0, cid 3, qid 0 00:29:36.058 [2024-12-13 05:45:36.030384] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:36.058 [2024-12-13 05:45:36.030389] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:36.058 [2024-12-13 05:45:36.030392] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:36.058 [2024-12-13 05:45:36.030395] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8b33c0) on tqpair=0x857de0 00:29:36.058 [2024-12-13 05:45:36.030403] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:36.058 [2024-12-13 05:45:36.030406] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:36.058 [2024-12-13 05:45:36.030409] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x857de0) 00:29:36.058 [2024-12-13 05:45:36.030415] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.058 [2024-12-13 05:45:36.030423] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8b33c0, cid 3, qid 0 00:29:36.058 [2024-12-13 05:45:36.034457] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:36.058 [2024-12-13 05:45:36.034465] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:36.058 [2024-12-13 05:45:36.034468] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:36.058 [2024-12-13 05:45:36.034472] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8b33c0) on tqpair=0x857de0 00:29:36.058 [2024-12-13 05:45:36.034481] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:36.058 [2024-12-13 05:45:36.034484] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:36.058 [2024-12-13 05:45:36.034487] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x857de0) 00:29:36.058 [2024-12-13 05:45:36.034493] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.058 [2024-12-13 05:45:36.034503] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8b33c0, cid 3, qid 0 00:29:36.058 [2024-12-13 05:45:36.034654] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:36.058 [2024-12-13 
05:45:36.034660] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:36.058 [2024-12-13 05:45:36.034663] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:36.058 [2024-12-13 05:45:36.034666] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8b33c0) on tqpair=0x857de0 00:29:36.058 [2024-12-13 05:45:36.034672] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 4 milliseconds 00:29:36.058 0% 00:29:36.058 Data Units Read: 0 00:29:36.058 Data Units Written: 0 00:29:36.058 Host Read Commands: 0 00:29:36.058 Host Write Commands: 0 00:29:36.058 Controller Busy Time: 0 minutes 00:29:36.058 Power Cycles: 0 00:29:36.058 Power On Hours: 0 hours 00:29:36.058 Unsafe Shutdowns: 0 00:29:36.058 Unrecoverable Media Errors: 0 00:29:36.058 Lifetime Error Log Entries: 0 00:29:36.058 Warning Temperature Time: 0 minutes 00:29:36.058 Critical Temperature Time: 0 minutes 00:29:36.058 00:29:36.058 Number of Queues 00:29:36.058 ================ 00:29:36.058 Number of I/O Submission Queues: 127 00:29:36.058 Number of I/O Completion Queues: 127 00:29:36.058 00:29:36.058 Active Namespaces 00:29:36.058 ================= 00:29:36.058 Namespace ID:1 00:29:36.058 Error Recovery Timeout: Unlimited 00:29:36.058 Command Set Identifier: NVM (00h) 00:29:36.058 Deallocate: Supported 00:29:36.058 Deallocated/Unwritten Error: Not Supported 00:29:36.058 Deallocated Read Value: Unknown 00:29:36.058 Deallocate in Write Zeroes: Not Supported 00:29:36.058 Deallocated Guard Field: 0xFFFF 00:29:36.058 Flush: Supported 00:29:36.058 Reservation: Supported 00:29:36.058 Namespace Sharing Capabilities: Multiple Controllers 00:29:36.058 Size (in LBAs): 131072 (0GiB) 00:29:36.058 Capacity (in LBAs): 131072 (0GiB) 00:29:36.058 Utilization (in LBAs): 131072 (0GiB) 00:29:36.058 NGUID: ABCDEF0123456789ABCDEF0123456789 00:29:36.058 EUI64: ABCDEF0123456789 00:29:36.058 UUID: 13bf53aa-efa5-408a-8b1a-e830200b06d2 00:29:36.058 Thin Provisioning: Not Supported 00:29:36.058 Per-NS Atomic Units: Yes 00:29:36.058 Atomic Boundary Size (Normal): 0 00:29:36.058 Atomic Boundary Size (PFail): 0 00:29:36.058 Atomic Boundary Offset: 0 00:29:36.058 Maximum Single Source Range Length: 65535 00:29:36.058 Maximum Copy Length: 65535 00:29:36.058 Maximum Source Range Count: 1 00:29:36.058 NGUID/EUI64 Never Reused: No 00:29:36.058 Namespace Write Protected: No 00:29:36.058 Number of LBA Formats: 1 00:29:36.058 Current LBA Format: LBA Format #00 00:29:36.058 LBA Format #00: Data Size: 512 Metadata Size: 0 00:29:36.058 00:29:36.058 05:45:36 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:29:36.058 05:45:36 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:36.058 05:45:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.058 05:45:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:36.317 05:45:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.317 05:45:36 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:29:36.317 05:45:36 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:29:36.317 05:45:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:36.317 05:45:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:29:36.317 05:45:36 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:36.317 05:45:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:29:36.317 05:45:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:36.317 05:45:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:36.317 rmmod nvme_tcp 00:29:36.317 rmmod nvme_fabrics 00:29:36.317 rmmod nvme_keyring 00:29:36.317 05:45:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:36.317 05:45:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:29:36.317 05:45:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:29:36.317 05:45:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 449858 ']' 00:29:36.317 05:45:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 449858 00:29:36.317 05:45:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 449858 ']' 00:29:36.317 05:45:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 449858 00:29:36.317 05:45:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:29:36.317 05:45:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:36.317 05:45:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 449858 00:29:36.318 05:45:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:36.318 05:45:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:36.318 05:45:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 449858' 00:29:36.318 killing process with pid 449858 00:29:36.318 05:45:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 449858 00:29:36.318 05:45:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 449858 00:29:36.577 05:45:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:36.577 05:45:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:36.577 05:45:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:36.577 05:45:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:29:36.577 05:45:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:29:36.577 05:45:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:36.577 05:45:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:29:36.577 05:45:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:36.577 05:45:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:36.577 05:45:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:36.577 05:45:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:36.577 05:45:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:38.480 05:45:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:38.480 00:29:38.480 real 
0m9.220s 00:29:38.480 user 0m5.144s 00:29:38.480 sys 0m4.797s 00:29:38.480 05:45:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:38.480 05:45:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:38.480 ************************************ 00:29:38.480 END TEST nvmf_identify 00:29:38.480 ************************************ 00:29:38.480 05:45:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:29:38.480 05:45:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:38.480 05:45:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:38.480 05:45:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:38.739 ************************************ 00:29:38.739 START TEST nvmf_perf 00:29:38.739 ************************************ 00:29:38.739 05:45:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:29:38.739 * Looking for test storage... 00:29:38.739 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:38.739 05:45:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:38.739 05:45:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lcov --version 00:29:38.739 05:45:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:38.739 05:45:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:38.739 05:45:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:38.739 05:45:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:38.739 05:45:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:38.739 05:45:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:29:38.739 05:45:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:29:38.739 05:45:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:29:38.739 05:45:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:29:38.739 05:45:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:29:38.739 05:45:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:29:38.739 05:45:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:29:38.739 05:45:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:38.739 05:45:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:29:38.740 05:45:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:29:38.740 05:45:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:38.740 05:45:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:38.740 05:45:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:29:38.740 05:45:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:29:38.740 05:45:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:38.740 05:45:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:29:38.740 05:45:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:29:38.740 05:45:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:29:38.740 05:45:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:29:38.740 05:45:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:38.740 05:45:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:29:38.740 05:45:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:29:38.740 05:45:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:38.740 05:45:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:38.740 05:45:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:29:38.740 05:45:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:38.740 05:45:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:38.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:38.740 --rc genhtml_branch_coverage=1 00:29:38.740 --rc genhtml_function_coverage=1 00:29:38.740 --rc genhtml_legend=1 00:29:38.740 --rc geninfo_all_blocks=1 00:29:38.740 --rc geninfo_unexecuted_blocks=1 00:29:38.740 00:29:38.740 ' 00:29:38.740 05:45:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:38.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:38.740 --rc genhtml_branch_coverage=1 00:29:38.740 --rc genhtml_function_coverage=1 00:29:38.740 --rc genhtml_legend=1 00:29:38.740 --rc geninfo_all_blocks=1 00:29:38.740 --rc geninfo_unexecuted_blocks=1 00:29:38.740 00:29:38.740 ' 00:29:38.740 05:45:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:38.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:38.740 --rc genhtml_branch_coverage=1 00:29:38.740 --rc genhtml_function_coverage=1 00:29:38.740 --rc genhtml_legend=1 00:29:38.740 --rc geninfo_all_blocks=1 00:29:38.740 --rc geninfo_unexecuted_blocks=1 00:29:38.740 00:29:38.740 ' 00:29:38.740 05:45:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:38.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:38.740 --rc genhtml_branch_coverage=1 00:29:38.740 --rc genhtml_function_coverage=1 00:29:38.740 --rc genhtml_legend=1 00:29:38.740 --rc geninfo_all_blocks=1 00:29:38.740 --rc geninfo_unexecuted_blocks=1 00:29:38.740 00:29:38.740 ' 00:29:38.740 05:45:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:38.740 05:45:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:29:38.740 05:45:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:38.740 05:45:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:38.740 05:45:38 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:38.740 05:45:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:38.740 05:45:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:38.740 05:45:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:38.740 05:45:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:38.740 05:45:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:38.740 05:45:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:38.740 05:45:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:38.740 05:45:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:29:38.740 05:45:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:29:38.740 05:45:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:38.740 05:45:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:38.740 05:45:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:38.740 05:45:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:38.740 05:45:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:38.740 05:45:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:29:38.740 05:45:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:38.740 05:45:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:38.740 05:45:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:38.740 05:45:38 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:38.740 05:45:38 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:38.740 05:45:38 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:38.740 05:45:38 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:29:38.740 05:45:38 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:38.740 05:45:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:29:38.740 05:45:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:38.740 05:45:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:38.740 05:45:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:38.740 05:45:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:38.740 05:45:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:38.740 05:45:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:38.740 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:38.740 05:45:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:38.740 05:45:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:38.740 05:45:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:38.740 05:45:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:29:38.740 05:45:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:29:38.740 05:45:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:38.740 05:45:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:29:38.740 05:45:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:38.740 05:45:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:38.740 05:45:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:38.740 05:45:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:38.740 05:45:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:38.740 05:45:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:38.740 05:45:38 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:38.740 05:45:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:38.740 05:45:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:38.740 05:45:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:38.740 05:45:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:29:38.740 05:45:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:45.314 05:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:45.314 05:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:29:45.314 05:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:45.314 05:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:45.314 05:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:45.314 05:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:45.314 05:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:45.314 05:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:29:45.314 05:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:45.314 05:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:29:45.314 05:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:29:45.314 05:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:29:45.314 05:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:29:45.314 05:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:29:45.314 05:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:29:45.314 05:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:45.314 05:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:45.314 05:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:45.314 05:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:45.314 05:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:45.314 05:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:45.314 05:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:45.314 05:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:45.315 05:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:45.315 05:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:45.315 05:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:45.315 05:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:45.315 05:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:29:45.315 05:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:45.315 05:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:45.315 05:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:45.315 05:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:45.315 05:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:45.315 05:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:45.315 05:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:45.315 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:45.315 05:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:45.315 05:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:45.315 05:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:45.315 05:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:45.315 05:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:45.315 05:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:45.315 05:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:45.315 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:45.315 05:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:45.315 05:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:45.315 05:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:45.315 05:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:45.315 05:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:45.315 05:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:45.315 05:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:45.315 05:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:45.315 05:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:45.315 05:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:45.315 05:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:45.315 05:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:45.315 05:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:45.315 05:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:45.315 05:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:45.315 05:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:45.315 Found net devices under 0000:af:00.0: cvl_0_0 00:29:45.315 05:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:45.315 05:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:45.315 05:45:44 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:45.315 05:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:45.315 05:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:45.315 05:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:45.315 05:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:45.315 05:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:45.315 05:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:45.315 Found net devices under 0000:af:00.1: cvl_0_1 00:29:45.315 05:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:45.315 05:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:45.315 05:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:29:45.315 05:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:45.315 05:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:45.315 05:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:45.315 05:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:45.315 05:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:45.315 05:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:45.315 05:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:45.315 05:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:45.315 05:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:45.315 05:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:45.315 05:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:45.315 05:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:45.315 05:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:45.315 05:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:45.315 05:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:45.315 05:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:45.315 05:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:45.315 05:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:45.315 05:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:45.315 05:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:45.315 05:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:45.315 05:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:45.315 05:45:44 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:45.315 05:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:45.315 05:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:45.315 05:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:45.315 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:45.315 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.306 ms 00:29:45.315 00:29:45.315 --- 10.0.0.2 ping statistics --- 00:29:45.315 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:45.315 rtt min/avg/max/mdev = 0.306/0.306/0.306/0.000 ms 00:29:45.315 05:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:45.315 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:45.315 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.129 ms 00:29:45.315 00:29:45.315 --- 10.0.0.1 ping statistics --- 00:29:45.315 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:45.316 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:29:45.316 05:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:45.316 05:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:29:45.316 05:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:45.316 05:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:45.316 05:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:45.316 05:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:45.316 05:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:45.316 05:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:45.316 05:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:45.316 05:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:29:45.316 05:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:45.316 05:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:45.316 05:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:45.316 05:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=453558 00:29:45.316 05:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:45.316 05:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 453558 00:29:45.316 05:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 453558 ']' 00:29:45.316 05:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:45.316 05:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:45.316 05:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:29:45.316 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:45.316 05:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:45.316 05:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:45.316 [2024-12-13 05:45:44.668222] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:29:45.316 [2024-12-13 05:45:44.668271] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:45.316 [2024-12-13 05:45:44.745738] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:45.316 [2024-12-13 05:45:44.768944] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:45.316 [2024-12-13 05:45:44.768984] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:45.316 [2024-12-13 05:45:44.768991] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:45.316 [2024-12-13 05:45:44.768998] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:45.316 [2024-12-13 05:45:44.769003] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:45.316 [2024-12-13 05:45:44.770503] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:29:45.316 [2024-12-13 05:45:44.770613] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:29:45.316 [2024-12-13 05:45:44.770701] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:29:45.316 [2024-12-13 05:45:44.770702] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:29:45.316 05:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:45.316 05:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:29:45.316 05:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:45.316 05:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:45.316 05:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:45.316 05:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:45.316 05:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:29:45.316 05:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:29:48.605 05:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:29:48.605 05:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:29:48.605 05:45:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:5e:00.0 00:29:48.605 05:45:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:48.605 05:45:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 
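Between the bdev enumeration above (bdev_malloc_create 64 512 yields Malloc0, and the local NVMe drive at 0000:5e:00.0 is appended as Nvme0n1) and the transport setup traced below, host/perf.sh drives a short RPC sequence against the target. A minimal sketch of that sequence, assuming a running nvmf_tgt and the rpc.py path used in this workspace, is:

    # Target-side setup as performed by host/perf.sh in this run; the rpc.py
    # path, subsystem NQN, and the 10.0.0.2:4420 listener are taken from this
    # job's trace and are assumptions in any other environment.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc bdev_malloc_create 64 512                         # 64 MiB malloc bdev -> Malloc0
    $rpc nvmf_create_transport -t tcp -o                   # logs "*** TCP Transport Init ***"
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # ...spdk_nvme_perf runs against 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'...
    $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1  # teardown, as at the end of the test

The perf runs that follow then attach to the subsystem over TCP (and, for the first run, directly over PCIe at 0000:5e:00.0) and report the IOPS/latency tables seen below.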
00:29:48.605 05:45:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:5e:00.0 ']' 00:29:48.605 05:45:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:29:48.605 05:45:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:29:48.605 05:45:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:29:48.605 [2024-12-13 05:45:48.535549] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:48.605 05:45:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:48.864 05:45:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:29:48.864 05:45:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:49.123 05:45:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:29:49.123 05:45:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:29:49.382 05:45:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:49.382 [2024-12-13 05:45:49.343905] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:49.382 05:45:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:49.641 05:45:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:5e:00.0 ']' 00:29:49.641 05:45:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:29:49.641 05:45:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:29:49.641 05:45:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:29:51.018 Initializing NVMe Controllers 00:29:51.018 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54] 00:29:51.018 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0 00:29:51.018 Initialization complete. Launching workers. 
00:29:51.018 ======================================================== 00:29:51.018 Latency(us) 00:29:51.018 Device Information : IOPS MiB/s Average min max 00:29:51.018 PCIE (0000:5e:00.0) NSID 1 from core 0: 97995.17 382.79 326.01 29.69 7205.30 00:29:51.018 ======================================================== 00:29:51.018 Total : 97995.17 382.79 326.01 29.69 7205.30 00:29:51.018 00:29:51.018 05:45:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:52.397 Initializing NVMe Controllers 00:29:52.397 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:52.397 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:52.397 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:52.397 Initialization complete. Launching workers. 00:29:52.397 ======================================================== 00:29:52.397 Latency(us) 00:29:52.397 Device Information : IOPS MiB/s Average min max 00:29:52.397 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 80.00 0.31 12624.32 103.82 44686.43 00:29:52.397 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 50.00 0.20 20772.80 6988.95 49061.94 00:29:52.397 ======================================================== 00:29:52.397 Total : 130.00 0.51 15758.35 103.82 49061.94 00:29:52.397 00:29:52.397 05:45:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:53.774 Initializing NVMe Controllers 00:29:53.774 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:53.774 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:53.774 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:53.774 Initialization complete. Launching workers. 00:29:53.774 ======================================================== 00:29:53.774 Latency(us) 00:29:53.774 Device Information : IOPS MiB/s Average min max 00:29:53.774 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11231.00 43.87 2850.17 514.64 6217.38 00:29:53.774 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3801.00 14.85 8462.19 6432.68 15859.76 00:29:53.774 ======================================================== 00:29:53.774 Total : 15032.00 58.72 4269.23 514.64 15859.76 00:29:53.774 00:29:53.774 05:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:29:53.774 05:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:29:53.774 05:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:56.307 Initializing NVMe Controllers 00:29:56.307 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:56.307 Controller IO queue size 128, less than required. 00:29:56.307 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:29:56.307 Controller IO queue size 128, less than required. 00:29:56.307 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:56.307 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:56.307 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:56.307 Initialization complete. Launching workers. 00:29:56.307 ======================================================== 00:29:56.307 Latency(us) 00:29:56.307 Device Information : IOPS MiB/s Average min max 00:29:56.307 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1837.41 459.35 71085.53 46768.70 133710.80 00:29:56.307 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 581.47 145.37 222953.10 90994.16 338710.41 00:29:56.307 ======================================================== 00:29:56.307 Total : 2418.88 604.72 107592.76 46768.70 338710.41 00:29:56.307 00:29:56.307 05:45:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:29:56.566 No valid NVMe controllers or AIO or URING devices found 00:29:56.566 Initializing NVMe Controllers 00:29:56.566 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:56.566 Controller IO queue size 128, less than required. 00:29:56.566 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:56.566 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:29:56.566 Controller IO queue size 128, less than required. 00:29:56.566 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:56.566 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:29:56.566 WARNING: Some requested NVMe devices were skipped 00:29:56.566 05:45:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:29:59.855 Initializing NVMe Controllers 00:29:59.855 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:59.855 Controller IO queue size 128, less than required. 00:29:59.855 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:59.855 Controller IO queue size 128, less than required. 00:29:59.855 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:59.855 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:59.855 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:59.855 Initialization complete. Launching workers. 
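The -o 36964 run above exercises the odd-size path and is skipped by design: 36964 is not a multiple of the 512-byte sector size (36964 = 72 * 512 + 100), so perf removes both namespaces from the test and then reports that no valid controllers remain. A small sketch of deriving an aligned size from an arbitrary one, assuming the 512 B sector size shown in the warnings:

  io_size=36964 sector=512
  aligned=$(( io_size / sector * sector ))   # integer division rounds down: 36864
  echo "$aligned"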
00:29:59.855 00:29:59.855 ==================== 00:29:59.855 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:29:59.855 TCP transport: 00:29:59.855 polls: 31101 00:29:59.855 idle_polls: 27134 00:29:59.855 sock_completions: 3967 00:29:59.855 nvme_completions: 6317 00:29:59.855 submitted_requests: 9366 00:29:59.855 queued_requests: 1 00:29:59.855 00:29:59.855 ==================== 00:29:59.855 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:29:59.855 TCP transport: 00:29:59.855 polls: 16099 00:29:59.855 idle_polls: 11761 00:29:59.855 sock_completions: 4338 00:29:59.855 nvme_completions: 7009 00:29:59.855 submitted_requests: 10454 00:29:59.855 queued_requests: 1 00:29:59.855 ======================================================== 00:29:59.855 Latency(us) 00:29:59.855 Device Information : IOPS MiB/s Average min max 00:29:59.855 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1578.75 394.69 83370.32 62780.99 153900.46 00:29:59.855 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1751.72 437.93 73268.77 40541.58 103151.45 00:29:59.855 ======================================================== 00:29:59.855 Total : 3330.48 832.62 78057.22 40541.58 153900.46 00:29:59.855 00:29:59.855 05:45:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:29:59.855 05:45:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:59.855 05:45:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:29:59.855 05:45:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:5e:00.0 ']' 00:29:59.855 05:45:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:30:03.143 05:46:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # ls_guid=3b3fedfe-7587-49a7-b87e-415e175a1342 00:30:03.143 05:46:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 3b3fedfe-7587-49a7-b87e-415e175a1342 00:30:03.143 05:46:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local lvs_uuid=3b3fedfe-7587-49a7-b87e-415e175a1342 00:30:03.143 05:46:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local lvs_info 00:30:03.143 05:46:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # local fc 00:30:03.143 05:46:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # local cs 00:30:03.143 05:46:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:03.143 05:46:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:30:03.143 { 00:30:03.143 "uuid": "3b3fedfe-7587-49a7-b87e-415e175a1342", 00:30:03.143 "name": "lvs_0", 00:30:03.143 "base_bdev": "Nvme0n1", 00:30:03.143 "total_data_clusters": 238234, 00:30:03.143 "free_clusters": 238234, 00:30:03.143 "block_size": 512, 00:30:03.143 "cluster_size": 4194304 00:30:03.143 } 00:30:03.143 ]' 00:30:03.143 05:46:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="3b3fedfe-7587-49a7-b87e-415e175a1342") .free_clusters' 00:30:03.143 05:46:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # fc=238234 00:30:03.143 05:46:02 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="3b3fedfe-7587-49a7-b87e-415e175a1342") .cluster_size' 00:30:03.143 05:46:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # cs=4194304 00:30:03.143 05:46:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1377 -- # free_mb=952936 00:30:03.143 05:46:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1378 -- # echo 952936 00:30:03.143 952936 00:30:03.143 05:46:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']' 00:30:03.143 05:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:30:03.143 05:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 3b3fedfe-7587-49a7-b87e-415e175a1342 lbd_0 20480 00:30:03.402 05:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # lb_guid=dbfa327b-e926-4aea-961f-57ec9cc75f0d 00:30:03.402 05:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore dbfa327b-e926-4aea-961f-57ec9cc75f0d lvs_n_0 00:30:04.338 05:46:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=af406263-0ccc-4a80-a878-f45c8df4e4a4 00:30:04.338 05:46:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb af406263-0ccc-4a80-a878-f45c8df4e4a4 00:30:04.338 05:46:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local lvs_uuid=af406263-0ccc-4a80-a878-f45c8df4e4a4 00:30:04.338 05:46:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local lvs_info 00:30:04.338 05:46:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # local fc 00:30:04.338 05:46:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # local cs 00:30:04.338 05:46:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:04.338 05:46:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:30:04.338 { 00:30:04.338 "uuid": "3b3fedfe-7587-49a7-b87e-415e175a1342", 00:30:04.338 "name": "lvs_0", 00:30:04.338 "base_bdev": "Nvme0n1", 00:30:04.338 "total_data_clusters": 238234, 00:30:04.338 "free_clusters": 233114, 00:30:04.338 "block_size": 512, 00:30:04.338 "cluster_size": 4194304 00:30:04.338 }, 00:30:04.338 { 00:30:04.338 "uuid": "af406263-0ccc-4a80-a878-f45c8df4e4a4", 00:30:04.338 "name": "lvs_n_0", 00:30:04.338 "base_bdev": "dbfa327b-e926-4aea-961f-57ec9cc75f0d", 00:30:04.338 "total_data_clusters": 5114, 00:30:04.338 "free_clusters": 5114, 00:30:04.338 "block_size": 512, 00:30:04.338 "cluster_size": 4194304 00:30:04.338 } 00:30:04.338 ]' 00:30:04.338 05:46:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="af406263-0ccc-4a80-a878-f45c8df4e4a4") .free_clusters' 00:30:04.338 05:46:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # fc=5114 00:30:04.338 05:46:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="af406263-0ccc-4a80-a878-f45c8df4e4a4") .cluster_size' 00:30:04.338 05:46:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # cs=4194304 00:30:04.338 05:46:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1377 -- # free_mb=20456 00:30:04.338 05:46:04 nvmf_tcp.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@1378 -- # echo 20456 00:30:04.338 20456 00:30:04.338 05:46:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:30:04.338 05:46:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u af406263-0ccc-4a80-a878-f45c8df4e4a4 lbd_nest_0 20456 00:30:04.597 05:46:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=fa4d3b05-0e7a-47bb-9f0d-1ca73ab7949d 00:30:04.597 05:46:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:04.856 05:46:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:30:04.856 05:46:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 fa4d3b05-0e7a-47bb-9f0d-1ca73ab7949d 00:30:05.114 05:46:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:05.373 05:46:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:30:05.373 05:46:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:30:05.373 05:46:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:30:05.373 05:46:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:05.373 05:46:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:17.575 Initializing NVMe Controllers 00:30:17.575 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:17.575 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:17.575 Initialization complete. Launching workers. 00:30:17.575 ======================================================== 00:30:17.575 Latency(us) 00:30:17.575 Device Information : IOPS MiB/s Average min max 00:30:17.575 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 43.59 0.02 22942.55 123.20 45768.00 00:30:17.575 ======================================================== 00:30:17.575 Total : 43.59 0.02 22942.55 123.20 45768.00 00:30:17.575 00:30:17.575 05:46:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:17.575 05:46:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:27.550 Initializing NVMe Controllers 00:30:27.550 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:27.550 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:27.550 Initialization complete. Launching workers. 
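The free-space figures computed above follow directly from the cluster counts: get_lvs_free_mb multiplies free 4 MiB clusters, so lvs_0 with 238234 free clusters yields 238234 * 4 = 952936 MiB (capped to 20480 MiB for lbd_0), while the nested lvs_n_0 with 5114 free clusters yields 5114 * 4 = 20456 MiB, which is why lbd_nest_0 is created at 20456 MiB, just under the cap. As arithmetic:

  # free MiB = free_clusters * cluster_size_bytes / 1 MiB
  echo $(( 238234 * 4194304 / 1048576 ))   # 952936
  echo $((   5114 * 4194304 / 1048576 ))   # 20456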
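The qd_depth and io_size arrays traced above set up the 3 x 2 sweep that produces the next six runs, one per (queue depth, IO size) pair against the nested lvol namespace; the first of them (q=1, o=512) starts here. The loop, condensed from the trace with paths shortened:

  qd_depth=("1" "32" "128")
  io_size=("512" "131072")
  for qd in "${qd_depth[@]}"; do
    for o in "${io_size[@]}"; do
      build/bin/spdk_nvme_perf -q "$qd" -o "$o" -w randrw -M 50 -t 10 \
          -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
    done
  done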
00:30:27.550 ======================================================== 00:30:27.550 Latency(us) 00:30:27.550 Device Information : IOPS MiB/s Average min max 00:30:27.550 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 70.79 8.85 14137.09 7045.69 55865.06 00:30:27.550 ======================================================== 00:30:27.550 Total : 70.79 8.85 14137.09 7045.69 55865.06 00:30:27.550 00:30:27.550 05:46:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:30:27.550 05:46:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:27.551 05:46:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:37.525 Initializing NVMe Controllers 00:30:37.526 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:37.526 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:37.526 Initialization complete. Launching workers. 00:30:37.526 ======================================================== 00:30:37.526 Latency(us) 00:30:37.526 Device Information : IOPS MiB/s Average min max 00:30:37.526 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8592.72 4.20 3724.38 227.75 7846.24 00:30:37.526 ======================================================== 00:30:37.526 Total : 8592.72 4.20 3724.38 227.75 7846.24 00:30:37.526 00:30:37.526 05:46:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:37.526 05:46:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:47.507 Initializing NVMe Controllers 00:30:47.507 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:47.507 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:47.507 Initialization complete. Launching workers. 00:30:47.507 ======================================================== 00:30:47.507 Latency(us) 00:30:47.507 Device Information : IOPS MiB/s Average min max 00:30:47.507 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4402.10 550.26 7271.73 566.10 19458.81 00:30:47.507 ======================================================== 00:30:47.507 Total : 4402.10 550.26 7271.73 566.10 19458.81 00:30:47.507 00:30:47.507 05:46:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:30:47.507 05:46:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:47.507 05:46:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:57.483 Initializing NVMe Controllers 00:30:57.483 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:57.483 Controller IO queue size 128, less than required. 00:30:57.483 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
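A quick consistency check on these tables: MiB/s equals IOPS times IO size, so the 131072-byte rows divide IOPS by 8 (131072 B is 1/8 MiB; 4402.10 IOPS -> 550.26 MiB/s, 70.79 IOPS -> 8.85 MiB/s) and the 512-byte rows divide IOPS by 2048 (8592.72 IOPS -> 4.20 MiB/s).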
00:30:57.483 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:57.483 Initialization complete. Launching workers. 00:30:57.483 ======================================================== 00:30:57.483 Latency(us) 00:30:57.483 Device Information : IOPS MiB/s Average min max 00:30:57.483 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15857.80 7.74 8075.38 1309.26 47788.45 00:30:57.483 ======================================================== 00:30:57.483 Total : 15857.80 7.74 8075.38 1309.26 47788.45 00:30:57.483 00:30:57.483 05:46:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:57.483 05:46:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:07.459 Initializing NVMe Controllers 00:31:07.459 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:07.459 Controller IO queue size 128, less than required. 00:31:07.459 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:07.459 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:07.459 Initialization complete. Launching workers. 00:31:07.459 ======================================================== 00:31:07.459 Latency(us) 00:31:07.459 Device Information : IOPS MiB/s Average min max 00:31:07.459 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1206.19 150.77 106927.02 9280.50 215034.18 00:31:07.459 ======================================================== 00:31:07.459 Total : 1206.19 150.77 106927.02 9280.50 215034.18 00:31:07.459 00:31:07.459 05:47:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:07.459 05:47:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete fa4d3b05-0e7a-47bb-9f0d-1ca73ab7949d 00:31:08.025 05:47:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:31:08.283 05:47:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete dbfa327b-e926-4aea-961f-57ec9cc75f0d 00:31:08.542 05:47:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:31:08.800 05:47:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:31:08.800 05:47:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:31:08.800 05:47:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:08.800 05:47:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:31:08.800 05:47:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:08.800 05:47:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:31:08.800 05:47:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:08.800 05:47:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:08.800 rmmod nvme_tcp 
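Cleanup above runs in reverse order of creation, since each layer claims the bdev beneath it: delete the subsystem first, then the nested volume and its store, then the base volume and its store. Condensed, with the UUIDs as reported earlier in this log:

  scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  scripts/rpc.py bdev_lvol_delete fa4d3b05-0e7a-47bb-9f0d-1ca73ab7949d   # lbd_nest_0
  scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0
  scripts/rpc.py bdev_lvol_delete dbfa327b-e926-4aea-961f-57ec9cc75f0d   # lbd_0
  scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0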
00:31:08.800 rmmod nvme_fabrics 00:31:08.800 rmmod nvme_keyring 00:31:08.800 05:47:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:08.800 05:47:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:31:08.800 05:47:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:31:08.800 05:47:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 453558 ']' 00:31:08.800 05:47:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 453558 00:31:08.800 05:47:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 453558 ']' 00:31:08.800 05:47:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 453558 00:31:08.800 05:47:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:31:08.800 05:47:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:08.800 05:47:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 453558 00:31:09.059 05:47:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:09.059 05:47:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:09.059 05:47:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 453558' 00:31:09.059 killing process with pid 453558 00:31:09.059 05:47:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 453558 00:31:09.059 05:47:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 453558 00:31:10.439 05:47:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:10.439 05:47:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:10.439 05:47:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:10.439 05:47:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:31:10.439 05:47:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:31:10.439 05:47:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:10.439 05:47:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:31:10.439 05:47:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:10.439 05:47:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:10.439 05:47:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:10.439 05:47:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:10.440 05:47:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:12.976 05:47:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:12.976 00:31:12.976 real 1m33.896s 00:31:12.976 user 5m34.961s 00:31:12.976 sys 0m17.058s 00:31:12.976 05:47:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:12.976 05:47:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:31:12.976 ************************************ 00:31:12.976 END TEST nvmf_perf 00:31:12.976 ************************************ 00:31:12.976 05:47:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:31:12.977 05:47:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:31:12.977 05:47:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:12.977 05:47:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:12.977 ************************************ 00:31:12.977 START TEST nvmf_fio_host 00:31:12.977 ************************************ 00:31:12.977 05:47:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:31:12.977 * Looking for test storage... 00:31:12.977 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:12.977 05:47:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:12.977 05:47:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lcov --version 00:31:12.977 05:47:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:12.977 05:47:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:12.977 05:47:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:12.977 05:47:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:12.977 05:47:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:12.977 05:47:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:31:12.977 05:47:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:31:12.977 05:47:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:31:12.977 05:47:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:31:12.977 05:47:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:31:12.977 05:47:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:31:12.977 05:47:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:31:12.977 05:47:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:12.977 05:47:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:31:12.977 05:47:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:31:12.977 05:47:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:12.977 05:47:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:12.977 05:47:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:31:12.977 05:47:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:31:12.977 05:47:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:12.977 05:47:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:31:12.977 05:47:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:31:12.977 05:47:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:31:12.977 05:47:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:31:12.977 05:47:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:12.977 05:47:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:31:12.977 05:47:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:31:12.977 05:47:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:12.977 05:47:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:12.977 05:47:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:31:12.977 05:47:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:12.977 05:47:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:12.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:12.977 --rc genhtml_branch_coverage=1 00:31:12.977 --rc genhtml_function_coverage=1 00:31:12.977 --rc genhtml_legend=1 00:31:12.977 --rc geninfo_all_blocks=1 00:31:12.977 --rc geninfo_unexecuted_blocks=1 00:31:12.977 00:31:12.977 ' 00:31:12.977 05:47:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:12.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:12.977 --rc genhtml_branch_coverage=1 00:31:12.977 --rc genhtml_function_coverage=1 00:31:12.977 --rc genhtml_legend=1 00:31:12.977 --rc geninfo_all_blocks=1 00:31:12.977 --rc geninfo_unexecuted_blocks=1 00:31:12.977 00:31:12.977 ' 00:31:12.977 05:47:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:12.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:12.977 --rc genhtml_branch_coverage=1 00:31:12.977 --rc genhtml_function_coverage=1 00:31:12.977 --rc genhtml_legend=1 00:31:12.977 --rc geninfo_all_blocks=1 00:31:12.977 --rc geninfo_unexecuted_blocks=1 00:31:12.977 00:31:12.977 ' 00:31:12.977 05:47:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:12.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:12.977 --rc genhtml_branch_coverage=1 00:31:12.977 --rc genhtml_function_coverage=1 00:31:12.977 --rc genhtml_legend=1 00:31:12.977 --rc geninfo_all_blocks=1 00:31:12.977 --rc geninfo_unexecuted_blocks=1 00:31:12.977 00:31:12.977 ' 00:31:12.977 05:47:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:12.977 05:47:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:31:12.977 05:47:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:12.977 05:47:12 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:12.977 05:47:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:12.977 05:47:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:12.977 05:47:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:12.977 05:47:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:12.977 05:47:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:31:12.977 05:47:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:12.977 05:47:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:12.977 05:47:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:31:12.977 05:47:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:12.977 05:47:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:12.977 05:47:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:31:12.977 05:47:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:12.977 05:47:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:12.977 05:47:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:12.977 05:47:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:12.977 05:47:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:12.977 05:47:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:12.977 05:47:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:12.977 05:47:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:31:12.977 05:47:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:31:12.977 05:47:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:12.977 05:47:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:12.977 05:47:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:12.977 05:47:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:12.977 05:47:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:12.977 05:47:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:31:12.977 05:47:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:12.977 05:47:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:12.977 05:47:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:12.977 05:47:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:12.978 05:47:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:12.978 05:47:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:12.978 05:47:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:31:12.978 05:47:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:12.978 05:47:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:31:12.978 05:47:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:12.978 05:47:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:12.978 05:47:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:12.978 05:47:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:12.978 05:47:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:12.978 05:47:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:12.978 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:12.978 05:47:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:12.978 05:47:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:12.978 05:47:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:12.978 05:47:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:12.978 
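The "[: : integer expression expected" complaint above is a real but benign shell error from test/nvmf/common.sh line 33: the traced test expands to '[' '' -eq 1 ']' because the variable under test is unset, and -eq requires integer operands. The usual fix pattern gives numeric flags a default before testing; FLAG below is a stand-in, since the log does not show which variable common.sh reads there:

  if [ "${FLAG:-0}" -eq 1 ]; then
    echo "flag enabled"
  fi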
05:47:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:31:12.978 05:47:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:12.978 05:47:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:12.978 05:47:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:12.978 05:47:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:12.978 05:47:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:12.978 05:47:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:12.978 05:47:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:12.978 05:47:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:12.978 05:47:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:12.978 05:47:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:12.978 05:47:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:31:12.978 05:47:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:19.550 05:47:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:19.550 05:47:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:31:19.550 05:47:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:19.550 05:47:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:19.550 05:47:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:19.550 05:47:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:19.550 05:47:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:19.550 05:47:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:31:19.550 05:47:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:19.550 05:47:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:31:19.550 05:47:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:31:19.550 05:47:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:31:19.550 05:47:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:31:19.550 05:47:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:31:19.550 05:47:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:31:19.550 05:47:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:19.550 05:47:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:19.550 05:47:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:19.550 05:47:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:19.550 05:47:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:19.550 05:47:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:19.550 05:47:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:19.550 05:47:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:19.550 05:47:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:19.550 05:47:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:19.550 05:47:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:19.550 05:47:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:19.550 05:47:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:19.550 05:47:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:19.550 05:47:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:19.550 05:47:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:19.550 05:47:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:19.550 05:47:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:19.550 05:47:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:19.550 05:47:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:31:19.550 Found 0000:af:00.0 (0x8086 - 0x159b) 00:31:19.550 05:47:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:19.550 05:47:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:19.550 05:47:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:19.550 05:47:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:19.550 05:47:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:19.550 05:47:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:19.550 05:47:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:31:19.550 Found 0000:af:00.1 (0x8086 - 0x159b) 00:31:19.550 05:47:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:19.550 05:47:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:19.550 05:47:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:19.550 05:47:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:19.550 05:47:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:19.550 05:47:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:19.550 05:47:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:19.550 05:47:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:19.550 05:47:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:19.550 05:47:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:19.550 05:47:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:19.550 05:47:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:19.550 05:47:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:19.550 05:47:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:19.550 05:47:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:19.550 05:47:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:31:19.550 Found net devices under 0000:af:00.0: cvl_0_0 00:31:19.550 05:47:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:19.550 05:47:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:19.550 05:47:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:19.550 05:47:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:19.550 05:47:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:19.550 05:47:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:19.550 05:47:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:19.550 05:47:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:19.550 05:47:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:31:19.550 Found net devices under 0000:af:00.1: cvl_0_1 00:31:19.550 05:47:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:19.550 05:47:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:19.550 05:47:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:31:19.550 05:47:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:19.550 05:47:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:19.550 05:47:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:19.550 05:47:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:19.550 05:47:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:19.550 05:47:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:19.550 05:47:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:19.550 05:47:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:19.550 05:47:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:19.550 05:47:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:19.550 05:47:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:19.550 05:47:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:19.550 05:47:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:19.550 05:47:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:19.550 05:47:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:19.550 05:47:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:19.550 05:47:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:19.550 05:47:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:19.550 05:47:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:19.550 05:47:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:19.551 05:47:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:19.551 05:47:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:19.551 05:47:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:19.551 05:47:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:19.551 05:47:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:19.551 05:47:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:19.551 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:19.551 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.306 ms 00:31:19.551 00:31:19.551 --- 10.0.0.2 ping statistics --- 00:31:19.551 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:19.551 rtt min/avg/max/mdev = 0.306/0.306/0.306/0.000 ms 00:31:19.551 05:47:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:19.551 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
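nvmf_tcp_init above wires up the point-to-point test topology: the second E810 port (cvl_0_0) moves into a private network namespace to play the target at 10.0.0.2, its sibling cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, an iptables rule opens the NVMe/TCP port, and the two pings prove reachability in each direction. Condensed from the trace:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT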
00:31:19.551 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.136 ms 00:31:19.551 00:31:19.551 --- 10.0.0.1 ping statistics --- 00:31:19.551 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:19.551 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:31:19.551 05:47:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:19.551 05:47:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:31:19.551 05:47:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:19.551 05:47:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:19.551 05:47:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:19.551 05:47:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:19.551 05:47:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:19.551 05:47:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:19.551 05:47:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:19.551 05:47:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:31:19.551 05:47:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:31:19.551 05:47:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:19.551 05:47:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:19.551 05:47:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=470439 00:31:19.551 05:47:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:31:19.551 05:47:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:19.551 05:47:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 470439 00:31:19.551 05:47:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 470439 ']' 00:31:19.551 05:47:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:19.551 05:47:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:19.551 05:47:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:19.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:19.551 05:47:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:19.551 05:47:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:19.551 [2024-12-13 05:47:18.660109] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
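With the namespace in place, fio.sh launches the target application inside it: nvmf_tgt on four cores (-m 0xF) with the full tracepoint group mask (-e 0xFFFF), then waits for the RPC socket before configuring anything. A hedged sketch of the equivalent, where the polling loop only approximates the harness's waitforlisten helper and assumes the default /var/tmp/spdk.sock:

  ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  until scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done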
00:31:19.551 [2024-12-13 05:47:18.660151] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:19.551 [2024-12-13 05:47:18.737257] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:19.551 [2024-12-13 05:47:18.759933] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:19.551 [2024-12-13 05:47:18.759968] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:19.551 [2024-12-13 05:47:18.759975] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:19.551 [2024-12-13 05:47:18.759980] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:19.551 [2024-12-13 05:47:18.759986] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:19.551 [2024-12-13 05:47:18.761404] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:31:19.551 [2024-12-13 05:47:18.761514] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:31:19.551 [2024-12-13 05:47:18.761549] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:31:19.551 [2024-12-13 05:47:18.761550] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:31:19.551 05:47:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:19.551 05:47:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:31:19.551 05:47:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:19.551 [2024-12-13 05:47:19.017683] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:19.551 05:47:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:31:19.551 05:47:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:19.551 05:47:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:19.551 05:47:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:31:19.551 Malloc1 00:31:19.551 05:47:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:19.551 05:47:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:31:19.893 05:47:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:19.893 [2024-12-13 05:47:19.906812] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:20.152 05:47:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:20.152 05:47:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # 
PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:31:20.153 05:47:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:20.153 05:47:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:20.153 05:47:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:31:20.153 05:47:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:20.153 05:47:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:31:20.153 05:47:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:20.153 05:47:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:31:20.153 05:47:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:31:20.153 05:47:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:20.153 05:47:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:20.153 05:47:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:31:20.153 05:47:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:20.153 05:47:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:20.153 05:47:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:20.153 05:47:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:20.153 05:47:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:20.425 05:47:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:31:20.425 05:47:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:20.425 05:47:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:20.425 05:47:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:20.425 05:47:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:20.425 05:47:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:20.683 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:31:20.683 fio-3.35 00:31:20.683 Starting 1 thread 00:31:23.209 00:31:23.209 test: (groupid=0, jobs=1): 
err= 0: pid=470907: Fri Dec 13 05:47:22 2024 00:31:23.209 read: IOPS=11.8k, BW=46.2MiB/s (48.5MB/s)(92.7MiB/2005msec) 00:31:23.209 slat (nsec): min=1546, max=239082, avg=1746.15, stdev=2221.11 00:31:23.209 clat (usec): min=3106, max=10033, avg=5982.18, stdev=473.95 00:31:23.209 lat (usec): min=3143, max=10034, avg=5983.93, stdev=473.91 00:31:23.209 clat percentiles (usec): 00:31:23.209 | 1.00th=[ 4817], 5.00th=[ 5276], 10.00th=[ 5407], 20.00th=[ 5604], 00:31:23.209 | 30.00th=[ 5800], 40.00th=[ 5866], 50.00th=[ 5997], 60.00th=[ 6128], 00:31:23.209 | 70.00th=[ 6194], 80.00th=[ 6325], 90.00th=[ 6521], 95.00th=[ 6652], 00:31:23.209 | 99.00th=[ 6980], 99.50th=[ 7308], 99.90th=[ 8586], 99.95th=[ 9241], 00:31:23.209 | 99.99th=[10028] 00:31:23.209 bw ( KiB/s): min=46752, max=47936, per=99.96%, avg=47340.00, stdev=512.35, samples=4 00:31:23.209 iops : min=11688, max=11984, avg=11835.00, stdev=128.09, samples=4 00:31:23.209 write: IOPS=11.8k, BW=46.0MiB/s (48.3MB/s)(92.3MiB/2005msec); 0 zone resets 00:31:23.209 slat (nsec): min=1586, max=225536, avg=1802.86, stdev=1647.24 00:31:23.209 clat (usec): min=2467, max=9240, avg=4831.53, stdev=394.63 00:31:23.209 lat (usec): min=2482, max=9242, avg=4833.33, stdev=394.70 00:31:23.209 clat percentiles (usec): 00:31:23.209 | 1.00th=[ 3949], 5.00th=[ 4228], 10.00th=[ 4359], 20.00th=[ 4555], 00:31:23.209 | 30.00th=[ 4621], 40.00th=[ 4752], 50.00th=[ 4817], 60.00th=[ 4948], 00:31:23.209 | 70.00th=[ 5014], 80.00th=[ 5145], 90.00th=[ 5276], 95.00th=[ 5407], 00:31:23.209 | 99.00th=[ 5669], 99.50th=[ 5997], 99.90th=[ 7963], 99.95th=[ 8717], 00:31:23.209 | 99.99th=[ 9241] 00:31:23.209 bw ( KiB/s): min=46792, max=47488, per=100.00%, avg=47138.00, stdev=321.42, samples=4 00:31:23.209 iops : min=11698, max=11872, avg=11784.50, stdev=80.36, samples=4 00:31:23.209 lat (msec) : 4=0.76%, 10=99.24%, 20=0.01% 00:31:23.209 cpu : usr=73.05%, sys=25.85%, ctx=96, majf=0, minf=3 00:31:23.209 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:31:23.209 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:23.209 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:23.209 issued rwts: total=23738,23628,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:23.209 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:23.209 00:31:23.209 Run status group 0 (all jobs): 00:31:23.209 READ: bw=46.2MiB/s (48.5MB/s), 46.2MiB/s-46.2MiB/s (48.5MB/s-48.5MB/s), io=92.7MiB (97.2MB), run=2005-2005msec 00:31:23.209 WRITE: bw=46.0MiB/s (48.3MB/s), 46.0MiB/s-46.0MiB/s (48.3MB/s-48.3MB/s), io=92.3MiB (96.8MB), run=2005-2005msec 00:31:23.209 05:47:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:31:23.209 05:47:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:31:23.209 05:47:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:31:23.209 05:47:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:23.209 05:47:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # 
local sanitizers 00:31:23.209 05:47:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:23.209 05:47:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:31:23.209 05:47:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:31:23.209 05:47:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:23.209 05:47:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:23.209 05:47:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:31:23.209 05:47:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:23.209 05:47:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:23.209 05:47:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:23.209 05:47:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:23.209 05:47:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:23.209 05:47:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:23.209 05:47:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:31:23.209 05:47:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:23.209 05:47:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:23.209 05:47:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:23.209 05:47:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:31:23.467 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:31:23.467 fio-3.35 00:31:23.467 Starting 1 thread 00:31:24.401 [2024-12-13 05:47:24.096195] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xad5da0 is same with the state(6) to be set 00:31:24.401 [2024-12-13 05:47:24.096248] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xad5da0 is same with the state(6) to be set 00:31:25.772 00:31:25.772 test: (groupid=0, jobs=1): err= 0: pid=471427: Fri Dec 13 05:47:25 2024 00:31:25.772 read: IOPS=10.8k, BW=169MiB/s (177MB/s)(339MiB/2008msec) 00:31:25.772 slat (nsec): min=2456, max=92080, avg=2823.84, stdev=1196.01 00:31:25.772 clat (usec): min=1970, max=49288, avg=6947.84, stdev=3339.94 00:31:25.772 lat (usec): min=1972, max=49290, avg=6950.66, stdev=3339.96 00:31:25.772 clat percentiles (usec): 00:31:25.772 | 1.00th=[ 3621], 5.00th=[ 4293], 10.00th=[ 4752], 20.00th=[ 5407], 00:31:25.772 | 30.00th=[ 5866], 40.00th=[ 6325], 50.00th=[ 6783], 60.00th=[ 7177], 00:31:25.772 | 70.00th=[ 7504], 80.00th=[ 7898], 90.00th=[ 8717], 95.00th=[ 9372], 00:31:25.772 | 99.00th=[10945], 99.50th=[43254], 
99.90th=[47973], 99.95th=[48497], 00:31:25.772 | 99.99th=[49021] 00:31:25.772 bw ( KiB/s): min=74432, max=97760, per=50.55%, avg=87360.00, stdev=9850.70, samples=4 00:31:25.772 iops : min= 4652, max= 6110, avg=5460.00, stdev=615.67, samples=4 00:31:25.772 write: IOPS=6502, BW=102MiB/s (107MB/s)(178MiB/1754msec); 0 zone resets 00:31:25.772 slat (usec): min=28, max=254, avg=31.72, stdev= 4.95 00:31:25.772 clat (usec): min=3078, max=14413, avg=8567.52, stdev=1490.75 00:31:25.772 lat (usec): min=3108, max=14443, avg=8599.24, stdev=1491.15 00:31:25.772 clat percentiles (usec): 00:31:25.772 | 1.00th=[ 5604], 5.00th=[ 6390], 10.00th=[ 6849], 20.00th=[ 7308], 00:31:25.772 | 30.00th=[ 7701], 40.00th=[ 8029], 50.00th=[ 8455], 60.00th=[ 8717], 00:31:25.772 | 70.00th=[ 9241], 80.00th=[ 9765], 90.00th=[10683], 95.00th=[11469], 00:31:25.772 | 99.00th=[12387], 99.50th=[12780], 99.90th=[13698], 99.95th=[14091], 00:31:25.772 | 99.99th=[14353] 00:31:25.773 bw ( KiB/s): min=76512, max=101920, per=87.24%, avg=90760.00, stdev=10934.68, samples=4 00:31:25.773 iops : min= 4782, max= 6370, avg=5672.50, stdev=683.42, samples=4 00:31:25.773 lat (msec) : 2=0.01%, 4=1.75%, 10=90.76%, 20=7.10%, 50=0.38% 00:31:25.773 cpu : usr=85.75%, sys=13.50%, ctx=29, majf=0, minf=3 00:31:25.773 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:31:25.773 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:25.773 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:25.773 issued rwts: total=21688,11405,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:25.773 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:25.773 00:31:25.773 Run status group 0 (all jobs): 00:31:25.773 READ: bw=169MiB/s (177MB/s), 169MiB/s-169MiB/s (177MB/s-177MB/s), io=339MiB (355MB), run=2008-2008msec 00:31:25.773 WRITE: bw=102MiB/s (107MB/s), 102MiB/s-102MiB/s (107MB/s-107MB/s), io=178MiB (187MB), run=1754-1754msec 00:31:25.773 05:47:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:26.031 05:47:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:31:26.031 05:47:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:31:26.031 05:47:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:31:26.031 05:47:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # bdfs=() 00:31:26.031 05:47:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # local bdfs 00:31:26.031 05:47:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:31:26.031 05:47:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:31:26.031 05:47:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:31:26.031 05:47:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:31:26.031 05:47:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:31:26.031 05:47:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 -i 10.0.0.2 
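What follows attaches the host's local NVMe device (PCIe 0000:5e:00.0) as bdev Nvme0n1 and builds two logical-volume stores on top of it. The free-size figures printed further down (952320 for lvs_0, 951388 for lvs_n_0) are plain cluster arithmetic over the bdev_lvol_get_lvstores output; a minimal sketch of the same calculation (not the actual get_lvs_free_mb helper):

    # free MiB = free_clusters * cluster_size_bytes / 1048576
    echo $(( 930    * 1073741824 / 1048576 ))   # lvs_0:   1 GiB clusters -> 952320
    echo $(( 237847 * 4194304    / 1048576 ))   # lvs_n_0: 4 MiB clusters -> 951388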
00:31:29.312 Nvme0n1 00:31:29.312 05:47:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:31:31.839 05:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=301c6915-25eb-45b5-af66-d6bb74a752d5 00:31:31.839 05:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb 301c6915-25eb-45b5-af66-d6bb74a752d5 00:31:31.839 05:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local lvs_uuid=301c6915-25eb-45b5-af66-d6bb74a752d5 00:31:31.839 05:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local lvs_info 00:31:31.839 05:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # local fc 00:31:31.839 05:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # local cs 00:31:31.839 05:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:32.097 05:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:31:32.097 { 00:31:32.097 "uuid": "301c6915-25eb-45b5-af66-d6bb74a752d5", 00:31:32.097 "name": "lvs_0", 00:31:32.097 "base_bdev": "Nvme0n1", 00:31:32.097 "total_data_clusters": 930, 00:31:32.097 "free_clusters": 930, 00:31:32.097 "block_size": 512, 00:31:32.097 "cluster_size": 1073741824 00:31:32.097 } 00:31:32.097 ]' 00:31:32.097 05:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="301c6915-25eb-45b5-af66-d6bb74a752d5") .free_clusters' 00:31:32.097 05:47:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # fc=930 00:31:32.097 05:47:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="301c6915-25eb-45b5-af66-d6bb74a752d5") .cluster_size' 00:31:32.097 05:47:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # cs=1073741824 00:31:32.097 05:47:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1377 -- # free_mb=952320 00:31:32.097 05:47:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1378 -- # echo 952320 00:31:32.097 952320 00:31:32.097 05:47:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 952320 00:31:32.662 a3bb6bed-991f-4eae-9581-e9e47fecd0ba 00:31:32.662 05:47:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:31:32.662 05:47:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:31:32.921 05:47:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:31:33.178 05:47:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:33.178 05:47:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # 
fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:33.179 05:47:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:31:33.179 05:47:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:33.179 05:47:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:31:33.179 05:47:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:33.179 05:47:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:31:33.179 05:47:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:31:33.179 05:47:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:33.179 05:47:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:33.179 05:47:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:31:33.179 05:47:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:33.179 05:47:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:33.179 05:47:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:33.179 05:47:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:33.179 05:47:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:33.179 05:47:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:31:33.179 05:47:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:33.179 05:47:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:33.179 05:47:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:33.179 05:47:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:33.179 05:47:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:33.437 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:31:33.437 fio-3.35 00:31:33.437 Starting 1 thread 00:31:35.964 00:31:35.964 test: (groupid=0, jobs=1): err= 0: pid=473191: Fri Dec 13 05:47:35 2024 00:31:35.964 read: IOPS=8083, BW=31.6MiB/s (33.1MB/s)(63.3MiB/2006msec) 00:31:35.964 slat (nsec): min=1532, max=85828, avg=1656.23, stdev=1017.57 00:31:35.964 clat (usec): min=888, max=169832, avg=8678.73, stdev=10250.45 00:31:35.964 lat (usec): min=890, max=169854, avg=8680.39, stdev=10250.61 00:31:35.964 clat percentiles (msec): 00:31:35.964 | 1.00th=[ 7], 5.00th=[ 7], 
10.00th=[ 8], 20.00th=[ 8], 00:31:35.964 | 30.00th=[ 8], 40.00th=[ 8], 50.00th=[ 9], 60.00th=[ 9], 00:31:35.964 | 70.00th=[ 9], 80.00th=[ 9], 90.00th=[ 9], 95.00th=[ 10], 00:31:35.964 | 99.00th=[ 10], 99.50th=[ 13], 99.90th=[ 169], 99.95th=[ 169], 00:31:35.964 | 99.99th=[ 169] 00:31:35.964 bw ( KiB/s): min=22880, max=35576, per=99.94%, avg=32312.00, stdev=6289.86, samples=4 00:31:35.964 iops : min= 5720, max= 8894, avg=8078.00, stdev=1572.47, samples=4 00:31:35.964 write: IOPS=8076, BW=31.5MiB/s (33.1MB/s)(63.3MiB/2006msec); 0 zone resets 00:31:35.964 slat (nsec): min=1562, max=83447, avg=1707.78, stdev=751.06 00:31:35.964 clat (usec): min=174, max=168413, avg=7053.50, stdev=9575.33 00:31:35.964 lat (usec): min=176, max=168418, avg=7055.21, stdev=9575.50 00:31:35.964 clat percentiles (msec): 00:31:35.964 | 1.00th=[ 6], 5.00th=[ 6], 10.00th=[ 6], 20.00th=[ 7], 00:31:35.964 | 30.00th=[ 7], 40.00th=[ 7], 50.00th=[ 7], 60.00th=[ 7], 00:31:35.964 | 70.00th=[ 7], 80.00th=[ 7], 90.00th=[ 8], 95.00th=[ 8], 00:31:35.964 | 99.00th=[ 8], 99.50th=[ 11], 99.90th=[ 169], 99.95th=[ 169], 00:31:35.964 | 99.99th=[ 169] 00:31:35.964 bw ( KiB/s): min=23784, max=35200, per=99.88%, avg=32266.00, stdev=5655.23, samples=4 00:31:35.964 iops : min= 5946, max= 8800, avg=8066.50, stdev=1413.81, samples=4 00:31:35.964 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:31:35.964 lat (msec) : 2=0.05%, 4=0.20%, 10=99.11%, 20=0.22%, 250=0.39% 00:31:35.964 cpu : usr=73.22%, sys=25.89%, ctx=111, majf=0, minf=3 00:31:35.964 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:31:35.964 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:35.964 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:35.964 issued rwts: total=16215,16201,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:35.964 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:35.964 00:31:35.964 Run status group 0 (all jobs): 00:31:35.964 READ: bw=31.6MiB/s (33.1MB/s), 31.6MiB/s-31.6MiB/s (33.1MB/s-33.1MB/s), io=63.3MiB (66.4MB), run=2006-2006msec 00:31:35.964 WRITE: bw=31.5MiB/s (33.1MB/s), 31.5MiB/s-31.5MiB/s (33.1MB/s-33.1MB/s), io=63.3MiB (66.4MB), run=2006-2006msec 00:31:35.964 05:47:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:31:35.964 05:47:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:31:37.337 05:47:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=6c14bbf6-a78c-47be-ae60-c596d99b4c0b 00:31:37.337 05:47:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb 6c14bbf6-a78c-47be-ae60-c596d99b4c0b 00:31:37.337 05:47:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local lvs_uuid=6c14bbf6-a78c-47be-ae60-c596d99b4c0b 00:31:37.337 05:47:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local lvs_info 00:31:37.337 05:47:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # local fc 00:31:37.337 05:47:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # local cs 00:31:37.337 05:47:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:37.337 05:47:37 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:31:37.337 { 00:31:37.337 "uuid": "301c6915-25eb-45b5-af66-d6bb74a752d5", 00:31:37.337 "name": "lvs_0", 00:31:37.337 "base_bdev": "Nvme0n1", 00:31:37.337 "total_data_clusters": 930, 00:31:37.337 "free_clusters": 0, 00:31:37.337 "block_size": 512, 00:31:37.337 "cluster_size": 1073741824 00:31:37.337 }, 00:31:37.337 { 00:31:37.337 "uuid": "6c14bbf6-a78c-47be-ae60-c596d99b4c0b", 00:31:37.337 "name": "lvs_n_0", 00:31:37.337 "base_bdev": "a3bb6bed-991f-4eae-9581-e9e47fecd0ba", 00:31:37.337 "total_data_clusters": 237847, 00:31:37.337 "free_clusters": 237847, 00:31:37.337 "block_size": 512, 00:31:37.337 "cluster_size": 4194304 00:31:37.337 } 00:31:37.337 ]' 00:31:37.337 05:47:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="6c14bbf6-a78c-47be-ae60-c596d99b4c0b") .free_clusters' 00:31:37.337 05:47:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # fc=237847 00:31:37.337 05:47:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="6c14bbf6-a78c-47be-ae60-c596d99b4c0b") .cluster_size' 00:31:37.337 05:47:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # cs=4194304 00:31:37.337 05:47:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1377 -- # free_mb=951388 00:31:37.337 05:47:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1378 -- # echo 951388 00:31:37.337 951388 00:31:37.337 05:47:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 00:31:37.902 f55ebc2e-30aa-4f45-899f-05161554f355 00:31:37.902 05:47:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:31:38.160 05:47:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:31:38.418 05:47:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:31:38.418 05:47:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:38.418 05:47:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:38.418 05:47:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:31:38.418 05:47:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:38.418 05:47:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:31:38.418 05:47:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:38.418 05:47:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:31:38.418 05:47:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:31:38.418 05:47:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:38.418 05:47:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:31:38.418 05:47:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:38.418 05:47:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:38.685 05:47:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:38.685 05:47:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:38.685 05:47:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:38.685 05:47:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:38.685 05:47:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:31:38.685 05:47:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:38.685 05:47:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:38.685 05:47:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:38.685 05:47:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:38.685 05:47:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:38.942 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:31:38.942 fio-3.35 00:31:38.942 Starting 1 thread 00:31:41.463 00:31:41.463 test: (groupid=0, jobs=1): err= 0: pid=474089: Fri Dec 13 05:47:41 2024 00:31:41.463 read: IOPS=7784, BW=30.4MiB/s (31.9MB/s)(61.0MiB/2007msec) 00:31:41.463 slat (nsec): min=1530, max=99955, avg=1689.29, stdev=1177.00 00:31:41.463 clat (usec): min=3613, max=13496, avg=9001.71, stdev=795.61 00:31:41.463 lat (usec): min=3633, max=13498, avg=9003.40, stdev=795.59 00:31:41.463 clat percentiles (usec): 00:31:41.463 | 1.00th=[ 7111], 5.00th=[ 7701], 10.00th=[ 8029], 20.00th=[ 8356], 00:31:41.463 | 30.00th=[ 8586], 40.00th=[ 8848], 50.00th=[ 8979], 60.00th=[ 9241], 00:31:41.463 | 70.00th=[ 9372], 80.00th=[ 9634], 90.00th=[10028], 95.00th=[10290], 00:31:41.463 | 99.00th=[10814], 99.50th=[11207], 99.90th=[12518], 99.95th=[13304], 00:31:41.463 | 99.99th=[13435] 00:31:41.463 bw ( KiB/s): min=29520, max=32032, per=99.91%, avg=31110.00, stdev=1098.21, samples=4 00:31:41.463 iops : min= 7380, max= 8008, avg=7777.50, stdev=274.55, samples=4 00:31:41.463 write: IOPS=7768, BW=30.3MiB/s (31.8MB/s)(60.9MiB/2007msec); 0 zone resets 00:31:41.463 slat (nsec): min=1557, max=85971, avg=1762.65, stdev=831.33 00:31:41.463 clat (usec): min=2559, max=13303, 
avg=7328.41, stdev=660.79 00:31:41.463 lat (usec): min=2565, max=13305, avg=7330.18, stdev=660.79 00:31:41.463 clat percentiles (usec): 00:31:41.463 | 1.00th=[ 5866], 5.00th=[ 6325], 10.00th=[ 6521], 20.00th=[ 6849], 00:31:41.463 | 30.00th=[ 6980], 40.00th=[ 7177], 50.00th=[ 7308], 60.00th=[ 7439], 00:31:41.464 | 70.00th=[ 7635], 80.00th=[ 7832], 90.00th=[ 8094], 95.00th=[ 8356], 00:31:41.464 | 99.00th=[ 8848], 99.50th=[ 9241], 99.90th=[11469], 99.95th=[13042], 00:31:41.464 | 99.99th=[13304] 00:31:41.464 bw ( KiB/s): min=30752, max=31408, per=99.97%, avg=31066.00, stdev=271.64, samples=4 00:31:41.464 iops : min= 7688, max= 7852, avg=7766.50, stdev=67.91, samples=4 00:31:41.464 lat (msec) : 4=0.05%, 10=95.09%, 20=4.87% 00:31:41.464 cpu : usr=71.39%, sys=27.77%, ctx=103, majf=0, minf=3 00:31:41.464 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:31:41.464 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:41.464 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:41.464 issued rwts: total=15623,15592,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:41.464 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:41.464 00:31:41.464 Run status group 0 (all jobs): 00:31:41.464 READ: bw=30.4MiB/s (31.9MB/s), 30.4MiB/s-30.4MiB/s (31.9MB/s-31.9MB/s), io=61.0MiB (64.0MB), run=2007-2007msec 00:31:41.464 WRITE: bw=30.3MiB/s (31.8MB/s), 30.3MiB/s-30.3MiB/s (31.8MB/s-31.8MB/s), io=60.9MiB (63.9MB), run=2007-2007msec 00:31:41.464 05:47:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:31:41.464 05:47:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:31:41.464 05:47:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:31:45.633 05:47:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:31:45.633 05:47:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:31:48.151 05:47:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:31:48.407 05:47:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:31:50.299 05:47:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:31:50.299 05:47:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:31:50.299 05:47:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:31:50.299 05:47:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:50.299 05:47:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:31:50.299 05:47:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:50.299 05:47:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:31:50.299 05:47:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:50.299 05:47:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r 
nvme-tcp 00:31:50.299 rmmod nvme_tcp 00:31:50.299 rmmod nvme_fabrics 00:31:50.299 rmmod nvme_keyring 00:31:50.299 05:47:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:50.299 05:47:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:31:50.299 05:47:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:31:50.299 05:47:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 470439 ']' 00:31:50.299 05:47:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 470439 00:31:50.299 05:47:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 470439 ']' 00:31:50.299 05:47:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 470439 00:31:50.299 05:47:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:31:50.299 05:47:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:50.299 05:47:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 470439 00:31:50.299 05:47:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:50.299 05:47:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:50.299 05:47:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 470439' 00:31:50.299 killing process with pid 470439 00:31:50.299 05:47:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 470439 00:31:50.299 05:47:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 470439 00:31:50.557 05:47:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:50.557 05:47:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:50.557 05:47:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:50.557 05:47:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:31:50.557 05:47:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:31:50.557 05:47:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:50.557 05:47:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:31:50.557 05:47:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:50.557 05:47:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:50.557 05:47:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:50.557 05:47:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:50.557 05:47:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:52.463 05:47:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:52.463 00:31:52.463 real 0m39.989s 00:31:52.463 user 2m39.981s 00:31:52.463 sys 0m8.805s 00:31:52.463 05:47:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:52.463 05:47:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.463 ************************************ 00:31:52.463 END TEST nvmf_fio_host 00:31:52.463 
************************************ 00:31:52.722 05:47:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:31:52.722 05:47:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:31:52.722 05:47:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:52.722 05:47:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.722 ************************************ 00:31:52.722 START TEST nvmf_failover 00:31:52.722 ************************************ 00:31:52.722 05:47:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:31:52.722 * Looking for test storage... 00:31:52.722 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:52.722 05:47:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:52.722 05:47:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lcov --version 00:31:52.722 05:47:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:52.722 05:47:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:52.722 05:47:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:52.722 05:47:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:52.722 05:47:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:52.722 05:47:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:31:52.722 05:47:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:31:52.722 05:47:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:31:52.722 05:47:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:31:52.722 05:47:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:31:52.722 05:47:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:31:52.722 05:47:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:31:52.722 05:47:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:52.722 05:47:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:31:52.722 05:47:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:31:52.722 05:47:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:52.722 05:47:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:52.722 05:47:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:31:52.722 05:47:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:31:52.722 05:47:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:52.722 05:47:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:31:52.722 05:47:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:31:52.722 05:47:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:31:52.722 05:47:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:31:52.722 05:47:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:52.722 05:47:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:31:52.722 05:47:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:31:52.722 05:47:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:52.722 05:47:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:52.722 05:47:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:31:52.722 05:47:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:52.722 05:47:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:52.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:52.722 --rc genhtml_branch_coverage=1 00:31:52.722 --rc genhtml_function_coverage=1 00:31:52.722 --rc genhtml_legend=1 00:31:52.722 --rc geninfo_all_blocks=1 00:31:52.722 --rc geninfo_unexecuted_blocks=1 00:31:52.722 00:31:52.722 ' 00:31:52.722 05:47:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:52.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:52.722 --rc genhtml_branch_coverage=1 00:31:52.722 --rc genhtml_function_coverage=1 00:31:52.722 --rc genhtml_legend=1 00:31:52.722 --rc geninfo_all_blocks=1 00:31:52.722 --rc geninfo_unexecuted_blocks=1 00:31:52.722 00:31:52.722 ' 00:31:52.722 05:47:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:52.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:52.722 --rc genhtml_branch_coverage=1 00:31:52.722 --rc genhtml_function_coverage=1 00:31:52.722 --rc genhtml_legend=1 00:31:52.722 --rc geninfo_all_blocks=1 00:31:52.722 --rc geninfo_unexecuted_blocks=1 00:31:52.722 00:31:52.722 ' 00:31:52.722 05:47:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:52.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:52.722 --rc genhtml_branch_coverage=1 00:31:52.722 --rc genhtml_function_coverage=1 00:31:52.722 --rc genhtml_legend=1 00:31:52.722 --rc geninfo_all_blocks=1 00:31:52.722 --rc geninfo_unexecuted_blocks=1 00:31:52.722 00:31:52.722 ' 00:31:52.722 05:47:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:52.722 05:47:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:31:52.722 05:47:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:52.722 05:47:52 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:52.722 05:47:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:52.722 05:47:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:52.722 05:47:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:52.722 05:47:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:52.722 05:47:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:52.722 05:47:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:52.722 05:47:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:52.722 05:47:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:52.722 05:47:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:31:52.722 05:47:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:31:52.723 05:47:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:52.723 05:47:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:52.723 05:47:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:52.723 05:47:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:52.723 05:47:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:52.723 05:47:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:31:52.723 05:47:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:52.723 05:47:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:52.723 05:47:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:52.723 05:47:52 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:52.723 05:47:52 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:52.723 05:47:52 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:52.982 05:47:52 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:31:52.982 05:47:52 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:52.982 05:47:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:31:52.982 05:47:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:52.982 05:47:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:52.982 05:47:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:52.982 05:47:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:52.982 05:47:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:52.982 05:47:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:52.982 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:52.982 05:47:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:52.982 05:47:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:52.982 05:47:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:52.982 05:47:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:52.982 05:47:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:52.982 05:47:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
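The scripts/common.sh trace above shows how the harness decides whether the installed lcov predates 2.x: cmp_versions splits both version strings on '.', '-' and ':' and compares the fields numerically, left to right. A condensed sketch of that logic, assuming purely numeric fields (the real helper also dispatches on the comparison operator and handles equal-length prefixes):

    lt() {   # "is $1 < $2" under dotted-version ordering
      local IFS='.-:' i v1 v2
      read -ra v1 <<< "$1"; read -ra v2 <<< "$2"
      for (( i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++ )); do
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
      done
      return 1   # equal is not less-than
    }
    lt 1.15 2 && echo "lcov 1.15 predates 2.x"   # same outcome as the lt 1.15 2 call traced above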
00:31:52.982 05:47:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:52.982 05:47:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:31:52.982 05:47:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:52.982 05:47:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:52.982 05:47:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:52.982 05:47:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:52.982 05:47:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:52.982 05:47:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:52.982 05:47:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:52.982 05:47:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:52.982 05:47:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:52.982 05:47:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:52.982 05:47:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:31:52.982 05:47:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:59.550 05:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:59.550 05:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:31:59.550 05:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:59.550 05:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:59.550 05:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:59.550 05:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:59.550 05:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:59.550 05:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:31:59.550 05:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:59.550 05:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:31:59.550 05:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:31:59.550 05:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:31:59.550 05:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:31:59.550 05:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:31:59.550 05:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:31:59.550 05:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:59.550 05:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:59.550 05:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:59.550 05:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:59.550 05:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:59.551 05:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:59.551 05:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:59.551 05:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:59.551 05:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:59.551 05:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:59.551 05:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:59.551 05:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:59.551 05:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:59.551 05:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:59.551 05:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:59.551 05:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:59.551 05:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:59.551 05:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:59.551 05:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:59.551 05:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:31:59.551 Found 0000:af:00.0 (0x8086 - 0x159b) 00:31:59.551 05:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:59.551 05:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:59.551 05:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:59.551 05:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:59.551 05:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:59.551 05:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:59.551 05:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:31:59.551 Found 0000:af:00.1 (0x8086 - 0x159b) 00:31:59.551 05:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:59.551 05:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:59.551 05:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:59.551 05:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:59.551 05:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:59.551 05:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:59.551 05:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:59.551 05:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:59.551 05:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci 
in "${pci_devs[@]}" 00:31:59.551 05:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:59.551 05:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:59.551 05:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:59.551 05:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:59.551 05:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:59.551 05:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:59.551 05:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:31:59.551 Found net devices under 0000:af:00.0: cvl_0_0 00:31:59.551 05:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:59.551 05:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:59.551 05:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:59.551 05:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:59.551 05:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:59.551 05:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:59.551 05:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:59.551 05:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:59.551 05:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:31:59.551 Found net devices under 0000:af:00.1: cvl_0_1 00:31:59.551 05:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:59.551 05:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:59.551 05:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:31:59.551 05:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:59.551 05:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:59.551 05:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:59.551 05:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:59.551 05:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:59.551 05:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:59.551 05:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:59.551 05:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:59.551 05:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:59.551 05:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:59.551 05:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:59.551 05:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:31:59.551 05:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:59.551 05:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:59.551 05:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:59.551 05:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:59.551 05:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:59.551 05:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:59.551 05:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:59.551 05:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:59.551 05:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:59.551 05:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:59.551 05:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:59.551 05:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:59.551 05:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:59.551 05:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:59.551 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:59.551 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.368 ms 00:31:59.551 00:31:59.551 --- 10.0.0.2 ping statistics --- 00:31:59.551 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:59.551 rtt min/avg/max/mdev = 0.368/0.368/0.368/0.000 ms 00:31:59.551 05:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:59.551 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:59.551 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.171 ms 00:31:59.551 00:31:59.551 --- 10.0.0.1 ping statistics --- 00:31:59.551 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:59.551 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:31:59.551 05:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:59.551 05:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:31:59.551 05:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:59.551 05:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:59.551 05:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:59.551 05:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:59.551 05:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:59.551 05:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:59.551 05:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:59.551 05:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:31:59.551 05:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:59.551 05:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:59.551 05:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:59.551 05:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=479325 00:31:59.551 05:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:31:59.551 05:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 479325 00:31:59.551 05:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 479325 ']' 00:31:59.551 05:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:59.551 05:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:59.551 05:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:59.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:59.551 05:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:59.551 05:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:59.551 [2024-12-13 05:47:58.667552] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:31:59.551 [2024-12-13 05:47:58.667595] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:59.551 [2024-12-13 05:47:58.730923] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:59.551 [2024-12-13 05:47:58.753692] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
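Stripped of the harness plumbing, the bring-up above does five things: move one port into a private network namespace as the target side, leave its sibling in the root namespace as the initiator side, open the NVMe/TCP port in the firewall, ping both directions, then launch nvmf_tgt inside the namespace and wait for its RPC socket. A sketch of the same sequence, using the interface names and addresses from this trace; the readiness loop at the end is a simplified stand-in for the waitforlisten helper:

  #!/usr/bin/env bash
  set -e
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # Target side lives in its own namespace; the initiator stays in the root ns.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator IP
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Admit NVMe/TCP traffic on the default port (the ipts wrapper in the
  # trace adds an SPDK_NVMF comment tag to the same rule).
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # Both directions must answer before the target is started.
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  # Start the target on cores 1-3 (-m 0xE) inside the namespace, then poll
  # its default RPC socket until it answers (simplified waitforlisten).
  ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &
  until "$SPDK/scripts/rpc.py" -t 1 rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
  done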
00:31:59.551 [2024-12-13 05:47:58.753725] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:59.552 [2024-12-13 05:47:58.753732] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:59.552 [2024-12-13 05:47:58.753738] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:59.552 [2024-12-13 05:47:58.753743] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:59.552 [2024-12-13 05:47:58.754897] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:31:59.552 [2024-12-13 05:47:58.755001] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:31:59.552 [2024-12-13 05:47:58.755003] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:31:59.552 05:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:59.552 05:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:31:59.552 05:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:59.552 05:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:59.552 05:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:59.552 05:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:59.552 05:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:59.552 [2024-12-13 05:47:59.070646] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:59.552 05:47:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:31:59.552 Malloc0 00:31:59.552 05:47:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:59.552 05:47:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:59.808 05:47:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:00.065 [2024-12-13 05:47:59.886418] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:00.065 05:47:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:00.322 [2024-12-13 05:48:00.091041] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:00.322 05:48:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:32:00.322 [2024-12-13 05:48:00.299719] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4422 *** 00:32:00.322 05:48:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=479628 00:32:00.322 05:48:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:32:00.322 05:48:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:00.322 05:48:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 479628 /var/tmp/bdevperf.sock 00:32:00.322 05:48:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 479628 ']' 00:32:00.322 05:48:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:00.322 05:48:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:00.322 05:48:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:00.322 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:00.322 05:48:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:00.322 05:48:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:00.579 05:48:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:00.579 05:48:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:32:00.579 05:48:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:32:01.142 NVMe0n1 00:32:01.142 05:48:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:32:01.399 00:32:01.399 05:48:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:01.399 05:48:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=479867 00:32:01.399 05:48:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:32:02.328 05:48:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:02.585 [2024-12-13 05:48:02.371307] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc86aa0 is same with the state(6) to be set 00:32:02.585 [2024-12-13 05:48:02.371351] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc86aa0 is same with the state(6) to be set 00:32:02.585 [2024-12-13 05:48:02.371359] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc86aa0 is same with the state(6) to be set 00:32:02.585 [2024-12-13 
05:48:02.371365] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc86aa0 is same with the state(6) to be set
00:32:02.585 [... the same recv-state message for tqpair=0xc86aa0 repeats 10 more times; duplicates omitted ...]
00:32:02.585 05:48:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3
00:32:05.855 05:48:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:32:05.855 00:32:05.856 05:48:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:32:06.112 [2024-12-13 05:48:06.065845] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc87fe0 is same with the state(6) to be set
00:32:06.112 [... the same recv-state message for tqpair=0xc87fe0 repeats 25 more times; duplicates omitted ...]
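Collected in one place, the target and initiator construction traced above is a short RPC script: one TCP transport, one malloc-backed namespace, three listeners, and two initial paths registered under bdevperf's failover policy (bdevperf itself was started earlier with -z -r /var/tmp/bdevperf.sock). A condensed replay, every name and port taken from this run:

  #!/usr/bin/env bash
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  RPC="$SPDK/scripts/rpc.py"
  NQN=nqn.2016-06.io.spdk:cnode1
  # Target side (default RPC socket /var/tmp/spdk.sock).
  "$RPC" nvmf_create_transport -t tcp -o -u 8192
  "$RPC" bdev_malloc_create 64 512 -b Malloc0
  "$RPC" nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001
  "$RPC" nvmf_subsystem_add_ns "$NQN" Malloc0
  for port in 4420 4421 4422; do
    "$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s "$port"
  done
  # Initiator side: two paths to the same controller; -x failover makes the
  # second attach register 4421 as an alternate path, not a new controller.
  "$RPC" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n "$NQN" -x failover
  "$RPC" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n "$NQN" -x failover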
00:32:06.112 05:48:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:32:09.385 05:48:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:32:09.385 [2024-12-13 05:48:09.284337] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:09.385
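The burst of tcp.c:1790 recv-state errors after each nvmf_subsystem_remove_listener appears to be expected noise here: removing the listener kills the active connection, the target tears down its queue pairs, and the initiator's failover policy resumes I/O on the next registered path. The full choreography the script drives, reconstructed from the @43 through @57 steps in this trace:

  #!/usr/bin/env bash
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  RPC="$SPDK/scripts/rpc.py"
  NQN=nqn.2016-06.io.spdk:cnode1
  # While bdevperf runs its 15 s verify job, yank the active path and let
  # the failover policy move the I/O; each step matches the trace above.
  "$RPC" nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420   # 4420 -> 4421
  sleep 3
  "$RPC" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n "$NQN" -x failover              # register third path
  "$RPC" nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.2 -s 4421   # 4421 -> 4422
  sleep 3
  "$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420      # bring 4420 back
  sleep 1
  "$RPC" nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.2 -s 4422   # fail back to 4420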
05:48:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:32:10.315 05:48:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:32:10.572 [2024-12-13 05:48:10.520508] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc88ea0 is same with the state(6) to be set
00:32:10.572 [... the same recv-state message for tqpair=0xc88ea0 repeats several dozen more times through 05:48:10.520901; duplicates omitted ...]
00:32:10.573 05:48:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 479867
00:32:17.131 {
  "results": [
    {
      "job": "NVMe0n1",
      "core_mask": "0x1",
      "workload": "verify",
      "status": "finished",
      "verify_range": {
        "start": 0,
        "length": 16384
      },
      "queue_depth": 128,
      "io_size": 4096,
      "runtime": 15.007792,
      "iops": 11216.306835809026,
      "mibps": 43.81369857737901,
      "io_failed": 12509,
      "io_timeout": 0,
      "avg_latency_us": 10600.862001489864,
      "min_latency_us": 421.30285714285714,
      "max_latency_us": 28586.179047619047
    }
  ],
  "core_count": 1
}
00:32:17.131 05:48:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 479628
00:32:17.131 05:48:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 479628 ']'
00:32:17.131 05:48:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 479628
00:32:17.131 05:48:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname
00:32:17.131 05:48:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:32:17.131 05:48:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 479628
00:32:17.131 05:48:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:32:17.131 05:48:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:32:17.131 05:48:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 479628'
killing process with pid 479628
00:32:17.131 05:48:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 479628
00:32:17.131 05:48:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 479628
00:32:17.131 05:48:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:32:17.131 [2024-12-13 05:48:00.374492] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization...
00:32:17.131 [2024-12-13 05:48:00.374548] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid479628 ]
00:32:17.131 [2024-12-13 05:48:00.452107] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:17.131 [2024-12-13 05:48:00.474779] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:32:17.131 Running I/O for 15 seconds...
00:32:17.131 11489.00 IOPS, 44.88 MiB/s [2024-12-13T04:48:17.146Z]
[2024-12-13 05:48:02.371648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:101888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:17.131 [2024-12-13 05:48:02.371684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:17.131 [... matching print_command/print_completion pairs repeat for the remaining in-flight I/O (WRITEs at lba 101896 through at least 102144, READs at lba 101184 through at least 101480), every completion ABORTED - SQ DELETION (00/08); duplicates omitted, and the dump continues past the end of this excerpt ...]
00:32:17.133 [2024-12-13 05:48:02.372730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.133 [2024-12-13 05:48:02.372738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:101488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.133 [2024-12-13 05:48:02.372744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.133 [2024-12-13 05:48:02.372752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:101496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.133 [2024-12-13 05:48:02.372758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.133 [2024-12-13 05:48:02.372766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:101504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.133 [2024-12-13 05:48:02.372773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.133 [2024-12-13 05:48:02.372782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:101512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.133 [2024-12-13 05:48:02.372788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.133 [2024-12-13 05:48:02.372795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:101520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.133 [2024-12-13 05:48:02.372802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.133 [2024-12-13 05:48:02.372810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:101528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.133 [2024-12-13 05:48:02.372816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.133 [2024-12-13 05:48:02.372824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:101536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.133 [2024-12-13 05:48:02.372830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.133 [2024-12-13 05:48:02.372838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:101544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.133 [2024-12-13 05:48:02.372845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.133 [2024-12-13 05:48:02.372853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:101552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.133 [2024-12-13 05:48:02.372859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.133 [2024-12-13 05:48:02.372868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:101560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.133 [2024-12-13 
05:48:02.372874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.133 [2024-12-13 05:48:02.372882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:101568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.133 [2024-12-13 05:48:02.372889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.133 [2024-12-13 05:48:02.372896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:101576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.133 [2024-12-13 05:48:02.372903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.133 [2024-12-13 05:48:02.372912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:101584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.133 [2024-12-13 05:48:02.372918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.133 [2024-12-13 05:48:02.372926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:101592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.133 [2024-12-13 05:48:02.372932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.133 [2024-12-13 05:48:02.372940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:101600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.133 [2024-12-13 05:48:02.372946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.133 [2024-12-13 05:48:02.372954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:101608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.133 [2024-12-13 05:48:02.372961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.133 [2024-12-13 05:48:02.372969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:101616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.133 [2024-12-13 05:48:02.372975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.133 [2024-12-13 05:48:02.372983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:101624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.133 [2024-12-13 05:48:02.372989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.133 [2024-12-13 05:48:02.372997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:101632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.133 [2024-12-13 05:48:02.373003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.133 [2024-12-13 05:48:02.373011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:101640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.133 [2024-12-13 05:48:02.373017] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.133 [2024-12-13 05:48:02.373025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:101648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.133 [2024-12-13 05:48:02.373031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.133 [2024-12-13 05:48:02.373039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:101656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.133 [2024-12-13 05:48:02.373045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.133 [2024-12-13 05:48:02.373052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:101664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.133 [2024-12-13 05:48:02.373059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.133 [2024-12-13 05:48:02.373067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:101672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.133 [2024-12-13 05:48:02.373073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.133 [2024-12-13 05:48:02.373081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:101680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.133 [2024-12-13 05:48:02.373087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.133 [2024-12-13 05:48:02.373095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:101688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.133 [2024-12-13 05:48:02.373101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.133 [2024-12-13 05:48:02.373109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:101696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.133 [2024-12-13 05:48:02.373116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.133 [2024-12-13 05:48:02.373123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:101704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.133 [2024-12-13 05:48:02.373130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.133 [2024-12-13 05:48:02.373140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:101712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.133 [2024-12-13 05:48:02.373146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.133 [2024-12-13 05:48:02.373154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:101720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.133 [2024-12-13 05:48:02.373160] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.133 [2024-12-13 05:48:02.373168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:101728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.133 [2024-12-13 05:48:02.373174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.134 [2024-12-13 05:48:02.373182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:101736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.134 [2024-12-13 05:48:02.373188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.134 [2024-12-13 05:48:02.373196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:101744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.134 [2024-12-13 05:48:02.373202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.134 [2024-12-13 05:48:02.373210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:101752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.134 [2024-12-13 05:48:02.373216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.134 [2024-12-13 05:48:02.373224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:101760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.134 [2024-12-13 05:48:02.373230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.134 [2024-12-13 05:48:02.373238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:101768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.134 [2024-12-13 05:48:02.373244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.134 [2024-12-13 05:48:02.373251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:101776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.134 [2024-12-13 05:48:02.373258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.134 [2024-12-13 05:48:02.373265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:101784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.134 [2024-12-13 05:48:02.373272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.134 [2024-12-13 05:48:02.373280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:101792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.134 [2024-12-13 05:48:02.373286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.134 [2024-12-13 05:48:02.373293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:101800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.134 [2024-12-13 05:48:02.373300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.134 [2024-12-13 05:48:02.373307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:101808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.134 [2024-12-13 05:48:02.373315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.134 [2024-12-13 05:48:02.373323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:101816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.134 [2024-12-13 05:48:02.373329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.134 [2024-12-13 05:48:02.373337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:102152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.134 [2024-12-13 05:48:02.373343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.134 [2024-12-13 05:48:02.373351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:102160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.134 [2024-12-13 05:48:02.373357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.134 [2024-12-13 05:48:02.373366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:102168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.134 [2024-12-13 05:48:02.373373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.134 [2024-12-13 05:48:02.373380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:102176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.134 [2024-12-13 05:48:02.373386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.134 [2024-12-13 05:48:02.373394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:102184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.134 [2024-12-13 05:48:02.373400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.134 [2024-12-13 05:48:02.373408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:102192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.134 [2024-12-13 05:48:02.373414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.134 [2024-12-13 05:48:02.373421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:102200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.134 [2024-12-13 05:48:02.373428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.134 [2024-12-13 05:48:02.373436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:101824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.134 [2024-12-13 05:48:02.373442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:32:17.134 [2024-12-13 05:48:02.373454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:101832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.134 [2024-12-13 05:48:02.373460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.134 [2024-12-13 05:48:02.373468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:101840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.134 [2024-12-13 05:48:02.373474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.134 [2024-12-13 05:48:02.373482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:101848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.134 [2024-12-13 05:48:02.373488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.134 [2024-12-13 05:48:02.373498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:101856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.134 [2024-12-13 05:48:02.373504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.134 [2024-12-13 05:48:02.373512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:101864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.134 [2024-12-13 05:48:02.373518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.134 [2024-12-13 05:48:02.373526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:101872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.134 [2024-12-13 05:48:02.373532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.134 [2024-12-13 05:48:02.373540] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10958b0 is same with the state(6) to be set 00:32:17.134 [2024-12-13 05:48:02.373549] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.134 [2024-12-13 05:48:02.373554] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.134 [2024-12-13 05:48:02.373560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:101880 len:8 PRP1 0x0 PRP2 0x0 00:32:17.134 [2024-12-13 05:48:02.373566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.134 [2024-12-13 05:48:02.373609] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:32:17.134 [2024-12-13 05:48:02.373632] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:17.134 [2024-12-13 05:48:02.373640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.134 [2024-12-13 05:48:02.373648] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 
[... four ASYNC EVENT REQUEST (0c) admin commands (qid:0, cid:0 through cid:3) are printed and completed ABORTED - SQ DELETION (00/08) ...]
[2024-12-13 05:48:02.373688] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
[2024-12-13 05:48:02.376491] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
[2024-12-13 05:48:02.376520] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10703a0 (9): Bad file descriptor
[2024-12-13 05:48:02.407844] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
11245.00 IOPS, 43.93 MiB/s [2024-12-13T04:48:17.149Z] 11327.33 IOPS, 44.25 MiB/s [2024-12-13T04:48:17.149Z] 11356.25 IOPS, 44.36 MiB/s [2024-12-13T04:48:17.149Z]
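The bandwidth samples above are consistent with the command sizes in this log: every I/O is len:8 blocks and the WRITE SGL entries carry len:0x1000 (4096 bytes), so each I/O moves 4 KiB and MiB/s should equal IOPS/256. A quick check (plain Python; the numbers are copied from the samples above and are not part of the test run itself):

    # Sanity check: len:8 blocks with the 4096-byte (0x1000) SGL payloads
    # shown above means 4 KiB per I/O, i.e. MiB/s = IOPS * 4096 / 2**20.
    for iops in (11245.00, 11327.33, 11356.25):
        print(f'{iops:9.2f} IOPS -> {iops * 4096 / 2**20:6.2f} MiB/s')
    # Output: 43.93, 44.25 and 44.36 MiB/s, matching the printed samples.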
[2024-12-13 05:48:06.067560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:55560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-12-13 05:48:06.067593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same NOTICE pair repeats for WRITE lba:55568 through lba:56056 plus two interleaved READ commands (lba:55280 and lba:55288); every command completes ABORTED - SQ DELETION (00/08) qid:1 ...]
[2024-12-13 05:48:06.068541] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
[2024-12-13 05:48:06.068548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56064 len:8 PRP1 0x0 PRP2 0x0
[2024-12-13 05:48:06.068555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... four ASYNC EVENT REQUEST (0c) admin commands (qid:0, cid:3 down to cid:0) are printed and completed ABORTED - SQ DELETION (00/08) ...]
[2024-12-13 05:48:06.068636] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10703a0 is same with the state(6) to be set
[2024-12-13 05:48:06.068759] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
[2024-12-13 05:48:06.068766] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
[2024-12-13 05:48:06.068772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56072 len:8 PRP1 0x0 PRP2 0x0
[2024-12-13 05:48:06.068778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same abort/manual-complete sequence repeats for WRITE lba:56080 through lba:56232 and continues beyond this excerpt ...]
cid:0 nsid:1 lba:56240 len:8 PRP1 0x0 PRP2 0x0 00:32:17.137 [2024-12-13 05:48:06.069270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.137 [2024-12-13 05:48:06.069276] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.137 [2024-12-13 05:48:06.069281] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.137 [2024-12-13 05:48:06.069286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56248 len:8 PRP1 0x0 PRP2 0x0 00:32:17.137 [2024-12-13 05:48:06.069293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.137 [2024-12-13 05:48:06.069299] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.137 [2024-12-13 05:48:06.069304] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.137 [2024-12-13 05:48:06.069309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56256 len:8 PRP1 0x0 PRP2 0x0 00:32:17.137 [2024-12-13 05:48:06.069315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.137 [2024-12-13 05:48:06.069322] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.137 [2024-12-13 05:48:06.069327] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.137 [2024-12-13 05:48:06.069332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56264 len:8 PRP1 0x0 PRP2 0x0 00:32:17.137 [2024-12-13 05:48:06.069338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.137 [2024-12-13 05:48:06.069345] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.137 [2024-12-13 05:48:06.069350] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.137 [2024-12-13 05:48:06.069355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56272 len:8 PRP1 0x0 PRP2 0x0 00:32:17.137 [2024-12-13 05:48:06.069361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.137 [2024-12-13 05:48:06.069368] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.137 [2024-12-13 05:48:06.069373] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.137 [2024-12-13 05:48:06.069378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56280 len:8 PRP1 0x0 PRP2 0x0 00:32:17.137 [2024-12-13 05:48:06.069386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.137 [2024-12-13 05:48:06.069393] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.137 [2024-12-13 05:48:06.069398] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.137 [2024-12-13 05:48:06.069403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56288 len:8 PRP1 0x0 PRP2 0x0 
00:32:17.137 [2024-12-13 05:48:06.069409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.137 [2024-12-13 05:48:06.069416] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.137 [2024-12-13 05:48:06.069421] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.137 [2024-12-13 05:48:06.069426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:55296 len:8 PRP1 0x0 PRP2 0x0 00:32:17.137 [2024-12-13 05:48:06.069433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.137 [2024-12-13 05:48:06.069439] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.137 [2024-12-13 05:48:06.069444] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.137 [2024-12-13 05:48:06.069455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:55304 len:8 PRP1 0x0 PRP2 0x0 00:32:17.137 [2024-12-13 05:48:06.069462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.137 [2024-12-13 05:48:06.069468] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.137 [2024-12-13 05:48:06.069473] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.137 [2024-12-13 05:48:06.069478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:55312 len:8 PRP1 0x0 PRP2 0x0 00:32:17.138 [2024-12-13 05:48:06.069484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.138 [2024-12-13 05:48:06.069491] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.138 [2024-12-13 05:48:06.069495] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.138 [2024-12-13 05:48:06.069501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:55320 len:8 PRP1 0x0 PRP2 0x0 00:32:17.138 [2024-12-13 05:48:06.069507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.138 [2024-12-13 05:48:06.069514] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.138 [2024-12-13 05:48:06.069520] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.138 [2024-12-13 05:48:06.069525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:55328 len:8 PRP1 0x0 PRP2 0x0 00:32:17.138 [2024-12-13 05:48:06.069531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.138 [2024-12-13 05:48:06.069538] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.138 [2024-12-13 05:48:06.069543] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.138 [2024-12-13 05:48:06.069548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:55336 len:8 PRP1 0x0 PRP2 0x0 00:32:17.138 [2024-12-13 05:48:06.069555] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.138 [2024-12-13 05:48:06.069561] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.138 [2024-12-13 05:48:06.069566] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.138 [2024-12-13 05:48:06.069572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:55344 len:8 PRP1 0x0 PRP2 0x0 00:32:17.138 [2024-12-13 05:48:06.069579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.138 [2024-12-13 05:48:06.069586] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.138 [2024-12-13 05:48:06.069590] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.138 [2024-12-13 05:48:06.069596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:55352 len:8 PRP1 0x0 PRP2 0x0 00:32:17.138 [2024-12-13 05:48:06.069602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.138 [2024-12-13 05:48:06.069613] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.138 [2024-12-13 05:48:06.069618] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.138 [2024-12-13 05:48:06.069624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:55360 len:8 PRP1 0x0 PRP2 0x0 00:32:17.138 [2024-12-13 05:48:06.069630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.138 [2024-12-13 05:48:06.069637] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.138 [2024-12-13 05:48:06.069641] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.138 [2024-12-13 05:48:06.069647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:55368 len:8 PRP1 0x0 PRP2 0x0 00:32:17.138 [2024-12-13 05:48:06.069653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.138 [2024-12-13 05:48:06.069659] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.138 [2024-12-13 05:48:06.069664] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.138 [2024-12-13 05:48:06.069669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:55376 len:8 PRP1 0x0 PRP2 0x0 00:32:17.138 [2024-12-13 05:48:06.069676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.138 [2024-12-13 05:48:06.069682] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.138 [2024-12-13 05:48:06.069687] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.138 [2024-12-13 05:48:06.069694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:55384 len:8 PRP1 0x0 PRP2 0x0 00:32:17.138 [2024-12-13 05:48:06.069700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.138 [2024-12-13 05:48:06.069707] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.138 [2024-12-13 05:48:06.069712] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.138 [2024-12-13 05:48:06.069717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:55392 len:8 PRP1 0x0 PRP2 0x0 00:32:17.138 [2024-12-13 05:48:06.069724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.138 [2024-12-13 05:48:06.069731] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.138 [2024-12-13 05:48:06.069735] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.138 [2024-12-13 05:48:06.069740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:55400 len:8 PRP1 0x0 PRP2 0x0 00:32:17.138 [2024-12-13 05:48:06.069746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.138 [2024-12-13 05:48:06.079735] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.138 [2024-12-13 05:48:06.079751] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.138 [2024-12-13 05:48:06.079760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:55408 len:8 PRP1 0x0 PRP2 0x0 00:32:17.138 [2024-12-13 05:48:06.079768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.138 [2024-12-13 05:48:06.079777] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.138 [2024-12-13 05:48:06.079783] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.138 [2024-12-13 05:48:06.079791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56296 len:8 PRP1 0x0 PRP2 0x0 00:32:17.138 [2024-12-13 05:48:06.079799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.138 [2024-12-13 05:48:06.079809] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.138 [2024-12-13 05:48:06.079815] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.138 [2024-12-13 05:48:06.079822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:55416 len:8 PRP1 0x0 PRP2 0x0 00:32:17.138 [2024-12-13 05:48:06.079831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.138 [2024-12-13 05:48:06.079839] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.138 [2024-12-13 05:48:06.079846] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.138 [2024-12-13 05:48:06.079852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:55424 len:8 PRP1 0x0 PRP2 0x0 00:32:17.138 [2024-12-13 05:48:06.079861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:32:17.138 [2024-12-13 05:48:06.079869] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.138 [2024-12-13 05:48:06.079876] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.138 [2024-12-13 05:48:06.079882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:55432 len:8 PRP1 0x0 PRP2 0x0 00:32:17.138 [2024-12-13 05:48:06.079891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.138 [2024-12-13 05:48:06.079900] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.138 [2024-12-13 05:48:06.079906] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.138 [2024-12-13 05:48:06.079914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:55440 len:8 PRP1 0x0 PRP2 0x0 00:32:17.138 [2024-12-13 05:48:06.079923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.138 [2024-12-13 05:48:06.079932] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.138 [2024-12-13 05:48:06.079938] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.138 [2024-12-13 05:48:06.079945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:55448 len:8 PRP1 0x0 PRP2 0x0 00:32:17.138 [2024-12-13 05:48:06.079953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.138 [2024-12-13 05:48:06.079962] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.138 [2024-12-13 05:48:06.079969] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.138 [2024-12-13 05:48:06.079975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:55456 len:8 PRP1 0x0 PRP2 0x0 00:32:17.138 [2024-12-13 05:48:06.079984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.138 [2024-12-13 05:48:06.079994] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.138 [2024-12-13 05:48:06.080001] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.138 [2024-12-13 05:48:06.080008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:55464 len:8 PRP1 0x0 PRP2 0x0 00:32:17.138 [2024-12-13 05:48:06.080018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.138 [2024-12-13 05:48:06.080028] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.138 [2024-12-13 05:48:06.080034] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.138 [2024-12-13 05:48:06.080041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:55472 len:8 PRP1 0x0 PRP2 0x0 00:32:17.138 [2024-12-13 05:48:06.080051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.138 [2024-12-13 05:48:06.080060] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.138 [2024-12-13 05:48:06.080066] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.138 [2024-12-13 05:48:06.080073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:55480 len:8 PRP1 0x0 PRP2 0x0 00:32:17.138 [2024-12-13 05:48:06.080083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.138 [2024-12-13 05:48:06.080092] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.138 [2024-12-13 05:48:06.080099] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.138 [2024-12-13 05:48:06.080105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:55488 len:8 PRP1 0x0 PRP2 0x0 00:32:17.138 [2024-12-13 05:48:06.080113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.138 [2024-12-13 05:48:06.080124] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.138 [2024-12-13 05:48:06.080131] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.138 [2024-12-13 05:48:06.080138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:55496 len:8 PRP1 0x0 PRP2 0x0 00:32:17.139 [2024-12-13 05:48:06.080146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.139 [2024-12-13 05:48:06.080157] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.139 [2024-12-13 05:48:06.080163] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.139 [2024-12-13 05:48:06.080170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:55504 len:8 PRP1 0x0 PRP2 0x0 00:32:17.139 [2024-12-13 05:48:06.080179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.139 [2024-12-13 05:48:06.080190] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.139 [2024-12-13 05:48:06.080196] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.139 [2024-12-13 05:48:06.080203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:55512 len:8 PRP1 0x0 PRP2 0x0 00:32:17.139 [2024-12-13 05:48:06.080212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.139 [2024-12-13 05:48:06.080222] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.139 [2024-12-13 05:48:06.080229] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.139 [2024-12-13 05:48:06.080235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:55520 len:8 PRP1 0x0 PRP2 0x0 00:32:17.139 [2024-12-13 05:48:06.080247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.139 [2024-12-13 05:48:06.080256] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 
00:32:17.139 [2024-12-13 05:48:06.080263] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.139 [2024-12-13 05:48:06.080270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:55528 len:8 PRP1 0x0 PRP2 0x0 00:32:17.139 [2024-12-13 05:48:06.080278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.139 [2024-12-13 05:48:06.080289] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.139 [2024-12-13 05:48:06.080296] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.139 [2024-12-13 05:48:06.080303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:55536 len:8 PRP1 0x0 PRP2 0x0 00:32:17.139 [2024-12-13 05:48:06.080311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.139 [2024-12-13 05:48:06.080321] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.139 [2024-12-13 05:48:06.080328] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.139 [2024-12-13 05:48:06.080336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:55544 len:8 PRP1 0x0 PRP2 0x0 00:32:17.139 [2024-12-13 05:48:06.080344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.139 [2024-12-13 05:48:06.080352] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.139 [2024-12-13 05:48:06.080359] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.139 [2024-12-13 05:48:06.080366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:55552 len:8 PRP1 0x0 PRP2 0x0 00:32:17.139 [2024-12-13 05:48:06.080376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.139 [2024-12-13 05:48:06.080385] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.139 [2024-12-13 05:48:06.080391] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.139 [2024-12-13 05:48:06.080398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55560 len:8 PRP1 0x0 PRP2 0x0 00:32:17.139 [2024-12-13 05:48:06.080408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.139 [2024-12-13 05:48:06.080417] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.139 [2024-12-13 05:48:06.080424] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.139 [2024-12-13 05:48:06.080431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55568 len:8 PRP1 0x0 PRP2 0x0 00:32:17.139 [2024-12-13 05:48:06.080440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.139 [2024-12-13 05:48:06.080465] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.139 [2024-12-13 05:48:06.080472] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.139 [2024-12-13 05:48:06.080479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55576 len:8 PRP1 0x0 PRP2 0x0 00:32:17.139 [2024-12-13 05:48:06.080488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.139 [2024-12-13 05:48:06.080496] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.139 [2024-12-13 05:48:06.080502] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.139 [2024-12-13 05:48:06.080511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55584 len:8 PRP1 0x0 PRP2 0x0 00:32:17.139 [2024-12-13 05:48:06.080520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.139 [2024-12-13 05:48:06.080529] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.139 [2024-12-13 05:48:06.080535] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.139 [2024-12-13 05:48:06.080544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55592 len:8 PRP1 0x0 PRP2 0x0 00:32:17.139 [2024-12-13 05:48:06.080556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.139 [2024-12-13 05:48:06.080566] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.139 [2024-12-13 05:48:06.080573] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.139 [2024-12-13 05:48:06.080580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55600 len:8 PRP1 0x0 PRP2 0x0 00:32:17.139 [2024-12-13 05:48:06.080589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.139 [2024-12-13 05:48:06.080597] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.139 [2024-12-13 05:48:06.080604] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.139 [2024-12-13 05:48:06.080611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55608 len:8 PRP1 0x0 PRP2 0x0 00:32:17.139 [2024-12-13 05:48:06.080619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.139 [2024-12-13 05:48:06.080628] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.139 [2024-12-13 05:48:06.080634] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.139 [2024-12-13 05:48:06.080641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55616 len:8 PRP1 0x0 PRP2 0x0 00:32:17.139 [2024-12-13 05:48:06.080649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.139 [2024-12-13 05:48:06.080658] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.139 [2024-12-13 05:48:06.080664] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:32:17.139 [2024-12-13 05:48:06.080671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55624 len:8 PRP1 0x0 PRP2 0x0 00:32:17.139 [2024-12-13 05:48:06.080679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.139 [2024-12-13 05:48:06.080688] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.139 [2024-12-13 05:48:06.080694] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.139 [2024-12-13 05:48:06.080701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55632 len:8 PRP1 0x0 PRP2 0x0 00:32:17.139 [2024-12-13 05:48:06.080710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.139 [2024-12-13 05:48:06.080719] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.139 [2024-12-13 05:48:06.080725] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.139 [2024-12-13 05:48:06.080732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55640 len:8 PRP1 0x0 PRP2 0x0 00:32:17.139 [2024-12-13 05:48:06.080740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.139 [2024-12-13 05:48:06.080749] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.139 [2024-12-13 05:48:06.080757] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.139 [2024-12-13 05:48:06.080764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55648 len:8 PRP1 0x0 PRP2 0x0 00:32:17.139 [2024-12-13 05:48:06.080775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.139 [2024-12-13 05:48:06.080783] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.139 [2024-12-13 05:48:06.080790] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.139 [2024-12-13 05:48:06.080797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55656 len:8 PRP1 0x0 PRP2 0x0 00:32:17.139 [2024-12-13 05:48:06.080805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.139 [2024-12-13 05:48:06.080814] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.139 [2024-12-13 05:48:06.080821] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.139 [2024-12-13 05:48:06.080827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55664 len:8 PRP1 0x0 PRP2 0x0 00:32:17.139 [2024-12-13 05:48:06.080836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.139 [2024-12-13 05:48:06.080845] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.139 [2024-12-13 05:48:06.080851] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.139 [2024-12-13 
05:48:06.080858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55672 len:8 PRP1 0x0 PRP2 0x0 00:32:17.140 [2024-12-13 05:48:06.080867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.140 [2024-12-13 05:48:06.080875] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.140 [2024-12-13 05:48:06.080882] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.140 [2024-12-13 05:48:06.080889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55680 len:8 PRP1 0x0 PRP2 0x0 00:32:17.140 [2024-12-13 05:48:06.080897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.140 [2024-12-13 05:48:06.080906] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.140 [2024-12-13 05:48:06.080912] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.140 [2024-12-13 05:48:06.080919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55688 len:8 PRP1 0x0 PRP2 0x0 00:32:17.140 [2024-12-13 05:48:06.080927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.140 [2024-12-13 05:48:06.080936] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.140 [2024-12-13 05:48:06.080942] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.140 [2024-12-13 05:48:06.080950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55696 len:8 PRP1 0x0 PRP2 0x0 00:32:17.140 [2024-12-13 05:48:06.080958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.140 [2024-12-13 05:48:06.080966] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.140 [2024-12-13 05:48:06.080973] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.140 [2024-12-13 05:48:06.080980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55704 len:8 PRP1 0x0 PRP2 0x0 00:32:17.140 [2024-12-13 05:48:06.080988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.140 [2024-12-13 05:48:06.080998] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.140 [2024-12-13 05:48:06.081004] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.140 [2024-12-13 05:48:06.081011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55712 len:8 PRP1 0x0 PRP2 0x0 00:32:17.140 [2024-12-13 05:48:06.081020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.140 [2024-12-13 05:48:06.081028] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.140 [2024-12-13 05:48:06.081034] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.140 [2024-12-13 05:48:06.081041] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55720 len:8 PRP1 0x0 PRP2 0x0 00:32:17.140 [2024-12-13 05:48:06.081050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.140 [2024-12-13 05:48:06.081058] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.140 [2024-12-13 05:48:06.081065] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.140 [2024-12-13 05:48:06.081072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55728 len:8 PRP1 0x0 PRP2 0x0 00:32:17.140 [2024-12-13 05:48:06.081080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.140 [2024-12-13 05:48:06.081089] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.140 [2024-12-13 05:48:06.081095] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.140 [2024-12-13 05:48:06.081102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55736 len:8 PRP1 0x0 PRP2 0x0 00:32:17.140 [2024-12-13 05:48:06.081110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.140 [2024-12-13 05:48:06.081119] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.140 [2024-12-13 05:48:06.081125] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.140 [2024-12-13 05:48:06.081132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55744 len:8 PRP1 0x0 PRP2 0x0 00:32:17.140 [2024-12-13 05:48:06.081140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.140 [2024-12-13 05:48:06.081149] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.140 [2024-12-13 05:48:06.081155] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.140 [2024-12-13 05:48:06.081162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55752 len:8 PRP1 0x0 PRP2 0x0 00:32:17.140 [2024-12-13 05:48:06.081171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.140 [2024-12-13 05:48:06.081179] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.140 [2024-12-13 05:48:06.081185] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.140 [2024-12-13 05:48:06.081194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55760 len:8 PRP1 0x0 PRP2 0x0 00:32:17.140 [2024-12-13 05:48:06.081203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.140 [2024-12-13 05:48:06.081212] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.140 [2024-12-13 05:48:06.081218] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.140 [2024-12-13 05:48:06.081225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:55768 len:8 PRP1 0x0 PRP2 0x0 00:32:17.140 [2024-12-13 05:48:06.081235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.140 [2024-12-13 05:48:06.081244] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.140 [2024-12-13 05:48:06.081250] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.140 [2024-12-13 05:48:06.081257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55776 len:8 PRP1 0x0 PRP2 0x0 00:32:17.140 [2024-12-13 05:48:06.081266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.140 [2024-12-13 05:48:06.081275] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.140 [2024-12-13 05:48:06.081281] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.140 [2024-12-13 05:48:06.081288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55784 len:8 PRP1 0x0 PRP2 0x0 00:32:17.140 [2024-12-13 05:48:06.081297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.140 [2024-12-13 05:48:06.081306] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.140 [2024-12-13 05:48:06.081312] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.140 [2024-12-13 05:48:06.081319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55792 len:8 PRP1 0x0 PRP2 0x0 00:32:17.140 [2024-12-13 05:48:06.081327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.140 [2024-12-13 05:48:06.081337] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.140 [2024-12-13 05:48:06.081343] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.140 [2024-12-13 05:48:06.081350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55800 len:8 PRP1 0x0 PRP2 0x0 00:32:17.140 [2024-12-13 05:48:06.081358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.140 [2024-12-13 05:48:06.081367] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.140 [2024-12-13 05:48:06.081374] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.140 [2024-12-13 05:48:06.081380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55808 len:8 PRP1 0x0 PRP2 0x0 00:32:17.140 [2024-12-13 05:48:06.081389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.140 [2024-12-13 05:48:06.081398] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.140 [2024-12-13 05:48:06.081404] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.140 [2024-12-13 05:48:06.081411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55816 len:8 PRP1 0x0 PRP2 0x0 
00:32:17.140 [2024-12-13 05:48:06.081419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.140 [2024-12-13 05:48:06.081428] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.140 [2024-12-13 05:48:06.081434] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.140 [2024-12-13 05:48:06.081441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55824 len:8 PRP1 0x0 PRP2 0x0 00:32:17.140 [2024-12-13 05:48:06.081454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.140 [2024-12-13 05:48:06.081463] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.140 [2024-12-13 05:48:06.081472] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.140 [2024-12-13 05:48:06.081479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55832 len:8 PRP1 0x0 PRP2 0x0 00:32:17.140 [2024-12-13 05:48:06.081488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.140 [2024-12-13 05:48:06.081497] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.140 [2024-12-13 05:48:06.081504] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.140 [2024-12-13 05:48:06.081511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55840 len:8 PRP1 0x0 PRP2 0x0 00:32:17.140 [2024-12-13 05:48:06.081519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.140 [2024-12-13 05:48:06.081528] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.140 [2024-12-13 05:48:06.081535] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.140 [2024-12-13 05:48:06.081542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55848 len:8 PRP1 0x0 PRP2 0x0 00:32:17.140 [2024-12-13 05:48:06.081550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.140 [2024-12-13 05:48:06.081559] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.140 [2024-12-13 05:48:06.081565] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.140 [2024-12-13 05:48:06.081572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55856 len:8 PRP1 0x0 PRP2 0x0 00:32:17.141 [2024-12-13 05:48:06.081581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.141 [2024-12-13 05:48:06.081589] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.141 [2024-12-13 05:48:06.081596] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.141 [2024-12-13 05:48:06.081603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55864 len:8 PRP1 0x0 PRP2 0x0 00:32:17.141 [2024-12-13 05:48:06.081612] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.141 [2024-12-13 05:48:06.081620] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.141 [2024-12-13 05:48:06.081627] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.141 [2024-12-13 05:48:06.081634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55872 len:8 PRP1 0x0 PRP2 0x0 00:32:17.141 [2024-12-13 05:48:06.081642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.141 [2024-12-13 05:48:06.081651] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.141 [2024-12-13 05:48:06.081658] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.141 [2024-12-13 05:48:06.081665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55880 len:8 PRP1 0x0 PRP2 0x0 00:32:17.141 [2024-12-13 05:48:06.081675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.141 [2024-12-13 05:48:06.081684] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.141 [2024-12-13 05:48:06.081691] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.141 [2024-12-13 05:48:06.081698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55888 len:8 PRP1 0x0 PRP2 0x0 00:32:17.141 [2024-12-13 05:48:06.081706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.141 [2024-12-13 05:48:06.081716] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.141 [2024-12-13 05:48:06.081723] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.141 [2024-12-13 05:48:06.081730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55896 len:8 PRP1 0x0 PRP2 0x0 00:32:17.141 [2024-12-13 05:48:06.081738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.141 [2024-12-13 05:48:06.081746] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.141 [2024-12-13 05:48:06.081753] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.141 [2024-12-13 05:48:06.081760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55904 len:8 PRP1 0x0 PRP2 0x0 00:32:17.141 [2024-12-13 05:48:06.081768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.141 [2024-12-13 05:48:06.081776] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.141 [2024-12-13 05:48:06.081783] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.141 [2024-12-13 05:48:06.081790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55912 len:8 PRP1 0x0 PRP2 0x0 00:32:17.141 [2024-12-13 05:48:06.081798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:17.141 [2024-12-13 05:48:06.081809] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:32:17.141 [2024-12-13 05:48:06.081815] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:32:17.141 [2024-12-13 05:48:06.081823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55920 len:8 PRP1 0x0 PRP2 0x0
00:32:17.141 [2024-12-13 05:48:06.081831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:17.141 [... the same abort / manual-complete / ABORTED - SQ DELETION sequence repeats for every remaining queued I/O on sqid:1: WRITE lba:55928-56064 and READ lba:55280-55288 ...]
00:32:17.142 [2024-12-13 05:48:06.089245] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:32:17.142 [2024-12-13 05:48:06.089260] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state.
00:32:17.142 [2024-12-13 05:48:06.089308] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10703a0 (9): Bad file descriptor
00:32:17.142 [2024-12-13 05:48:06.094517] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller
00:32:17.142 [2024-12-13 05:48:06.164086] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful.
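The block above is one complete failover cycle as bdev_nvme sees it: the active TCP path (10.0.0.2:4421) goes away, every I/O still queued on the deleted submission queue is completed manually as ABORTED - SQ DELETION, the controller fails over to the next registered trid (10.0.0.2:4422), and the reset completes. When auditing a captured log for these cycles, counting the two marker lines is usually enough; a minimal sketch, assuming the log has been saved to a file (the test itself writes one to test/nvmf/host/try.txt, as the trace further below shows):

    #!/usr/bin/env bash
    # Summarize SPDK bdev_nvme failover activity in a captured log file.
    logfile=${1:?usage: $0 <captured-log>}

    # One "Start failover from A to B" notice is printed per path change.
    grep -o 'Start failover from [0-9.]*:[0-9]* to [0-9.]*:[0-9]*' "$logfile" | sort | uniq -c

    # One "Resetting controller successful" notice is printed per completed recovery.
    grep -c 'Resetting controller successful' "$logfile"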
00:32:17.142 11134.80 IOPS, 43.50 MiB/s [2024-12-13T04:48:17.157Z] 11193.33 IOPS, 43.72 MiB/s [2024-12-13T04:48:17.157Z] 11226.29 IOPS, 43.85 MiB/s [2024-12-13T04:48:17.157Z] 11257.38 IOPS, 43.97 MiB/s [2024-12-13T04:48:17.157Z] 11268.11 IOPS, 44.02 MiB/s [2024-12-13T04:48:17.157Z]
00:32:17.142 [2024-12-13 05:48:10.521222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:85552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:17.142 [2024-12-13 05:48:10.521257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:17.142 [... the same print_command / ABORTED - SQ DELETION pairing repeats for all remaining in-flight I/O on qid:1 - READ lba:85560-86128 (SGL TRANSPORT DATA BLOCK) and WRITE lba:86136-86568 (SGL DATA BLOCK OFFSET), the final READ (lba:86128) going through the manual abort path ...]
00:32:17.145 [2024-12-13 05:48:10.523206] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
00:32:17.145 [2024-12-13 05:48:10.523227] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:32:17.145 [2024-12-13 05:48:10.523234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:17.145 [... the ASYNC EVENT REQUEST / ABORTED - SQ DELETION pairing repeats for admin cid:1-3 ...]
00:32:17.145 [2024-12-13 05:48:10.523281] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state.
00:32:17.145 [2024-12-13 05:48:10.526072] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller
00:32:17.145 [2024-12-13 05:48:10.526103] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10703a0 (9): Bad file descriptor
00:32:17.145 [2024-12-13 05:48:10.680405] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful.
00:32:17.145 11106.60 IOPS, 43.39 MiB/s [2024-12-13T04:48:17.160Z] 11132.73 IOPS, 43.49 MiB/s [2024-12-13T04:48:17.160Z] 11167.67 IOPS, 43.62 MiB/s [2024-12-13T04:48:17.160Z] 11172.23 IOPS, 43.64 MiB/s [2024-12-13T04:48:17.160Z] 11196.00 IOPS, 43.73 MiB/s [2024-12-13T04:48:17.160Z] 11214.07 IOPS, 43.80 MiB/s
00:32:17.145 Latency(us)
00:32:17.145 [2024-12-13T04:48:17.160Z] Device Information : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:32:17.145 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:32:17.145 Verification LBA range: start 0x0 length 0x4000
00:32:17.145 NVMe0n1            :      15.01   11216.31      43.81    833.50     0.00   10600.86     421.30   28586.18
00:32:17.145 [2024-12-13T04:48:17.160Z] ===================================================================================================================
00:32:17.145 [2024-12-13T04:48:17.160Z] Total              :              11216.31      43.81    833.50     0.00   10600.86     421.30   28586.18
00:32:17.145 Received shutdown signal, test time was about 15.000000 seconds
00:32:17.145
00:32:17.145 Latency(us)
00:32:17.145 [2024-12-13T04:48:17.160Z] Device Information : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:32:17.145 [2024-12-13T04:48:17.160Z] ===================================================================================================================
00:32:17.145 [2024-12-13T04:48:17.160Z] Total              :                  0.00       0.00      0.00     0.00       0.00       0.00       0.00
00:32:17.145 05:48:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:32:17.145 05:48:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
00:32:17.145 05:48:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:32:17.145 05:48:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=482764
00:32:17.145 05:48:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:32:17.145 05:48:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 482764 /var/tmp/bdevperf.sock
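Here host/failover.sh counts the 'Resetting controller successful' notices in the captured output and requires exactly three, one per path transition (the two failovers logged above plus the 4420 to 4421 transition earlier in the run), before relaunching bdevperf on its RPC socket for the next phase. A hedged reconstruction of that assertion (variable name and error message are illustrative, not the verbatim script):

    # Count successful recoveries; the test expects one per failover.
    count=$(grep -c 'Resetting controller successful' "$testlog")
    if (( count != 3 )); then
        echo "expected 3 successful controller resets, got $count" >&2
        exit 1
    fi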
00:32:17.145 05:48:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 482764 ']'
00:32:17.145 05:48:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:32:17.145 05:48:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100
00:32:17.145 05:48:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:32:17.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:32:17.145 05:48:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable
00:32:17.145 05:48:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:32:17.145 05:48:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:32:17.145 05:48:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0
00:32:17.145 05:48:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:32:17.145 [2024-12-13 05:48:16.949196] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:32:17.145 05:48:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:32:17.403 [2024-12-13 05:48:17.141703] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
00:32:17.403 05:48:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:32:17.660 NVMe0n1
00:32:17.660 05:48:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:32:17.917
00:32:17.917 05:48:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:32:18.174
00:32:18.174 05:48:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:32:18.174 05:48:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0
00:32:18.431 05:48:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:32:18.688 05:48:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3
00:32:21.960 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:32:21.960 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0
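Reflowed this way, the trace doubles as a multipath recipe: add two extra TCP listeners on the target, attach the same subsystem to bdevperf over all three portals with -x failover, verify the controller exists, then detach the active path so I/O is forced onto the next one. In outline (addresses, ports, NQN, and socket are taken verbatim from the trace; the loop is an illustrative condensation, not the script's literal text):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # Target side: expose nqn.2016-06.io.spdk:cnode1 on two more portals.
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422

    # Initiator side (bdevperf): attach one controller with three failover paths.
    for port in 4420 4421 4422; do
        $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
            -t tcp -a 10.0.0.2 -s "$port" -f ipv4 \
            -n nqn.2016-06.io.spdk:cnode1 -x failover
    done

    # Drop the active path; bdev_nvme must fail over to 10.0.0.2:4421.
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1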
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:21.960 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:32:21.960 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=483661 00:32:21.960 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:21.960 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 483661 00:32:22.891 { 00:32:22.891 "results": [ 00:32:22.891 { 00:32:22.891 "job": "NVMe0n1", 00:32:22.891 "core_mask": "0x1", 00:32:22.891 "workload": "verify", 00:32:22.891 "status": "finished", 00:32:22.891 "verify_range": { 00:32:22.891 "start": 0, 00:32:22.891 "length": 16384 00:32:22.891 }, 00:32:22.891 "queue_depth": 128, 00:32:22.891 "io_size": 4096, 00:32:22.891 "runtime": 1.009885, 00:32:22.891 "iops": 11342.875673962877, 00:32:22.891 "mibps": 44.30810810141749, 00:32:22.891 "io_failed": 0, 00:32:22.891 "io_timeout": 0, 00:32:22.891 "avg_latency_us": 11242.633303818253, 00:32:22.891 "min_latency_us": 2465.401904761905, 00:32:22.891 "max_latency_us": 13793.76761904762 00:32:22.891 } 00:32:22.891 ], 00:32:22.891 "core_count": 1 00:32:22.891 } 00:32:22.891 05:48:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:22.891 [2024-12-13 05:48:16.601751] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:32:22.891 [2024-12-13 05:48:16.601809] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid482764 ] 00:32:22.891 [2024-12-13 05:48:16.672494] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:22.891 [2024-12-13 05:48:16.692119] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:32:22.891 [2024-12-13 05:48:18.494418] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:32:22.891 [2024-12-13 05:48:18.494466] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:22.891 [2024-12-13 05:48:18.494477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:22.891 [2024-12-13 05:48:18.494485] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:22.891 [2024-12-13 05:48:18.494493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:22.891 [2024-12-13 05:48:18.494500] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:22.891 [2024-12-13 05:48:18.494507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:22.891 [2024-12-13 05:48:18.494514] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
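
A quick cross-check on the result record printed above: bdevperf derives "mibps" from "iops" and the 4096-byte io_size, so the two fields must always agree; the try.txt capture resumes right after this. A one-line sketch of that arithmetic (awk chosen purely for illustration):

  awk 'BEGIN { printf "%.11f\n", 11342.875673962877 * 4096 / (1024 * 1024) }'
  # prints 44.30810810142, matching the "mibps" field and the 44.31 MiB/s summary line
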
00:32:22.891 [2024-12-13 05:48:18.494520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:22.891 [2024-12-13 05:48:18.494527] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:32:22.891 [2024-12-13 05:48:18.494553] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:32:22.891 [2024-12-13 05:48:18.494567] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13ce3a0 (9): Bad file descriptor 00:32:22.891 [2024-12-13 05:48:18.503221] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:32:22.891 Running I/O for 1 seconds... 00:32:22.891 11327.00 IOPS, 44.25 MiB/s 00:32:22.891 Latency(us) 00:32:22.891 [2024-12-13T04:48:22.906Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:22.891 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:22.891 Verification LBA range: start 0x0 length 0x4000 00:32:22.891 NVMe0n1 : 1.01 11342.88 44.31 0.00 0.00 11242.63 2465.40 13793.77 00:32:22.891 [2024-12-13T04:48:22.906Z] =================================================================================================================== 00:32:22.891 [2024-12-13T04:48:22.906Z] Total : 11342.88 44.31 0.00 0.00 11242.63 2465.40 13793.77 00:32:22.891 05:48:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:22.891 05:48:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:32:23.148 05:48:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:23.404 05:48:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:23.404 05:48:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:32:23.661 05:48:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:23.917 05:48:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:32:27.190 05:48:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:27.190 05:48:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:32:27.190 05:48:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 482764 00:32:27.190 05:48:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 482764 ']' 00:32:27.190 05:48:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 482764 00:32:27.190 05:48:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:32:27.190 05:48:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
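
The traces above and the killprocess sequence just below are the tail of the failover exercise: each path is registered against the bdevperf RPC socket with -x failover, a live path is torn down, and bdev_nvme_get_controllers is polled to confirm NVMe0 settled on a surviving listener. Condensed to its essential steps (a sketch only; rpc.py stands in for the repo's scripts/rpc.py):

  RPC=/var/tmp/bdevperf.sock
  # one attach per path; -x failover keeps the additional paths as alternates
  rpc.py -s $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover    # likewise for 4421, 4422
  # drop a live path, give the reset a moment, then confirm the controller survived
  rpc.py -s $RPC bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  sleep 3
  rpc.py -s $RPC bdev_nvme_get_controllers | grep -q NVMe0
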
00:32:27.190 05:48:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 482764 00:32:27.190 05:48:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:27.190 05:48:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:27.190 05:48:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 482764' 00:32:27.190 killing process with pid 482764 00:32:27.190 05:48:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 482764 00:32:27.190 05:48:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 482764 00:32:27.190 05:48:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:32:27.190 05:48:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:27.448 05:48:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:32:27.448 05:48:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:27.448 05:48:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:32:27.448 05:48:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:27.448 05:48:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:32:27.448 05:48:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:27.448 05:48:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:32:27.448 05:48:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:27.448 05:48:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:27.448 rmmod nvme_tcp 00:32:27.448 rmmod nvme_fabrics 00:32:27.448 rmmod nvme_keyring 00:32:27.448 05:48:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:27.448 05:48:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:32:27.448 05:48:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:32:27.448 05:48:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 479325 ']' 00:32:27.448 05:48:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 479325 00:32:27.448 05:48:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 479325 ']' 00:32:27.448 05:48:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 479325 00:32:27.448 05:48:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:32:27.448 05:48:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:27.448 05:48:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 479325 00:32:27.448 05:48:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:27.448 05:48:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:27.448 05:48:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 479325' 00:32:27.448 killing process with pid 479325 00:32:27.448 05:48:27 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 479325 00:32:27.448 05:48:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 479325 00:32:27.708 05:48:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:27.708 05:48:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:27.708 05:48:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:27.708 05:48:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:32:27.708 05:48:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:32:27.708 05:48:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:27.708 05:48:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:32:27.708 05:48:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:27.708 05:48:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:27.708 05:48:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:27.708 05:48:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:27.708 05:48:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:30.250 05:48:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:30.250 00:32:30.250 real 0m37.152s 00:32:30.250 user 1m57.735s 00:32:30.250 sys 0m7.866s 00:32:30.250 05:48:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:30.250 05:48:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:30.250 ************************************ 00:32:30.250 END TEST nvmf_failover 00:32:30.250 ************************************ 00:32:30.250 05:48:29 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:32:30.250 05:48:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:30.250 05:48:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:30.250 05:48:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.250 ************************************ 00:32:30.250 START TEST nvmf_host_discovery 00:32:30.250 ************************************ 00:32:30.250 05:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:32:30.250 * Looking for test storage... 
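
The storage probe completes just below, after which the xtrace walks scripts/common.sh deciding whether the installed lcov (1.15) predates version 2: cmp_versions splits each version string on the separators in IFS=.-: and compares the fields numerically, left to right. A simplified rendering of the logic the trace below executes (a sketch, not the full helper, which also handles the comparison operator):

  lt() {                                  # succeeds when version $1 < version $2
    local -a v1 v2; local i
    IFS=.-: read -ra v1 <<< "$1"
    IFS=.-: read -ra v2 <<< "$2"
    for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
      (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
      (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1                              # equal is not "less than"
  }
  lt 1.15 2 && echo old-lcov              # 1 < 2 on the first field, so this prints
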
00:32:30.250 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:30.250 05:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:30.250 05:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:32:30.250 05:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:30.250 05:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:30.250 05:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:30.250 05:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:30.250 05:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:30.250 05:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:32:30.250 05:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:32:30.251 05:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:32:30.251 05:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:32:30.251 05:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:32:30.251 05:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:32:30.251 05:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:32:30.251 05:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:30.251 05:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:32:30.251 05:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:32:30.251 05:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:30.251 05:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:30.251 05:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:32:30.251 05:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:32:30.251 05:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:30.251 05:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:32:30.251 05:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:32:30.251 05:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:32:30.251 05:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:32:30.251 05:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:30.251 05:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:32:30.251 05:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:32:30.251 05:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:30.251 05:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:30.251 05:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:32:30.251 05:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:30.251 05:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:30.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:30.251 --rc genhtml_branch_coverage=1 00:32:30.251 --rc genhtml_function_coverage=1 00:32:30.251 --rc genhtml_legend=1 00:32:30.251 --rc geninfo_all_blocks=1 00:32:30.251 --rc geninfo_unexecuted_blocks=1 00:32:30.251 00:32:30.251 ' 00:32:30.251 05:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:30.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:30.251 --rc genhtml_branch_coverage=1 00:32:30.251 --rc genhtml_function_coverage=1 00:32:30.251 --rc genhtml_legend=1 00:32:30.251 --rc geninfo_all_blocks=1 00:32:30.251 --rc geninfo_unexecuted_blocks=1 00:32:30.251 00:32:30.251 ' 00:32:30.251 05:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:30.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:30.251 --rc genhtml_branch_coverage=1 00:32:30.251 --rc genhtml_function_coverage=1 00:32:30.251 --rc genhtml_legend=1 00:32:30.251 --rc geninfo_all_blocks=1 00:32:30.251 --rc geninfo_unexecuted_blocks=1 00:32:30.251 00:32:30.251 ' 00:32:30.251 05:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:30.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:30.251 --rc genhtml_branch_coverage=1 00:32:30.251 --rc genhtml_function_coverage=1 00:32:30.251 --rc genhtml_legend=1 00:32:30.251 --rc geninfo_all_blocks=1 00:32:30.251 --rc geninfo_unexecuted_blocks=1 00:32:30.251 00:32:30.251 ' 00:32:30.251 05:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:30.251 05:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:32:30.251 05:48:29 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:30.251 05:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:30.251 05:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:30.251 05:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:30.251 05:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:30.251 05:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:30.251 05:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:30.251 05:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:30.251 05:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:30.251 05:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:30.251 05:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:32:30.251 05:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:32:30.251 05:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:30.251 05:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:30.251 05:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:30.251 05:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:30.251 05:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:30.251 05:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:32:30.251 05:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:30.251 05:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:30.251 05:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:30.251 05:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:30.251 05:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:30.251 05:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:30.251 05:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:32:30.251 05:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:30.251 05:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:32:30.251 05:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:30.251 05:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:30.251 05:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:30.251 05:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:30.251 05:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:30.251 05:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:30.251 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:30.251 05:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:30.251 05:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:30.251 05:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:30.251 05:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:32:30.251 05:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:32:30.251 05:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:32:30.251 05:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:32:30.251 05:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:32:30.251 05:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:32:30.251 05:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:32:30.251 05:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:30.251 05:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:30.251 05:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:30.251 05:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:30.251 05:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:30.251 05:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:30.251 05:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:30.252 05:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:30.252 05:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:30.252 05:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:30.252 05:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:32:30.252 05:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:35.535 05:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:35.535 05:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:32:35.535 05:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:35.535 05:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:35.535 05:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:35.535 05:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:35.535 05:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:35.535 05:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:32:35.535 05:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:35.535 05:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:32:35.535 05:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:32:35.535 05:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:32:35.535 05:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:32:35.535 05:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:32:35.535 05:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:32:35.535 05:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:35.535 05:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:35.535 05:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:35.535 05:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:35.535 05:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:35.535 05:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:35.535 05:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:35.535 05:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:35.535 05:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:35.535 05:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:35.535 05:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:35.535 05:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:35.535 05:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:35.535 05:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:35.535 05:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:35.535 05:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:35.535 05:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:35.535 05:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:35.535 05:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:35.535 05:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:32:35.535 Found 0000:af:00.0 (0x8086 - 0x159b) 00:32:35.535 05:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:35.535 05:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:35.535 05:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:35.535 05:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:35.535 05:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:35.535 05:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:35.535 05:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:32:35.535 Found 0000:af:00.1 (0x8086 - 0x159b) 00:32:35.535 05:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:35.535 05:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:35.535 05:48:35 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:35.535 05:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:35.535 05:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:35.535 05:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:35.535 05:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:35.535 05:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:35.535 05:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:35.535 05:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:35.535 05:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:35.535 05:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:35.535 05:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:35.535 05:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:35.535 05:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:35.535 05:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:32:35.535 Found net devices under 0000:af:00.0: cvl_0_0 00:32:35.535 05:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:35.535 05:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:35.535 05:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:35.535 05:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:35.535 05:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:35.535 05:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:35.535 05:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:35.535 05:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:35.535 05:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:32:35.535 Found net devices under 0000:af:00.1: cvl_0_1 00:32:35.795 05:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:35.795 05:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:35.795 05:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:32:35.795 05:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:35.795 05:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:35.795 05:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:35.795 05:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:35.795 
05:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:35.795 05:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:35.795 05:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:35.795 05:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:35.795 05:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:35.795 05:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:35.795 05:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:35.795 05:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:35.795 05:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:35.795 05:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:35.795 05:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:35.795 05:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:35.795 05:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:35.795 05:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:35.795 05:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:35.795 05:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:35.795 05:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:35.795 05:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:35.795 05:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:35.795 05:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:35.795 05:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:35.795 05:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:35.795 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:35.795 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.294 ms 00:32:35.795 00:32:35.795 --- 10.0.0.2 ping statistics --- 00:32:35.795 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:35.795 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:32:35.795 05:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:35.795 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
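
Everything from nvmf_tcp_init above is the standard two-endpoint fixture for these phy runs: the first port of the NIC moves into a private network namespace and becomes the target at 10.0.0.2, the second port stays in the root namespace as the initiator at 10.0.0.1, and a single ping in each direction (the reverse one finishes just below) proves the wire before any NVMe/TCP traffic is attempted. The same setup, condensed to the commands the trace executes:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator stays in the root ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # root ns -> namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # namespace -> root ns
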
00:32:35.796 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:32:35.796 00:32:35.796 --- 10.0.0.1 ping statistics --- 00:32:35.796 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:35.796 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:32:35.796 05:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:35.796 05:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:32:35.796 05:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:35.796 05:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:35.796 05:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:35.796 05:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:35.796 05:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:35.796 05:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:35.796 05:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:36.054 05:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:32:36.054 05:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:36.054 05:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:36.054 05:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:36.054 05:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=487931 00:32:36.054 05:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:32:36.054 05:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 487931 00:32:36.054 05:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 487931 ']' 00:32:36.054 05:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:36.054 05:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:36.054 05:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:36.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:36.054 05:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:36.054 05:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:36.054 [2024-12-13 05:48:35.884833] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
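
With the wire verified, the namespaced nvmf_tgt comes up next (its EAL banner starts below) and the script provisions it over the default /var/tmp/spdk.sock RPC socket: a TCP transport, the well-known discovery subsystem listening on 8009, and null bdevs that later back the test subsystem's namespaces. The rpc_cmd calls traced further down, gathered into one sketch (rpc.py standing in for scripts/rpc.py):

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
      -t tcp -a 10.0.0.2 -s 8009                       # discovery service on 8009
  rpc.py bdev_null_create null0 1000 512               # null bdev, 512-byte blocks
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test
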
00:32:36.054 [2024-12-13 05:48:35.884876] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:36.054 [2024-12-13 05:48:35.947811] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:36.055 [2024-12-13 05:48:35.969143] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:36.055 [2024-12-13 05:48:35.969181] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:36.055 [2024-12-13 05:48:35.969189] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:36.055 [2024-12-13 05:48:35.969200] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:36.055 [2024-12-13 05:48:35.969206] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:36.055 [2024-12-13 05:48:35.969719] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:32:36.055 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:36.055 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:32:36.055 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:36.055 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:36.055 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:36.314 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:36.314 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:36.314 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:36.314 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:36.314 [2024-12-13 05:48:36.108191] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:36.314 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:36.314 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:32:36.314 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:36.314 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:36.314 [2024-12-13 05:48:36.120362] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:32:36.314 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:36.314 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:32:36.314 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:36.314 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:36.314 null0 00:32:36.314 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:36.314 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:32:36.314 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:36.314 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:36.314 null1 00:32:36.314 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:36.314 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:32:36.314 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:36.314 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:36.314 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:36.314 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=488043 00:32:36.314 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:32:36.314 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 488043 /tmp/host.sock 00:32:36.314 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 488043 ']' 00:32:36.314 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:32:36.314 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:36.314 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:32:36.314 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:32:36.314 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:36.314 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:36.314 [2024-12-13 05:48:36.201908] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
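
The second nvmf_tgt instance started above with -r /tmp/host.sock is the NVMe-oF host side of the test: once its reactor is up (EAL banner below), the script points bdev_nvme's discovery client at the 8009 discovery service and then polls the controller and bdev lists until the nvme0 controller materializes. The host-side calls, condensed (a sketch; rpc.py stands in for scripts/rpc.py):

  H=/tmp/host.sock
  rpc.py -s $H log_set_flag bdev_nvme
  rpc.py -s $H bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 \
      -f ipv4 -q nqn.2021-12.io.spdk:test              # attach what 8009 advertises
  rpc.py -s $H bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
  rpc.py -s $H bdev_get_bdevs | jq -r '.[].name' | sort | xargs
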
00:32:36.314 [2024-12-13 05:48:36.201949] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid488043 ] 00:32:36.314 [2024-12-13 05:48:36.275271] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:36.314 [2024-12-13 05:48:36.298340] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:32:36.585 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:36.585 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:32:36.585 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:36.585 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:32:36.585 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:36.585 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:36.585 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:36.585 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:32:36.585 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:36.585 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:36.585 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:36.585 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:32:36.585 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:32:36.585 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:36.585 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:36.585 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:36.585 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:36.585 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:36.585 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:36.585 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:36.585 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:32:36.585 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:32:36.585 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:36.585 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:36.585 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:36.585 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 
-- # sort 00:32:36.585 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:36.585 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:36.585 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:36.585 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:32:36.585 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:32:36.585 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:36.585 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:36.585 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:36.585 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:32:36.585 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:36.585 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:36.585 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:36.585 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:36.586 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:36.586 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:36.586 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:36.586 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:32:36.586 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:32:36.586 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:36.586 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:36.586 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:36.586 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:36.586 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:36.586 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:36.586 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:36.846 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:32:36.846 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:32:36.846 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:36.846 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:36.846 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:36.846 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:32:36.846 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd 
-s /tmp/host.sock bdev_nvme_get_controllers 00:32:36.846 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:36.846 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:36.846 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:36.846 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:36.846 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:36.846 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:36.846 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:32:36.846 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:32:36.846 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:36.846 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:36.846 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:36.846 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:36.846 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:36.846 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:36.846 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:36.847 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:32:36.847 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:36.847 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:36.847 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:36.847 [2024-12-13 05:48:36.713886] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:36.847 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:36.847 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:32:36.847 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:36.847 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:36.847 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:36.847 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:36.847 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:36.847 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:36.847 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:36.847 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:32:36.847 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:32:36.847 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:36.847 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:36.847 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:36.847 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:36.847 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:36.847 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:36.847 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:36.847 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:32:36.847 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:32:36.847 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:32:36.847 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:36.847 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:36.847 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:36.847 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:36.847 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:36.847 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:32:36.847 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:32:36.847 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:36.847 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:36.847 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:32:36.847 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:36.847 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:32:36.847 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:32:36.847 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:32:36.847 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:36.847 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:32:36.847 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:36.847 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:37.106 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:37.106 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:37.106 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:37.106 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:37.106 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:37.106 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:32:37.106 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:32:37.106 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:37.106 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:37.106 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:37.106 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:37.106 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:37.106 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:37.106 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:37.106 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:32:37.106 05:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:32:37.674 [2024-12-13 05:48:37.414439] bdev_nvme.c:7516:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:32:37.674 [2024-12-13 05:48:37.414464] bdev_nvme.c:7602:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:32:37.674 [2024-12-13 05:48:37.414475] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:37.674 [2024-12-13 05:48:37.541946] bdev_nvme.c:7445:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:32:37.674 [2024-12-13 05:48:37.604524] 
bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:32:37.674 [2024-12-13 05:48:37.605211] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x104ff60:1 started. 00:32:37.674 [2024-12-13 05:48:37.606584] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:32:37.674 [2024-12-13 05:48:37.606600] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:32:37.674 [2024-12-13 05:48:37.613428] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x104ff60 was disconnected and freed. delete nvme_qpair. 00:32:37.933 05:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:37.933 05:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:32:37.933 05:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:32:37.933 05:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:37.933 05:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:37.933 05:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:37.933 05:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:37.933 05:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:37.934 05:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:37.934 05:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:38.193 05:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:38.193 05:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:38.193 05:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:32:38.193 05:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:32:38.193 05:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:38.193 05:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:38.193 05:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:32:38.193 05:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:32:38.193 05:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:38.193 05:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:38.193 05:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:38.193 05:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:38.193 05:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:38.193 05:48:37 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:38.193 05:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:38.193 05:48:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:32:38.193 05:48:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:38.193 05:48:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:32:38.193 05:48:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:32:38.193 05:48:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:38.193 05:48:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:38.193 05:48:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:32:38.193 05:48:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:32:38.193 05:48:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:32:38.193 05:48:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:38.193 05:48:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:38.193 05:48:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:38.193 05:48:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:38.193 05:48:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:32:38.193 05:48:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:38.193 05:48:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:32:38.193 05:48:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:38.193 05:48:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:32:38.193 05:48:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:32:38.193 05:48:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:38.193 05:48:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:38.193 05:48:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:38.193 05:48:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:38.194 05:48:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:38.194 05:48:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:32:38.194 05:48:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 
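(Editorial sketch.) The notification bookkeeping traced above boils down to two commands. A minimal standalone version, assuming the host app's RPC socket is /tmp/host.sock as in this run and that the stock scripts/rpc.py client is on hand (the path is an assumption; the test harness wraps it as rpc_cmd):

  # Count notifications newer than the last-seen id (0 here), the way the
  # get_notification_count helper in host/discovery.sh does.
  notification_count=$(scripts/rpc.py -s /tmp/host.sock notify_get_notifications -i 0 | jq '. | length')
  # Judging by the notify_id values in the trace (1, then 2, then 4), the
  # helper advances the high-water mark by the number of entries returned.
  notify_id=$((0 + notification_count))
  echo "saw $notification_count notifications; next poll uses -i $notify_id"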
00:32:38.194 05:48:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:32:38.194 05:48:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:38.194 05:48:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:38.194 05:48:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:38.194 05:48:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:32:38.194 05:48:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:32:38.194 05:48:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:32:38.194 05:48:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:38.194 05:48:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:32:38.194 05:48:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:38.194 05:48:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:38.194 05:48:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:38.194 05:48:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:38.194 05:48:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:38.194 05:48:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:38.194 05:48:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:38.194 05:48:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:32:38.194 05:48:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:32:38.194 05:48:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:38.194 05:48:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:38.194 05:48:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:38.194 05:48:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:38.194 05:48:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:38.194 05:48:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:38.453 [2024-12-13 05:48:38.376190] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x103a320:1 started. 00:32:38.453 [2024-12-13 05:48:38.385390] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x103a320 was disconnected and freed. delete nvme_qpair. 
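(Editorial sketch.) For orientation, the target-side state that the checks above assert — one subsystem, two namespaces, a TCP listener, an allowed host — can be reproduced with the same RPCs the test issues. This assumes the stock scripts/rpc.py client talking to the target's default RPC socket, and that the null0/null1 bdevs were created earlier in the run (e.g. via bdev_null_create, which is not shown in this excerpt):

  # Target side: subsystem, namespaces, listener, allowed host NQN.
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test
  # Host side: once discovery attaches nvme0, the namespaces surface as
  # nvme0n1/nvme0n2 — the list get_bdev_list polls via bdev_get_bdevs.
  scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name'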
00:32:38.453 05:48:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:38.453 05:48:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:38.453 05:48:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:38.453 05:48:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:32:38.453 05:48:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:32:38.453 05:48:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:38.453 05:48:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:38.453 05:48:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:38.453 05:48:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:38.453 05:48:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:38.453 05:48:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:32:38.453 05:48:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:32:38.453 05:48:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:32:38.453 05:48:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:38.453 05:48:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:38.453 05:48:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:38.453 05:48:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:32:38.453 05:48:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:32:38.453 05:48:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:32:38.453 05:48:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:38.453 05:48:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:32:38.453 05:48:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:38.453 05:48:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:38.453 [2024-12-13 05:48:38.458602] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:38.453 [2024-12-13 05:48:38.459628] bdev_nvme.c:7498:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:32:38.453 [2024-12-13 05:48:38.459646] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:38.453 05:48:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:38.453 05:48:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:32:38.453 05:48:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:38.453 05:48:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:38.453 05:48:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:38.453 05:48:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:32:38.453 05:48:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:32:38.453 05:48:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:38.713 05:48:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:38.713 05:48:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:38.713 05:48:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:38.713 05:48:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:38.713 05:48:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:38.713 05:48:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:38.713 05:48:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:38.713 05:48:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:38.713 05:48:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:38.713 05:48:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:38.713 05:48:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:38.713 05:48:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:38.713 05:48:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:32:38.713 05:48:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:32:38.713 05:48:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:38.713 05:48:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:38.713 05:48:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:38.713 05:48:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:38.713 05:48:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:38.713 05:48:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:38.713 05:48:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:38.713 05:48:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:38.713 05:48:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:38.713 05:48:38 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:32:38.713 05:48:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:32:38.713 05:48:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:38.713 05:48:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:38.713 05:48:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:32:38.713 05:48:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:32:38.713 05:48:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:38.713 05:48:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:38.713 05:48:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:38.713 05:48:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:38.713 05:48:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:32:38.713 05:48:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:32:38.713 05:48:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:38.713 [2024-12-13 05:48:38.586423] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:32:38.713 05:48:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:32:38.713 05:48:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:32:38.972 [2024-12-13 05:48:38.851557] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:32:38.972 [2024-12-13 05:48:38.851592] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:32:38.972 [2024-12-13 05:48:38.851601] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:32:38.972 [2024-12-13 05:48:38.851606] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:32:39.910 05:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:39.910 05:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:32:39.910 05:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:32:39.910 05:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:39.910 05:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:39.910 05:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:32:39.910 05:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:32:39.910 05:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:39.910 05:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:32:39.910 05:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.910 05:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:32:39.910 05:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:39.910 05:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:32:39.910 05:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:32:39.910 05:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:39.910 05:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:39.910 05:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:39.910 05:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:39.910 05:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:39.910 05:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:32:39.910 05:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:32:39.910 05:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:32:39.910 05:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.910 05:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:39.910 05:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.910 05:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:32:39.910 05:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:32:39.910 05:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:32:39.910 05:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:39.910 05:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:39.910 05:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.910 05:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:39.910 [2024-12-13 05:48:39.718994] bdev_nvme.c:7498:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:32:39.910 [2024-12-13 05:48:39.719015] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:39.910 [2024-12-13 05:48:39.720592] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:39.910 [2024-12-13 05:48:39.720607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:39.910 [2024-12-13 05:48:39.720616] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:39.910 [2024-12-13 05:48:39.720623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:39.910 [2024-12-13 05:48:39.720646] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:39.910 [2024-12-13 05:48:39.720654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:39.910 [2024-12-13 05:48:39.720660] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:39.910 [2024-12-13 05:48:39.720667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:39.910 [2024-12-13 05:48:39.720673] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1021ef0 is same with the state(6) to be set 00:32:39.910 05:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.910 05:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:39.910 05:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:39.910 05:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local 
max=10 00:32:39.910 05:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:39.910 05:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:32:39.910 05:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:32:39.910 05:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:39.910 05:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:39.910 05:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.910 05:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:39.910 05:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:39.910 05:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:39.910 [2024-12-13 05:48:39.730605] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1021ef0 (9): Bad file descriptor 00:32:39.910 [2024-12-13 05:48:39.740650] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:32:39.910 [2024-12-13 05:48:39.740661] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:32:39.910 [2024-12-13 05:48:39.740671] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:32:39.910 [2024-12-13 05:48:39.740676] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:32:39.910 [2024-12-13 05:48:39.740691] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:32:39.910 [2024-12-13 05:48:39.740919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:39.910 [2024-12-13 05:48:39.740933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1021ef0 with addr=10.0.0.2, port=4420 00:32:39.910 [2024-12-13 05:48:39.740941] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1021ef0 is same with the state(6) to be set 00:32:39.910 [2024-12-13 05:48:39.740952] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1021ef0 (9): Bad file descriptor 00:32:39.910 [2024-12-13 05:48:39.740963] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:32:39.910 [2024-12-13 05:48:39.740969] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:32:39.910 [2024-12-13 05:48:39.740976] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:32:39.910 [2024-12-13 05:48:39.740982] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:32:39.910 [2024-12-13 05:48:39.740987] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:32:39.910 [2024-12-13 05:48:39.740992] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
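(Editorial sketch.) The errno 111 (connection refused) retries above are the expected fallout of the nvmf_subsystem_remove_listener call at @127: the 4420 listener is pulled out from under a connected path, and reconnects fail until the discovery poller prunes that path from the log page. The path set the test then waits on can be inspected the same way get_subsystem_paths does (same sockets and client assumption as above):

  # Drop the first listener; the host's 4420 path now hits
  # "connect() failed, errno = 111" until discovery removes it.
  scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  # List remaining path service IDs for controller nvme0; the test
  # polls until this prints just "4421".
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 \
      | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs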
00:32:39.910 05:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.910 [2024-12-13 05:48:39.750721] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:32:39.910 [2024-12-13 05:48:39.750732] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:32:39.910 [2024-12-13 05:48:39.750736] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:32:39.910 [2024-12-13 05:48:39.750740] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:32:39.910 [2024-12-13 05:48:39.750752] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:32:39.910 [2024-12-13 05:48:39.750919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:39.910 [2024-12-13 05:48:39.750930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1021ef0 with addr=10.0.0.2, port=4420 00:32:39.910 [2024-12-13 05:48:39.750938] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1021ef0 is same with the state(6) to be set 00:32:39.910 [2024-12-13 05:48:39.750948] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1021ef0 (9): Bad file descriptor 00:32:39.910 [2024-12-13 05:48:39.750957] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:32:39.910 [2024-12-13 05:48:39.750963] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:32:39.910 [2024-12-13 05:48:39.750970] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:32:39.910 [2024-12-13 05:48:39.750975] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:32:39.910 [2024-12-13 05:48:39.750980] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:32:39.910 [2024-12-13 05:48:39.750983] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:32:39.910 [2024-12-13 05:48:39.760783] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:32:39.910 [2024-12-13 05:48:39.760792] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:32:39.910 [2024-12-13 05:48:39.760796] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:32:39.910 [2024-12-13 05:48:39.760800] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:32:39.911 [2024-12-13 05:48:39.760812] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
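(Editorial sketch.) Every assertion in this log funnels through the same waitforcondition helper; from the trace lines (local cond, local max=10, (( max-- )), eval, sleep 1, return 0) its shape is roughly the following — a reconstruction, not the verbatim source:

  # Poll a condition up to 10 times, one second apart.
  waitforcondition() {
      local cond=$1
      local max=10
      while ((max--)); do
          if eval "$cond"; then
              return 0
          fi
          sleep 1
      done
      return 1   # exhaustion path not visible in this excerpt; assumed
  }
  # e.g. waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "4421" ]]'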
00:32:39.911 [2024-12-13 05:48:39.761048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:39.911 [2024-12-13 05:48:39.761059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1021ef0 with addr=10.0.0.2, port=4420 00:32:39.911 [2024-12-13 05:48:39.761067] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1021ef0 is same with the state(6) to be set 00:32:39.911 [2024-12-13 05:48:39.761077] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1021ef0 (9): Bad file descriptor 00:32:39.911 [2024-12-13 05:48:39.761092] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:32:39.911 [2024-12-13 05:48:39.761098] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:32:39.911 [2024-12-13 05:48:39.761105] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:32:39.911 [2024-12-13 05:48:39.761110] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:32:39.911 [2024-12-13 05:48:39.761114] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:32:39.911 [2024-12-13 05:48:39.761118] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:32:39.911 05:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:39.911 [2024-12-13 05:48:39.770843] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:32:39.911 [2024-12-13 05:48:39.770858] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:32:39.911 [2024-12-13 05:48:39.770862] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:32:39.911 [2024-12-13 05:48:39.770866] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:32:39.911 [2024-12-13 05:48:39.770880] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:32:39.911 05:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:39.911 [2024-12-13 05:48:39.771023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:39.911 [2024-12-13 05:48:39.771037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1021ef0 with addr=10.0.0.2, port=4420 00:32:39.911 [2024-12-13 05:48:39.771044] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1021ef0 is same with the state(6) to be set 00:32:39.911 [2024-12-13 05:48:39.771054] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1021ef0 (9): Bad file descriptor 00:32:39.911 [2024-12-13 05:48:39.771063] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:32:39.911 [2024-12-13 05:48:39.771068] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:32:39.911 [2024-12-13 05:48:39.771075] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:32:39.911 [2024-12-13 05:48:39.771080] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:32:39.911 [2024-12-13 05:48:39.771088] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:32:39.911 [2024-12-13 05:48:39.771092] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:32:39.911 05:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:39.911 05:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:39.911 05:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:39.911 05:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:39.911 05:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:32:39.911 05:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:32:39.911 05:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:39.911 05:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:39.911 05:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:39.911 05:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.911 05:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:39.911 05:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:39.911 [2024-12-13 05:48:39.780910] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:32:39.911 [2024-12-13 05:48:39.780922] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:32:39.911 [2024-12-13 05:48:39.780926] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:32:39.911 [2024-12-13 05:48:39.780929] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:32:39.911 [2024-12-13 05:48:39.780942] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:32:39.911 [2024-12-13 05:48:39.781096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:39.911 [2024-12-13 05:48:39.781106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1021ef0 with addr=10.0.0.2, port=4420 00:32:39.911 [2024-12-13 05:48:39.781113] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1021ef0 is same with the state(6) to be set 00:32:39.911 [2024-12-13 05:48:39.781122] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1021ef0 (9): Bad file descriptor 00:32:39.911 [2024-12-13 05:48:39.781131] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:32:39.911 [2024-12-13 05:48:39.781137] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:32:39.911 [2024-12-13 05:48:39.781143] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:32:39.911 [2024-12-13 05:48:39.781148] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:32:39.911 [2024-12-13 05:48:39.781152] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:32:39.911 [2024-12-13 05:48:39.781156] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:32:39.911 [2024-12-13 05:48:39.790973] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:32:39.911 [2024-12-13 05:48:39.790986] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:32:39.911 [2024-12-13 05:48:39.790996] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:32:39.911 [2024-12-13 05:48:39.791001] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:32:39.911 [2024-12-13 05:48:39.791014] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:32:39.911 [2024-12-13 05:48:39.791195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:39.911 [2024-12-13 05:48:39.791215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1021ef0 with addr=10.0.0.2, port=4420 00:32:39.911 [2024-12-13 05:48:39.791222] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1021ef0 is same with the state(6) to be set 00:32:39.911 [2024-12-13 05:48:39.791232] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1021ef0 (9): Bad file descriptor 00:32:39.911 [2024-12-13 05:48:39.791242] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:32:39.911 [2024-12-13 05:48:39.791248] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:32:39.911 [2024-12-13 05:48:39.791255] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:32:39.911 [2024-12-13 05:48:39.791260] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:32:39.911 [2024-12-13 05:48:39.791265] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:32:39.911 [2024-12-13 05:48:39.791268] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:32:39.911 [2024-12-13 05:48:39.801045] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:32:39.911 [2024-12-13 05:48:39.801055] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:32:39.911 [2024-12-13 05:48:39.801059] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:32:39.911 [2024-12-13 05:48:39.801062] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:32:39.911 [2024-12-13 05:48:39.801074] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:32:39.911 [2024-12-13 05:48:39.801250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:39.911 [2024-12-13 05:48:39.801260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1021ef0 with addr=10.0.0.2, port=4420 00:32:39.911 [2024-12-13 05:48:39.801267] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1021ef0 is same with the state(6) to be set 00:32:39.911 [2024-12-13 05:48:39.801277] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1021ef0 (9): Bad file descriptor 00:32:39.911 [2024-12-13 05:48:39.801292] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:32:39.911 [2024-12-13 05:48:39.801298] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:32:39.911 [2024-12-13 05:48:39.801305] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:32:39.911 [2024-12-13 05:48:39.801310] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:32:39.911 [2024-12-13 05:48:39.801314] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:32:39.911 [2024-12-13 05:48:39.801318] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:32:39.911 [2024-12-13 05:48:39.805888] bdev_nvme.c:7303:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:32:39.911 [2024-12-13 05:48:39.805907] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:32:39.911 05:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.911 05:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:39.911 05:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:39.911 05:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:32:39.911 05:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:32:39.911 05:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:39.911 05:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:39.912 05:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:32:39.912 05:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:32:39.912 05:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:39.912 05:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:39.912 05:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.912 05:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:32:39.912 05:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:39.912 05:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:32:39.912 05:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.912 05:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:32:39.912 05:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:39.912 05:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:32:39.912 05:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:32:39.912 05:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:39.912 05:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:39.912 05:48:39 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:39.912 05:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:39.912 05:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:39.912 05:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:32:39.912 05:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:32:39.912 05:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:32:39.912 05:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.912 05:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:39.912 05:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.912 05:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:32:39.912 05:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:32:39.912 05:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:32:39.912 05:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:39.912 05:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:32:39.912 05:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.912 05:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:39.912 05:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.912 05:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:32:39.912 05:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:32:39.912 05:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:39.912 05:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:39.912 05:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:32:39.912 05:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:32:40.170 05:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:40.170 05:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:40.170 05:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:40.170 05:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:40.170 05:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:40.170 05:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:40.170 05:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:40.170 05:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:32:40.170 05:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:40.170 05:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:32:40.170 05:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:32:40.171 05:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:40.171 05:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:40.171 05:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:32:40.171 05:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:32:40.171 05:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:40.171 05:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:40.171 05:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:40.171 05:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:40.171 05:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:40.171 05:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:40.171 05:48:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:40.171 05:48:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:32:40.171 05:48:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:40.171 05:48:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:32:40.171 05:48:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:32:40.171 05:48:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:40.171 05:48:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:40.171 05:48:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:40.171 05:48:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:40.171 05:48:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:40.171 05:48:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:32:40.171 05:48:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:32:40.171 05:48:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:32:40.171 05:48:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:40.171 05:48:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:40.171 05:48:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:40.171 05:48:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:32:40.171 05:48:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:32:40.171 05:48:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:32:40.171 05:48:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:40.171 05:48:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:40.171 05:48:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:40.171 05:48:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:41.557 [2024-12-13 05:48:41.124947] bdev_nvme.c:7516:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:32:41.557 [2024-12-13 05:48:41.124964] bdev_nvme.c:7602:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:32:41.557 [2024-12-13 05:48:41.124976] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:41.557 [2024-12-13 05:48:41.251353] bdev_nvme.c:7445:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:32:41.557 [2024-12-13 05:48:41.357995] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:32:41.557 [2024-12-13 05:48:41.358545] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x102ab80:1 started. 
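The repeated @918-@922 trace lines above come from the waitforcondition helper in common/autotest_common.sh: it takes an arbitrary shell condition as a string and polls it until the condition holds or a retry budget runs out. A minimal sketch of that pattern, reconstructed from the xtrace (the function and variable names match the trace; the per-iteration sleep is an assumption, since the trace only shows the counter and the eval):

    waitforcondition() {
        local cond=$1            # e.g. '[[ "$(get_bdev_list)" == "" ]]'  (@918)
        local max=10             # retry budget                           (@919)
        while (( max-- )); do                                           # (@920)
            eval "$cond" && return 0                                    # (@921/@922)
            sleep 1              # assumed back-off between polls; not visible in the trace
        done
        return 1
    }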
00:32:41.557 [2024-12-13 05:48:41.360140] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:32:41.557 [2024-12-13 05:48:41.360166] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:32:41.557 05:48:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:41.557 05:48:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:41.557 05:48:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:32:41.557 05:48:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:41.557 05:48:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:32:41.557 05:48:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:41.557 05:48:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:32:41.557 05:48:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:41.557 05:48:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:41.557 05:48:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:41.557 05:48:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:41.557 request: 00:32:41.557 { 00:32:41.557 "name": "nvme", 00:32:41.557 "trtype": "tcp", 00:32:41.557 "traddr": "10.0.0.2", 00:32:41.557 "adrfam": "ipv4", 00:32:41.557 "trsvcid": "8009", 00:32:41.557 "hostnqn": "nqn.2021-12.io.spdk:test", 00:32:41.557 "wait_for_attach": true, 00:32:41.557 "method": "bdev_nvme_start_discovery", 00:32:41.557 "req_id": 1 00:32:41.557 } 00:32:41.557 Got JSON-RPC error response 00:32:41.557 response: 00:32:41.557 { 00:32:41.557 "code": -17, 00:32:41.557 "message": "File exists" 00:32:41.557 } 00:32:41.557 05:48:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:32:41.557 05:48:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:32:41.557 05:48:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:41.557 05:48:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:41.557 05:48:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:41.557 05:48:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:32:41.557 05:48:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:32:41.557 05:48:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:32:41.557 05:48:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:41.557 05:48:41 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:32:41.557 05:48:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:41.557 05:48:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:32:41.557 05:48:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:41.557 [2024-12-13 05:48:41.404030] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x102ab80 was disconnected and freed. delete nvme_qpair. 00:32:41.557 05:48:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:32:41.557 05:48:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:32:41.557 05:48:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:41.557 05:48:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:41.557 05:48:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:41.557 05:48:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:41.557 05:48:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:41.557 05:48:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:41.557 05:48:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:41.557 05:48:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:41.557 05:48:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:41.557 05:48:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:32:41.557 05:48:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:41.557 05:48:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:32:41.558 05:48:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:41.558 05:48:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:32:41.558 05:48:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:41.558 05:48:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:41.558 05:48:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:41.558 05:48:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:41.558 request: 00:32:41.558 { 00:32:41.558 "name": "nvme_second", 00:32:41.558 "trtype": "tcp", 00:32:41.558 "traddr": "10.0.0.2", 00:32:41.558 "adrfam": "ipv4", 00:32:41.558 "trsvcid": "8009", 00:32:41.558 "hostnqn": "nqn.2021-12.io.spdk:test", 00:32:41.558 "wait_for_attach": true, 00:32:41.558 "method": 
"bdev_nvme_start_discovery", 00:32:41.558 "req_id": 1 00:32:41.558 } 00:32:41.558 Got JSON-RPC error response 00:32:41.558 response: 00:32:41.558 { 00:32:41.558 "code": -17, 00:32:41.558 "message": "File exists" 00:32:41.558 } 00:32:41.558 05:48:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:32:41.558 05:48:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:32:41.558 05:48:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:41.558 05:48:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:41.558 05:48:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:41.558 05:48:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:32:41.558 05:48:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:32:41.558 05:48:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:32:41.558 05:48:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:41.558 05:48:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:32:41.558 05:48:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:41.558 05:48:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:32:41.558 05:48:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:41.558 05:48:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:32:41.558 05:48:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:32:41.558 05:48:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:41.558 05:48:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:41.558 05:48:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:41.558 05:48:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:41.558 05:48:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:41.558 05:48:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:41.871 05:48:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:41.871 05:48:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:41.871 05:48:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:32:41.871 05:48:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:32:41.871 05:48:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:32:41.871 05:48:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:32:41.871 05:48:41 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:41.871 05:48:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:32:41.871 05:48:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:41.871 05:48:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:32:41.871 05:48:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:41.871 05:48:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:42.884 [2024-12-13 05:48:42.596049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.884 [2024-12-13 05:48:42.596075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102a610 with addr=10.0.0.2, port=8010 00:32:42.884 [2024-12-13 05:48:42.596089] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:32:42.884 [2024-12-13 05:48:42.596096] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:32:42.884 [2024-12-13 05:48:42.596102] bdev_nvme.c:7584:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:32:43.932 [2024-12-13 05:48:43.598482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.932 [2024-12-13 05:48:43.598506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102a610 with addr=10.0.0.2, port=8010 00:32:43.932 [2024-12-13 05:48:43.598518] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:32:43.932 [2024-12-13 05:48:43.598524] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:32:43.932 [2024-12-13 05:48:43.598546] bdev_nvme.c:7584:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:32:45.033 [2024-12-13 05:48:44.600669] bdev_nvme.c:7559:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:32:45.033 request: 00:32:45.033 { 00:32:45.033 "name": "nvme_second", 00:32:45.033 "trtype": "tcp", 00:32:45.033 "traddr": "10.0.0.2", 00:32:45.033 "adrfam": "ipv4", 00:32:45.034 "trsvcid": "8010", 00:32:45.034 "hostnqn": "nqn.2021-12.io.spdk:test", 00:32:45.034 "wait_for_attach": false, 00:32:45.034 "attach_timeout_ms": 3000, 00:32:45.034 "method": "bdev_nvme_start_discovery", 00:32:45.034 "req_id": 1 00:32:45.034 } 00:32:45.034 Got JSON-RPC error response 00:32:45.034 response: 00:32:45.034 { 00:32:45.034 "code": -110, 00:32:45.034 "message": "Connection timed out" 00:32:45.034 } 00:32:45.034 05:48:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:32:45.034 05:48:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:32:45.034 05:48:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:45.034 05:48:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:45.034 05:48:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:45.034 05:48:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:32:45.034 05:48:44 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:32:45.034 05:48:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:32:45.034 05:48:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:45.034 05:48:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:32:45.034 05:48:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:32:45.034 05:48:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:45.034 05:48:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:45.034 05:48:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:32:45.034 05:48:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:32:45.034 05:48:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 488043 00:32:45.034 05:48:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:32:45.034 05:48:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:45.034 05:48:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:32:45.034 05:48:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:45.034 05:48:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:32:45.034 05:48:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:45.034 05:48:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:45.034 rmmod nvme_tcp 00:32:45.034 rmmod nvme_fabrics 00:32:45.034 rmmod nvme_keyring 00:32:45.034 05:48:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:45.034 05:48:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:32:45.034 05:48:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:32:45.034 05:48:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 487931 ']' 00:32:45.034 05:48:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 487931 00:32:45.034 05:48:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 487931 ']' 00:32:45.034 05:48:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 487931 00:32:45.034 05:48:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:32:45.034 05:48:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:45.034 05:48:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 487931 00:32:45.034 05:48:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:45.034 05:48:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:45.034 05:48:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 487931' 00:32:45.034 killing process with pid 487931 00:32:45.034 05:48:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 487931 
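The two kinds of JSON-RPC failure captured above are the intended negative results of this test: bdev_nvme_start_discovery rejects a request whose service name or discovery address is already in use (both 8009 attempts return code -17, "File exists"), and a discovery target that never answers on port 8010 hits the 3000 ms attach cap (code -110, "Connection timed out") after repeated connect() attempts fail with errno 111. Assuming rpc_cmd wraps scripts/rpc.py as it does elsewhere in the SPDK test tree, the same calls can be reproduced by hand with the flags shown in the trace:

    # Duplicate service name/address against the host socket: expect -17 "File exists"
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
        -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
        -q nqn.2021-12.io.spdk:test -w

    # Unreachable discovery port with a 3 s attach timeout: expect -110 "Connection timed out"
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
        -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 \
        -q nqn.2021-12.io.spdk:test -T 3000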
00:32:45.034 05:48:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 487931 00:32:45.034 05:48:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:45.034 05:48:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:45.034 05:48:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:45.034 05:48:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:32:45.034 05:48:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:32:45.034 05:48:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:45.034 05:48:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:32:45.034 05:48:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:45.034 05:48:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:45.034 05:48:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:45.034 05:48:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:45.034 05:48:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:47.090 05:48:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:47.090 00:32:47.090 real 0m17.255s 00:32:47.090 user 0m20.662s 00:32:47.090 sys 0m5.833s 00:32:47.090 05:48:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:47.090 05:48:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:47.090 ************************************ 00:32:47.090 END TEST nvmf_host_discovery 00:32:47.090 ************************************ 00:32:47.090 05:48:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:32:47.090 05:48:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:47.090 05:48:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:47.090 05:48:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.090 ************************************ 00:32:47.090 START TEST nvmf_host_multipath_status 00:32:47.090 ************************************ 00:32:47.090 05:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:32:47.351 * Looking for test storage... 
00:32:47.351 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:47.351 05:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:47.351 05:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lcov --version 00:32:47.351 05:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:47.351 05:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:47.351 05:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:47.351 05:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:47.351 05:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:47.351 05:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:32:47.351 05:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:32:47.351 05:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:32:47.351 05:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:32:47.351 05:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:32:47.351 05:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:32:47.351 05:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:32:47.351 05:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:47.351 05:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:32:47.351 05:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:32:47.351 05:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:47.351 05:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:47.351 05:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:32:47.351 05:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:32:47.351 05:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:47.351 05:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:32:47.351 05:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:32:47.351 05:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:32:47.351 05:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:32:47.351 05:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:47.351 05:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:32:47.351 05:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:32:47.351 05:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:47.351 05:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:47.351 05:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:32:47.351 05:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:47.351 05:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:47.351 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:47.351 --rc genhtml_branch_coverage=1 00:32:47.351 --rc genhtml_function_coverage=1 00:32:47.351 --rc genhtml_legend=1 00:32:47.351 --rc geninfo_all_blocks=1 00:32:47.351 --rc geninfo_unexecuted_blocks=1 00:32:47.351 00:32:47.351 ' 00:32:47.351 05:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:47.351 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:47.351 --rc genhtml_branch_coverage=1 00:32:47.351 --rc genhtml_function_coverage=1 00:32:47.351 --rc genhtml_legend=1 00:32:47.351 --rc geninfo_all_blocks=1 00:32:47.351 --rc geninfo_unexecuted_blocks=1 00:32:47.351 00:32:47.351 ' 00:32:47.351 05:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:47.351 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:47.351 --rc genhtml_branch_coverage=1 00:32:47.351 --rc genhtml_function_coverage=1 00:32:47.351 --rc genhtml_legend=1 00:32:47.351 --rc geninfo_all_blocks=1 00:32:47.351 --rc geninfo_unexecuted_blocks=1 00:32:47.351 00:32:47.351 ' 00:32:47.351 05:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:47.351 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:47.351 --rc genhtml_branch_coverage=1 00:32:47.351 --rc genhtml_function_coverage=1 00:32:47.351 --rc genhtml_legend=1 00:32:47.351 --rc geninfo_all_blocks=1 00:32:47.351 --rc geninfo_unexecuted_blocks=1 00:32:47.351 00:32:47.351 ' 00:32:47.352 05:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
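The scripts/common.sh xtrace above is the version gate that decides which coverage flags to hand to lcov: lt 1.15 2 splits both version strings on dots and compares them field by field, and because 1.15 < 2 the branch- and function-coverage options get exported into LCOV_OPTS. A condensed, behaviorally equivalent sketch of that comparison (the real cmp_versions in scripts/common.sh is longer; the zero-padding of missing fields is inferred from the @364-@368 trace lines):

    cmp_versions() {                 # usage: cmp_versions 1.15 '<' 2
        local IFS=.-: op=$2 ver1 ver2 v
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == '>' ]]; return; }
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == '<' ]]; return; }
        done
        [[ $op == *'='* ]]           # equal versions satisfy only <=, >=, ==
    }
    lt() { cmp_versions "$1" '<' "$2"; }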
00:32:47.352 05:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:32:47.352 05:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:47.352 05:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:47.352 05:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:47.352 05:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:47.352 05:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:47.352 05:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:47.352 05:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:47.352 05:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:47.352 05:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:47.352 05:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:47.352 05:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:32:47.352 05:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:32:47.352 05:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:47.352 05:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:47.352 05:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:47.352 05:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:47.352 05:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:47.352 05:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:32:47.352 05:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:47.352 05:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:47.352 05:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:47.352 05:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:47.352 05:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:47.352 05:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:47.352 05:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:32:47.352 05:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:47.352 05:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:32:47.352 05:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:47.352 05:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:47.352 05:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:47.352 05:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:47.352 05:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:47.352 05:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:47.352 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:47.352 05:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:47.352 05:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:47.352 05:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:47.352 05:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:32:47.352 05:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:32:47.352 05:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:47.352 05:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:32:47.352 05:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:32:47.352 05:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:32:47.352 05:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:32:47.352 05:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:47.352 05:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:47.352 05:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:47.352 05:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:47.352 05:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:47.352 05:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:47.352 05:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:47.352 05:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:47.352 05:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:47.352 05:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:47.352 05:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:32:47.352 05:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:53.922 05:48:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:53.922 05:48:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:32:53.922 05:48:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:53.922 05:48:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:53.922 05:48:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:53.922 05:48:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:53.922 05:48:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:53.922 05:48:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:32:53.922 05:48:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:53.922 05:48:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:32:53.922 05:48:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:32:53.922 05:48:52 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:32:53.922 05:48:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:32:53.922 05:48:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:32:53.922 05:48:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:32:53.922 05:48:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:53.922 05:48:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:53.922 05:48:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:53.922 05:48:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:53.922 05:48:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:53.922 05:48:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:53.922 05:48:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:53.922 05:48:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:53.922 05:48:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:53.922 05:48:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:53.922 05:48:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:53.922 05:48:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:53.922 05:48:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:53.922 05:48:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:53.922 05:48:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:53.922 05:48:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:53.922 05:48:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:53.922 05:48:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:53.922 05:48:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:53.922 05:48:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:32:53.922 Found 0000:af:00.0 (0x8086 - 0x159b) 00:32:53.922 05:48:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:53.922 05:48:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:53.922 05:48:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:53.922 05:48:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
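The device-matching loop in this stretch of nvmf/common.sh is how the test turns supported NIC PCI IDs into usable interface names: each PCI function that matched an E810 device ID (0x159b here) is checked for a net/ directory in sysfs, and the interface found there is appended to net_devs, which is why the two ports surface as cvl_0_0 and cvl_0_1 below. Reduced to its core, the sysfs lookup traced at @411/@427 is:

    # For one matched PCI function, find the kernel net interface bound to it
    pci=0000:af:00.0                                      # first E810 port from the scan
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)      # e.g. .../net/cvl_0_0  (@411)
    pci_net_devs=("${pci_net_devs[@]##*/}")               # keep just the ifname  (@427)
    echo "Found net devices under $pci: ${pci_net_devs[*]}"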
00:32:53.922 05:48:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:53.922 05:48:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:53.922 05:48:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:32:53.922 Found 0000:af:00.1 (0x8086 - 0x159b) 00:32:53.922 05:48:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:53.922 05:48:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:53.922 05:48:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:53.922 05:48:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:53.922 05:48:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:53.922 05:48:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:53.922 05:48:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:53.922 05:48:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:53.922 05:48:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:53.922 05:48:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:53.922 05:48:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:53.922 05:48:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:53.922 05:48:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:53.922 05:48:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:53.922 05:48:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:53.922 05:48:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:32:53.922 Found net devices under 0000:af:00.0: cvl_0_0 00:32:53.922 05:48:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:53.922 05:48:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:53.922 05:48:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:53.922 05:48:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:53.922 05:48:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:53.922 05:48:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:53.922 05:48:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:53.922 05:48:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:53.922 05:48:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: 
cvl_0_1' 00:32:53.922 Found net devices under 0000:af:00.1: cvl_0_1 00:32:53.923 05:48:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:53.923 05:48:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:53.923 05:48:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:32:53.923 05:48:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:53.923 05:48:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:53.923 05:48:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:53.923 05:48:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:53.923 05:48:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:53.923 05:48:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:53.923 05:48:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:53.923 05:48:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:53.923 05:48:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:53.923 05:48:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:53.923 05:48:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:53.923 05:48:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:53.923 05:48:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:53.923 05:48:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:53.923 05:48:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:53.923 05:48:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:53.923 05:48:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:53.923 05:48:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:53.923 05:48:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:53.923 05:48:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:53.923 05:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:53.923 05:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:53.923 05:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:53.923 05:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:53.923 05:48:53 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:53.923 05:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:53.923 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:53.923 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.337 ms 00:32:53.923 00:32:53.923 --- 10.0.0.2 ping statistics --- 00:32:53.923 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:53.923 rtt min/avg/max/mdev = 0.337/0.337/0.337/0.000 ms 00:32:53.923 05:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:53.923 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:53.923 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:32:53.923 00:32:53.923 --- 10.0.0.1 ping statistics --- 00:32:53.923 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:53.923 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:32:53.923 05:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:53.923 05:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:32:53.923 05:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:53.923 05:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:53.923 05:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:53.923 05:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:53.923 05:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:53.923 05:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:53.923 05:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:53.923 05:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:32:53.923 05:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:53.923 05:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:53.923 05:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:53.923 05:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=493037 00:32:53.923 05:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:32:53.923 05:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 493037 00:32:53.923 05:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 493037 ']' 00:32:53.923 05:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:53.923 05:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:53.923 05:48:53 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:53.923 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:53.923 05:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:53.923 05:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:53.923 [2024-12-13 05:48:53.243253] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:32:53.923 [2024-12-13 05:48:53.243307] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:53.923 [2024-12-13 05:48:53.322809] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:53.923 [2024-12-13 05:48:53.344831] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:53.923 [2024-12-13 05:48:53.344866] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:53.923 [2024-12-13 05:48:53.344874] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:53.923 [2024-12-13 05:48:53.344879] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:53.923 [2024-12-13 05:48:53.344885] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:53.923 [2024-12-13 05:48:53.345957] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:32:53.923 [2024-12-13 05:48:53.345959] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:32:53.923 05:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:53.923 05:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:32:53.923 05:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:53.923 05:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:53.923 05:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:53.923 05:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:53.924 05:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=493037 00:32:53.924 05:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:53.924 [2024-12-13 05:48:53.642470] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:53.924 05:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:32:53.924 Malloc0 00:32:53.924 05:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 -r -m 2 00:32:54.182 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:54.440 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:54.440 [2024-12-13 05:48:54.454062] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:54.699 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:54.699 [2024-12-13 05:48:54.650518] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:54.699 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=493289 00:32:54.699 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:32:54.699 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:32:54.699 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 493289 /var/tmp/bdevperf.sock 00:32:54.699 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 493289 ']' 00:32:54.699 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:54.699 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:54.699 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:54.699 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
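By this point the target side of the multipath fixture is fully assembled: a TCP transport, a 64 MiB malloc bdev with 512-byte blocks serving as the namespace of nqn.2016-06.io.spdk:cnode1, and two listeners on 10.0.0.2 (ports 4420 and 4421); bdevperf has just been launched and will attach the same controller through both ports. Condensed into a sketch (rpc.py stands in for the full scripts/rpc.py path used in the trace; a running nvmf_tgt inside the cvl_0_0_ns_spdk namespace is assumed):

# target-side setup, exactly the RPCs traced above
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 -b Malloc0
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
# the two bdev_nvme_attach_controller calls (one per port, -x multipath) are traced next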
00:32:54.699 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:54.699 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:54.957 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:54.957 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:32:54.957 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:32:55.215 05:48:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:32:55.472 Nvme0n1 00:32:55.472 05:48:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:32:55.728 Nvme0n1 00:32:55.728 05:48:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:32:55.728 05:48:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:32:58.254 05:48:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:32:58.254 05:48:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:32:58.254 05:48:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:32:58.254 05:48:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:32:59.185 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:32:59.185 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:32:59.185 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:59.185 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:59.443 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:59.443 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:32:59.443 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:59.443 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:59.700 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:59.700 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:59.700 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:59.700 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:59.958 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:59.958 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:59.958 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:59.958 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:00.215 05:49:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:00.215 05:49:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:00.215 05:49:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:00.215 05:49:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:00.215 05:49:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:00.215 05:49:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:00.215 05:49:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:00.215 05:49:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:00.473 05:49:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:00.473 05:49:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:33:00.473 05:49:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 
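The set_ANA_state helper driving this phase (host/multipath_status.sh@59 and @60) is just a pair of listener-state RPCs, one per port; the 4420 call was traced in the previous record and the matching 4421 call follows in the next one. A minimal reconstruction from those two records (rpc.py path abbreviated as above):

# flips the ANA state of each listener; the two ports may get different states
set_ANA_state() {
    local state_4420=$1 state_4421=$2
    rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420 -n "$state_4420"
    rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4421 -n "$state_4421"
}
set_ANA_state non_optimized optimized   # the invocation in flight here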
00:33:00.730 05:49:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:00.987 05:49:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:33:01.919 05:49:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:33:01.919 05:49:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:33:01.919 05:49:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:01.919 05:49:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:02.177 05:49:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:02.177 05:49:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:02.177 05:49:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:02.177 05:49:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:02.435 05:49:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:02.435 05:49:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:02.435 05:49:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:02.435 05:49:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:02.692 05:49:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:02.692 05:49:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:02.692 05:49:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:02.692 05:49:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:02.692 05:49:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:02.692 05:49:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:02.693 05:49:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
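Each check_status round above expands into six port_status probes, one per (port, attribute) pair across current, connected and accessible. As the @64 records show, a probe is one bdev_nvme_get_io_paths RPC against the bdevperf socket piped through a jq select on the listener port; a minimal reconstruction (field and attribute names exactly as they appear in the trace):

# returns 0 iff the given attribute of the path on $port equals $expected
port_status() {
    local port=$1 attr=$2 expected=$3
    local actual
    actual=$(rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
        | jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$attr")
    [[ "$actual" == "$expected" ]]
}
port_status 4420 accessible true   # the probe in flight at this point in the trace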
00:33:02.693 05:49:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:02.950 05:49:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:02.950 05:49:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:02.950 05:49:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:02.950 05:49:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:03.208 05:49:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:03.208 05:49:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:33:03.208 05:49:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:03.465 05:49:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:33:03.722 05:49:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:33:04.655 05:49:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:33:04.655 05:49:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:04.655 05:49:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:04.655 05:49:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:04.913 05:49:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:04.913 05:49:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:04.913 05:49:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:04.913 05:49:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:05.170 05:49:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:05.170 05:49:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:05.170 05:49:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:05.170 05:49:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:05.170 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:05.170 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:05.170 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:05.170 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:05.428 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:05.428 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:05.428 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:05.428 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:05.685 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:05.685 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:05.685 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:05.685 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:05.943 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:05.943 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:33:05.943 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:06.200 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:33:06.200 05:49:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:33:07.572 05:49:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:33:07.572 05:49:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:07.572 05:49:07 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:07.572 05:49:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:07.572 05:49:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:07.572 05:49:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:07.572 05:49:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:07.572 05:49:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:07.830 05:49:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:07.830 05:49:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:07.830 05:49:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:07.830 05:49:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:07.830 05:49:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:07.830 05:49:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:07.830 05:49:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:07.830 05:49:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:08.088 05:49:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:08.088 05:49:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:08.088 05:49:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:08.088 05:49:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:08.345 05:49:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:08.345 05:49:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:33:08.345 05:49:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:08.345 05:49:08 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:08.603 05:49:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:08.603 05:49:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:33:08.603 05:49:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:33:08.860 05:49:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:33:08.860 05:49:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:33:10.230 05:49:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:33:10.230 05:49:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:33:10.230 05:49:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:10.230 05:49:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:10.230 05:49:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:10.230 05:49:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:10.230 05:49:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:10.230 05:49:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:10.230 05:49:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:10.230 05:49:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:10.230 05:49:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:10.230 05:49:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:10.488 05:49:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:10.488 05:49:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:10.488 05:49:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:10.488 05:49:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:10.746 05:49:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:10.746 05:49:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:33:10.746 05:49:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:10.746 05:49:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:11.003 05:49:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:11.003 05:49:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:33:11.003 05:49:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:11.003 05:49:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:11.261 05:49:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:11.261 05:49:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:33:11.261 05:49:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:33:11.261 05:49:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:11.518 05:49:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:33:12.450 05:49:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:33:12.450 05:49:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:33:12.450 05:49:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:12.450 05:49:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:12.708 05:49:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:12.708 05:49:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:12.708 05:49:12 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:12.708 05:49:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:12.966 05:49:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:12.966 05:49:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:12.966 05:49:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:12.966 05:49:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:13.224 05:49:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:13.224 05:49:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:13.224 05:49:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:13.224 05:49:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:13.481 05:49:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:13.481 05:49:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:33:13.481 05:49:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:13.481 05:49:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:13.481 05:49:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:13.481 05:49:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:13.481 05:49:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:13.481 05:49:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:13.739 05:49:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:13.739 05:49:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:33:13.997 05:49:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # 
set_ANA_state optimized optimized 00:33:13.997 05:49:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:33:14.254 05:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:14.512 05:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:33:15.444 05:49:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:33:15.444 05:49:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:15.444 05:49:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:15.444 05:49:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:15.701 05:49:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:15.701 05:49:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:15.701 05:49:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:15.701 05:49:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:15.958 05:49:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:15.958 05:49:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:15.958 05:49:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:15.958 05:49:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:15.958 05:49:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:15.958 05:49:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:15.958 05:49:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:15.958 05:49:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:16.215 05:49:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:16.215 05:49:16 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:16.215 05:49:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:16.215 05:49:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:16.472 05:49:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:16.472 05:49:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:16.472 05:49:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:16.472 05:49:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:16.729 05:49:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:16.729 05:49:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:33:16.729 05:49:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:16.986 05:49:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:17.244 05:49:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:33:18.174 05:49:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:33:18.174 05:49:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:33:18.174 05:49:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:18.174 05:49:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:18.431 05:49:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:18.431 05:49:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:18.431 05:49:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:18.431 05:49:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:18.431 05:49:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:18.431 05:49:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:18.431 05:49:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:18.431 05:49:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:18.689 05:49:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:18.689 05:49:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:18.689 05:49:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:18.689 05:49:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:18.948 05:49:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:18.948 05:49:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:18.948 05:49:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:18.948 05:49:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:19.206 05:49:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:19.206 05:49:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:19.206 05:49:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:19.206 05:49:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:19.464 05:49:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:19.464 05:49:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:33:19.464 05:49:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:19.722 05:49:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:33:19.981 05:49:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 
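From the bdev_nvme_set_multipath_policy call at @116 onward, Nvme0n1 runs with the active_active policy, so rounds with matching ANA states on both listeners now expect current=true on both ports at once (check_status true true ...), whereas the default active_passive policy kept exactly one path current in the earlier rounds. To eyeball the same state outside the harness, the per-path attributes can be dumped in one line over the same RPC (a convenience one-liner for illustration, not part of the test script):

# prints one line per I/O path: port, current, connected, accessible
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths | jq -r \
    '.poll_groups[].io_paths[] | "\(.transport.trsvcid) current=\(.current) connected=\(.connected) accessible=\(.accessible)"'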
00:33:20.916 05:49:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:33:20.916 05:49:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:20.916 05:49:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:20.916 05:49:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:21.174 05:49:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:21.174 05:49:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:21.174 05:49:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:21.174 05:49:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:21.174 05:49:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:21.174 05:49:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:21.174 05:49:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:21.174 05:49:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:21.432 05:49:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:21.432 05:49:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:21.432 05:49:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:21.432 05:49:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:21.690 05:49:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:21.690 05:49:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:21.690 05:49:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:21.690 05:49:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:21.948 05:49:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:21.948 05:49:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:21.948 05:49:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:21.948 05:49:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:22.207 05:49:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:22.207 05:49:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:33:22.207 05:49:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:22.466 05:49:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:33:22.466 05:49:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:33:23.841 05:49:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:33:23.841 05:49:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:23.841 05:49:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:23.841 05:49:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:23.841 05:49:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:23.841 05:49:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:23.841 05:49:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:23.841 05:49:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:24.100 05:49:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:24.100 05:49:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:24.100 05:49:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:24.100 05:49:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:24.100 05:49:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == 
\t\r\u\e ]] 00:33:24.100 05:49:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:24.100 05:49:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:24.100 05:49:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:24.359 05:49:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:24.359 05:49:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:24.359 05:49:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:24.359 05:49:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:24.617 05:49:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:24.617 05:49:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:33:24.617 05:49:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:24.617 05:49:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:24.876 05:49:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:24.876 05:49:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 493289 00:33:24.876 05:49:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 493289 ']' 00:33:24.876 05:49:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 493289 00:33:24.876 05:49:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:33:24.876 05:49:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:24.876 05:49:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 493289 00:33:24.876 05:49:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:33:24.876 05:49:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:33:24.876 05:49:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 493289' 00:33:24.876 killing process with pid 493289 00:33:24.876 05:49:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 493289 00:33:24.876 05:49:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 493289 00:33:24.876 { 00:33:24.876 "results": [ 00:33:24.876 { 00:33:24.876 "job": "Nvme0n1", 00:33:24.876 
"core_mask": "0x4", 00:33:24.876 "workload": "verify", 00:33:24.876 "status": "terminated", 00:33:24.876 "verify_range": { 00:33:24.876 "start": 0, 00:33:24.876 "length": 16384 00:33:24.876 }, 00:33:24.876 "queue_depth": 128, 00:33:24.876 "io_size": 4096, 00:33:24.876 "runtime": 28.849352, 00:33:24.876 "iops": 10748.733628401775, 00:33:24.876 "mibps": 41.98724073594443, 00:33:24.876 "io_failed": 0, 00:33:24.876 "io_timeout": 0, 00:33:24.876 "avg_latency_us": 11888.79728966977, 00:33:24.876 "min_latency_us": 184.32, 00:33:24.876 "max_latency_us": 3019898.88 00:33:24.876 } 00:33:24.876 ], 00:33:24.876 "core_count": 1 00:33:24.876 } 00:33:25.139 05:49:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 493289 00:33:25.139 05:49:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:33:25.139 [2024-12-13 05:48:54.727121] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:33:25.139 [2024-12-13 05:48:54.727175] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid493289 ] 00:33:25.139 [2024-12-13 05:48:54.799721] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:25.139 [2024-12-13 05:48:54.821775] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:33:25.139 Running I/O for 90 seconds... 00:33:25.139 11508.00 IOPS, 44.95 MiB/s [2024-12-13T04:49:25.154Z] 11560.50 IOPS, 45.16 MiB/s [2024-12-13T04:49:25.154Z] 11620.67 IOPS, 45.39 MiB/s [2024-12-13T04:49:25.154Z] 11637.25 IOPS, 45.46 MiB/s [2024-12-13T04:49:25.154Z] 11599.60 IOPS, 45.31 MiB/s [2024-12-13T04:49:25.154Z] 11616.67 IOPS, 45.38 MiB/s [2024-12-13T04:49:25.154Z] 11637.29 IOPS, 45.46 MiB/s [2024-12-13T04:49:25.154Z] 11662.12 IOPS, 45.56 MiB/s [2024-12-13T04:49:25.154Z] 11640.89 IOPS, 45.47 MiB/s [2024-12-13T04:49:25.154Z] 11636.90 IOPS, 45.46 MiB/s [2024-12-13T04:49:25.154Z] 11628.00 IOPS, 45.42 MiB/s [2024-12-13T04:49:25.154Z] 11627.92 IOPS, 45.42 MiB/s [2024-12-13T04:49:25.154Z] [2024-12-13 05:49:08.626689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:6904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.139 [2024-12-13 05:49:08.626728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:33:25.139 [2024-12-13 05:49:08.626763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:6936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.139 [2024-12-13 05:49:08.626772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:33:25.139 [2024-12-13 05:49:08.626785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:6944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.139 [2024-12-13 05:49:08.626792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:33:25.139 [2024-12-13 05:49:08.626805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:6952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.139 [2024-12-13 05:49:08.626812] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:33:25.139 [... ~105 further repeated nvme_qpair NOTICE command/completion pairs elided: WRITE sqid:1 lba:6960-7792 len:8 (plus READ completions for lba 6912-6928), every completion ASYMMETRIC ACCESS INACCESSIBLE (03/02) ...] 00:33:25.142 [2024-12-13 05:49:08.630380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE
sqid:1 cid:10 nsid:1 lba:7800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.142 [2024-12-13 05:49:08.630387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:33:25.142 [2024-12-13 05:49:08.630403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:7808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.142 [2024-12-13 05:49:08.630409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:25.142 [2024-12-13 05:49:08.630426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:7816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.142 [2024-12-13 05:49:08.630433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:25.142 [2024-12-13 05:49:08.630453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.142 [2024-12-13 05:49:08.630460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:25.142 [2024-12-13 05:49:08.630477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:7832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.142 [2024-12-13 05:49:08.630483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:25.142 [2024-12-13 05:49:08.630504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:7840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.142 [2024-12-13 05:49:08.630511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:25.142 [2024-12-13 05:49:08.630527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:7848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.142 [2024-12-13 05:49:08.630534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:25.142 [2024-12-13 05:49:08.630551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:7856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.142 [2024-12-13 05:49:08.630557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:25.142 [2024-12-13 05:49:08.630580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:7864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.142 [2024-12-13 05:49:08.630587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:25.142 [2024-12-13 05:49:08.630604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:7872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.142 [2024-12-13 05:49:08.630611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:33:25.142 11409.15 IOPS, 44.57 MiB/s [2024-12-13T04:49:25.157Z] 
10594.21 IOPS, 41.38 MiB/s [2024-12-13T04:49:25.157Z] 9887.93 IOPS, 38.62 MiB/s [2024-12-13T04:49:25.157Z] 9444.12 IOPS, 36.89 MiB/s [2024-12-13T04:49:25.157Z] 9566.24 IOPS, 37.37 MiB/s [2024-12-13T04:49:25.157Z] 9682.17 IOPS, 37.82 MiB/s [2024-12-13T04:49:25.157Z] 9871.68 IOPS, 38.56 MiB/s [2024-12-13T04:49:25.157Z] 10057.20 IOPS, 39.29 MiB/s [2024-12-13T04:49:25.157Z] 10216.95 IOPS, 39.91 MiB/s [2024-12-13T04:49:25.157Z] 10270.77 IOPS, 40.12 MiB/s [2024-12-13T04:49:25.157Z] 10323.78 IOPS, 40.33 MiB/s [2024-12-13T04:49:25.157Z] 10384.79 IOPS, 40.57 MiB/s [2024-12-13T04:49:25.157Z] 10504.88 IOPS, 41.03 MiB/s [2024-12-13T04:49:25.157Z] 10625.19 IOPS, 41.50 MiB/s [2024-12-13T04:49:25.157Z] [2024-12-13 05:49:22.425670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:38088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.142 [2024-12-13 05:49:22.425710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:25.142 [2024-12-13 05:49:22.425741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:38104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.142 [2024-12-13 05:49:22.425751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:33:25.142 [2024-12-13 05:49:22.425763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:38120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.142 [2024-12-13 05:49:22.425770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:33:25.142 [2024-12-13 05:49:22.425783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:38136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.142 [2024-12-13 05:49:22.425790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:33:25.142 [2024-12-13 05:49:22.425802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:38152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.142 [2024-12-13 05:49:22.425809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:33:25.142 [2024-12-13 05:49:22.425821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:38168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.142 [2024-12-13 05:49:22.425828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:33:25.142 [2024-12-13 05:49:22.425840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:38184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.142 [2024-12-13 05:49:22.425846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:33:25.142 [2024-12-13 05:49:22.425859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:38200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.142 [2024-12-13 05:49:22.425865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:000a 
p:0 m:0 dnr:0 00:33:25.142 [... ~45 further repeated nvme_qpair NOTICE command/completion pairs elided: WRITE sqid:1 lba:38216-38912 len:8 (plus interleaved READ completions for lba 38000-38048), every completion ASYMMETRIC ACCESS INACCESSIBLE (03/02) ...] 00:33:25.144 [2024-12-13 05:49:22.427350] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:38928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.144 [2024-12-13 05:49:22.427356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:33:25.144 [2024-12-13 05:49:22.427370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:38944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.144 [2024-12-13 05:49:22.427376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:25.144 [2024-12-13 05:49:22.427388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:38960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.144 [2024-12-13 05:49:22.427395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:33:25.144 [2024-12-13 05:49:22.427407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:38976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.144 [2024-12-13 05:49:22.427414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:25.144 [2024-12-13 05:49:22.427426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:38992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.144 [2024-12-13 05:49:22.427432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:25.144 [2024-12-13 05:49:22.427455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:38072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.144 [2024-12-13 05:49:22.427463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:25.144 10695.56 IOPS, 41.78 MiB/s [2024-12-13T04:49:25.159Z] 10728.79 IOPS, 41.91 MiB/s [2024-12-13T04:49:25.159Z] Received shutdown signal, test time was about 28.850015 seconds 00:33:25.144 00:33:25.144 Latency(us) 00:33:25.144 [2024-12-13T04:49:25.159Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:25.144 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:33:25.144 Verification LBA range: start 0x0 length 0x4000 00:33:25.144 Nvme0n1 : 28.85 10748.73 41.99 0.00 0.00 11888.80 184.32 3019898.88 00:33:25.144 [2024-12-13T04:49:25.159Z] =================================================================================================================== 00:33:25.144 [2024-12-13T04:49:25.159Z] Total : 10748.73 41.99 0.00 0.00 11888.80 184.32 3019898.88 00:33:25.144 05:49:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:25.144 05:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:33:25.144 05:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:33:25.144 05:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@148 -- # nvmftestfini 00:33:25.144 05:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:25.144 05:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:33:25.144 05:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:25.144 05:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:33:25.144 05:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:25.144 05:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:25.144 rmmod nvme_tcp 00:33:25.144 rmmod nvme_fabrics 00:33:25.144 rmmod nvme_keyring 00:33:25.402 05:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:25.402 05:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:33:25.402 05:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:33:25.402 05:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 493037 ']' 00:33:25.402 05:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 493037 00:33:25.402 05:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 493037 ']' 00:33:25.402 05:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 493037 00:33:25.402 05:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:33:25.402 05:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:25.402 05:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 493037 00:33:25.402 05:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:25.402 05:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:25.402 05:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 493037' 00:33:25.402 killing process with pid 493037 00:33:25.402 05:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 493037 00:33:25.402 05:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 493037 00:33:25.402 05:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:25.402 05:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:25.402 05:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:25.402 05:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:33:25.402 05:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:33:25.402 05:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:25.402 05:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:33:25.402 05:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:25.402 05:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:25.402 05:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:25.402 05:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:25.402 05:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:27.938 05:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:27.938 00:33:27.938 real 0m40.391s 00:33:27.938 user 1m49.707s 00:33:27.938 sys 0m11.451s 00:33:27.938 05:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:27.938 05:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:27.938 ************************************ 00:33:27.938 END TEST nvmf_host_multipath_status 00:33:27.938 ************************************ 00:33:27.938 05:49:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:33:27.938 05:49:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:27.939 05:49:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:27.939 05:49:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:27.939 ************************************ 00:33:27.939 START TEST nvmf_discovery_remove_ifc 00:33:27.939 ************************************ 00:33:27.939 05:49:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:33:27.939 * Looking for test storage... 
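Each suite above runs under a run_test-style wrapper from autotest_common.sh, which prints the START TEST / END TEST banners seen here and fails the build if the wrapped script exits non-zero. A minimal sketch of that pattern (simplified and hypothetical; not SPDK's actual helper, which also records per-test timing):

    #!/usr/bin/env bash
    set -e
    # run_test-style wrapper: banner, timing, failure propagation via set -e
    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        local start=$SECONDS
        "$@"    # run the suite; with set -e, a non-zero exit aborts the build
        echo "************************************"
        echo "END TEST $name ($((SECONDS - start))s)"
        echo "************************************"
    }
    run_test demo_suite bash -c 'echo suite body runs here'

The storage probe output continues below.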
00:33:27.939 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:33:27.939 [... xtrace of the coverage setup omitted: autotest_common.sh@1710-1712 runs lcov --version, scripts/common.sh cmp_versions splits 1.15 and 2 on dots and compares them field by field (lt 1.15 2 holds), and the old-style flags '--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1' are exported as LCOV_OPTS and LCOV ...]
00:33:27.939 05:49:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
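The version test condensed above works by splitting each dotted version into numeric fields and comparing left to right, treating missing fields as 0. A self-contained bash sketch of that logic (the function name version_lt is hypothetical; the harness's real helpers are lt/cmp_versions in scripts/common.sh):

    #!/usr/bin/env bash
    # version_lt A B: exit 0 when A < B, comparing dot-separated numeric fields
    version_lt() {
        local IFS=.
        local -a a=($1) b=($2)   # IFS=. splits "1.15" into (1 15)
        local i x y
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            x=${a[i]:-0} y=${b[i]:-0}   # missing fields compare as 0
            ((x < y)) && return 0
            ((x > y)) && return 1
        done
        return 1    # equal versions are not less-than
    }
    version_lt 1.15 2 && echo "lcov 1.15 predates 2.x: export old-style LCOV_OPTS"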
05:49:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s
00:33:27.939 05:49:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:33:27.939 05:49:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:33:27.939 05:49:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:33:27.939 05:49:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:33:27.939 05:49:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:33:27.939 05:49:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:33:27.939 05:49:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:33:27.939 05:49:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:33:27.939 05:49:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:33:27.939 05:49:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:33:27.939 05:49:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:33:27.939 05:49:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562
00:33:27.939 05:49:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:33:27.939 05:49:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:33:27.939 05:49:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:33:27.939 05:49:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:33:27.939 05:49:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:33:27.939 05:49:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob
00:33:27.939 05:49:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:33:27.939 05:49:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:33:27.939 05:49:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:33:27.939 [... paths/export.sh@2-@6 omitted: the same golangci/protoc/go toolchain PATH prepends (plus the standard system directories) are assigned, re-assigned, exported, and echoed five times ...]
00:33:27.939 05:49:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0
00:33:27.939 05:49:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:33:27.939 05:49:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:33:27.939 05:49:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:33:27.939 05:49:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:33:27.939 05:49:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:33:27.939 05:49:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:33:27.939 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:33:27.940 05:49:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:33:27.940 05:49:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:33:27.940 05:49:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0
00:33:27.940 05:49:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']'
00:33:27.940 05:49:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:33:27.940 05:49:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:33:27.940 05:49:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:33:27.940 05:49:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:33:27.940 05:49:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:33:27.940 05:49:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:33:27.940 05:49:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:27.940 05:49:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:27.940 05:49:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:27.940 05:49:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:27.940 05:49:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:27.940 05:49:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:27.940 05:49:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:27.940 05:49:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:27.940 05:49:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:27.940 05:49:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:27.940 05:49:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:33:27.940 05:49:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:34.509 05:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:34.509 05:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:33:34.509 05:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:34.509 05:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:34.509 05:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:34.509 05:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:34.509 05:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:34.509 05:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:33:34.509 05:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:34.509 05:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:33:34.509 05:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:33:34.509 05:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:33:34.509 05:49:33 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:33:34.509 05:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:33:34.509 05:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:33:34.509 05:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:34.509 05:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:34.509 05:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:34.509 05:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:34.509 05:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:34.509 05:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:34.509 05:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:34.509 05:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:34.509 05:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:34.509 05:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:34.509 05:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:34.509 05:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:34.509 05:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:34.509 05:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:34.509 05:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:34.509 05:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:34.509 05:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:34.509 05:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:34.509 05:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:34.509 05:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:33:34.509 Found 0000:af:00.0 (0x8086 - 0x159b) 00:33:34.509 05:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:34.509 05:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:34.509 05:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:34.509 05:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:34.509 05:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:34.509 05:49:33 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:34.509 05:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:33:34.509 Found 0000:af:00.1 (0x8086 - 0x159b) 00:33:34.509 05:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:34.509 05:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:34.509 05:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:34.509 05:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:34.509 05:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:34.509 05:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:34.509 05:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:34.509 05:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:34.509 05:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:34.509 05:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:34.509 05:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:34.509 05:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:34.509 05:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:34.509 05:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:34.509 05:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:34.509 05:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:33:34.509 Found net devices under 0000:af:00.0: cvl_0_0 00:33:34.510 05:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:34.510 05:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:34.510 05:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:34.510 05:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:34.510 05:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:34.510 05:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:34.510 05:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:34.510 05:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:34.510 05:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:33:34.510 Found net devices under 0000:af:00.1: cvl_0_1 00:33:34.510 05:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:33:34.510 05:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:34.510 05:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:33:34.510 05:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:34.510 05:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:34.510 05:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:34.510 05:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:34.510 05:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:34.510 05:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:34.510 05:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:34.510 05:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:34.510 05:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:34.510 05:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:34.510 05:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:34.510 05:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:34.510 05:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:34.510 05:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:34.510 05:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:34.510 05:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:34.510 05:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:34.510 05:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:34.510 05:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:34.510 05:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:34.510 05:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:34.510 05:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:34.510 05:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:34.510 05:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:34.510 05:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:34.510 
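The target side of the connection now lives in its own network namespace: cvl_0_0 has been moved into cvl_0_0_ns_spdk and given 10.0.0.2/24, while the initiator keeps cvl_0_1 (10.0.0.1/24) in the default namespace, with an iptables rule admitting TCP/4420; reachability is then verified with the pings below. On a machine without these e810 ports, roughly the same topology can be built with a veth pair (interface and namespace names here are made up for illustration):

    # approximate the harness topology with a veth pair (run as root)
    ip netns add nvmf_tgt_ns                        # target-side namespace
    ip link add veth_ini type veth peer name veth_tgt
    ip link set veth_tgt netns nvmf_tgt_ns          # move the target end in
    ip addr add 10.0.0.1/24 dev veth_ini            # initiator address
    ip netns exec nvmf_tgt_ns ip addr add 10.0.0.2/24 dev veth_tgt
    ip link set veth_ini up
    ip netns exec nvmf_tgt_ns ip link set veth_tgt up
    ip netns exec nvmf_tgt_ns ip link set lo up
    # admit NVMe/TCP on 4420, mirroring the iptables rule in the trace
    iptables -I INPUT 1 -i veth_ini -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                              # initiator -> target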
05:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:34.510 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:34.510 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.368 ms 00:33:34.510 00:33:34.510 --- 10.0.0.2 ping statistics --- 00:33:34.510 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:34.510 rtt min/avg/max/mdev = 0.368/0.368/0.368/0.000 ms 00:33:34.510 05:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:34.510 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:34.510 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.182 ms 00:33:34.510 00:33:34.510 --- 10.0.0.1 ping statistics --- 00:33:34.510 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:34.510 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:33:34.510 05:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:34.510 05:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:33:34.510 05:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:34.510 05:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:34.510 05:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:34.510 05:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:34.510 05:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:34.510 05:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:34.510 05:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:34.510 05:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:33:34.510 05:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:34.510 05:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:34.510 05:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:34.510 05:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=501629 00:33:34.510 05:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 501629 00:33:34.510 05:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:33:34.510 05:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 501629 ']' 00:33:34.510 05:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:34.510 05:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:34.510 05:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:33:34.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:34.510 05:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:34.510 05:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:34.510 [2024-12-13 05:49:33.696659] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:33:34.510 [2024-12-13 05:49:33.696699] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:34.510 [2024-12-13 05:49:33.756340] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:34.510 [2024-12-13 05:49:33.777533] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:34.510 [2024-12-13 05:49:33.777566] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:34.510 [2024-12-13 05:49:33.777572] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:34.510 [2024-12-13 05:49:33.777578] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:34.510 [2024-12-13 05:49:33.777583] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:34.510 [2024-12-13 05:49:33.778069] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:33:34.510 05:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:34.510 05:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:33:34.510 05:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:34.510 05:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:34.510 05:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:34.510 05:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:34.510 05:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:33:34.510 05:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:34.510 05:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:34.510 [2024-12-13 05:49:33.915374] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:34.510 [2024-12-13 05:49:33.923554] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:33:34.510 null0 00:33:34.510 [2024-12-13 05:49:33.955536] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:34.510 05:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:34.510 05:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=501648 00:33:34.510 05:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 
--wait-for-rpc -L bdev_nvme 00:33:34.510 05:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 501648 /tmp/host.sock 00:33:34.510 05:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 501648 ']' 00:33:34.510 05:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:33:34.510 05:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:34.510 05:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:33:34.510 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:33:34.510 05:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:34.510 05:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:34.510 [2024-12-13 05:49:34.022713] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:33:34.511 [2024-12-13 05:49:34.022754] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid501648 ] 00:33:34.511 [2024-12-13 05:49:34.097221] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:34.511 [2024-12-13 05:49:34.119915] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:33:34.511 05:49:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:34.511 05:49:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:33:34.511 05:49:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:34.511 05:49:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:33:34.511 05:49:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:34.511 05:49:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:34.511 05:49:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:34.511 05:49:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:33:34.511 05:49:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:34.511 05:49:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:34.511 05:49:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:34.511 05:49:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:33:34.511 05:49:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:33:34.511 05:49:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:35.447 [2024-12-13 05:49:35.271510] bdev_nvme.c:7516:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:33:35.447 [2024-12-13 05:49:35.271531] bdev_nvme.c:7602:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:33:35.447 [2024-12-13 05:49:35.271546] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:35.447 [2024-12-13 05:49:35.400943] bdev_nvme.c:7445:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:33:35.705 [2024-12-13 05:49:35.582863] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:33:35.705 [2024-12-13 05:49:35.583532] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x21ccb50:1 started. 00:33:35.705 [2024-12-13 05:49:35.584830] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:33:35.705 [2024-12-13 05:49:35.584869] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:33:35.705 [2024-12-13 05:49:35.584887] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:33:35.705 [2024-12-13 05:49:35.584899] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:33:35.705 [2024-12-13 05:49:35.584920] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:33:35.705 05:49:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:35.705 05:49:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:33:35.705 05:49:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:35.705 05:49:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:35.705 [2024-12-13 05:49:35.590842] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x21ccb50 was disconnected and freed. delete nvme_qpair. 
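At this point the host app listening on /tmp/host.sock has attached the discovery controller on 10.0.0.2:8009, created controller nvme0 for nqn.2016-06.io.spdk:cnode0, and exposed the namespace as bdev nvme0n1. Reduced to plain rpc.py calls, the host-side sequence traced above looks roughly like this (a sketch; paths are relative to an SPDK checkout and the socket matches the -r flag used when starting the host app):

    rpc="scripts/rpc.py -s /tmp/host.sock"
    $rpc bdev_nvme_set_options -e 1        # option as passed by the script
    $rpc framework_start_init              # finish --wait-for-rpc startup
    $rpc bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 \
        -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 \
        --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach
    # wait_for_bdev then polls once per second until nvme0n1 shows up:
    until $rpc bdev_get_bdevs | grep -q '"name": "nvme0n1"'; do sleep 1; done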
00:33:35.705 05:49:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:35.705 05:49:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:35.705 05:49:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:35.705 05:49:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:35.705 05:49:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:35.705 05:49:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:35.705 05:49:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:33:35.705 05:49:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:33:35.705 05:49:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:33:35.963 05:49:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:33:35.963 05:49:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:35.963 05:49:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:35.963 05:49:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:35.963 05:49:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:35.963 05:49:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:35.963 05:49:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:35.963 05:49:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:35.963 05:49:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:35.963 05:49:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:35.963 05:49:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:36.899 05:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:36.899 05:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:36.899 05:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:36.899 05:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:36.899 05:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:36.899 05:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:36.899 05:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:36.899 05:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:36.899 05:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:36.899 05:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1
[... identical wait_for_bdev iterations omitted (roughly one per second, 05:49:37-05:49:39): each runs get_bdev_list (rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs), still sees nvme0n1, and sleeps 1 second ...]
00:33:39.963 05:49:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:33:40.222 05:49:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:40.222 05:49:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:41.156 05:49:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:41.156 05:49:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:41.156 05:49:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:41.156 05:49:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:41.156 05:49:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:41.156 05:49:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:41.156 05:49:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:41.156 05:49:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:41.156 [2024-12-13 05:49:41.026493] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:33:41.156 [2024-12-13 05:49:41.026534] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:41.156 [2024-12-13 05:49:41.026544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.156 [2024-12-13 05:49:41.026553] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:41.156 [2024-12-13 05:49:41.026560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.156 [2024-12-13 05:49:41.026568] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:41.156 [2024-12-13 05:49:41.026574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.156 [2024-12-13 05:49:41.026581] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:41.156 [2024-12-13 05:49:41.026589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.156 [2024-12-13 05:49:41.026596] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:33:41.157 [2024-12-13 05:49:41.026602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.157 [2024-12-13 05:49:41.026609] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a9290 is same with the state(6) to be set 00:33:41.157 05:49:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:41.157 [2024-12-13 05:49:41.036497] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x21a9290 (9): Bad file descriptor 00:33:41.157 05:49:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:41.157 [2024-12-13 05:49:41.046532] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:33:41.157 [2024-12-13 05:49:41.046544] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:33:41.157 [2024-12-13 05:49:41.046551] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:41.157 [2024-12-13 05:49:41.046555] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:41.157 [2024-12-13 05:49:41.046576] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:33:42.092 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:42.092 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:42.092 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:42.092 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:42.092 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:42.092 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:42.092 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:42.092 [2024-12-13 05:49:42.086515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:33:42.092 [2024-12-13 05:49:42.086593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a9290 with addr=10.0.0.2, port=4420 00:33:42.092 [2024-12-13 05:49:42.086635] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a9290 is same with the state(6) to be set 00:33:42.092 [2024-12-13 05:49:42.086686] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a9290 (9): Bad file descriptor 00:33:42.092 [2024-12-13 05:49:42.087630] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:33:42.092 [2024-12-13 05:49:42.087691] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:42.092 [2024-12-13 05:49:42.087714] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:42.092 [2024-12-13 05:49:42.087738] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:42.092 [2024-12-13 05:49:42.087758] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:42.092 [2024-12-13 05:49:42.087774] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:42.092 [2024-12-13 05:49:42.087787] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
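A minimal sketch of the wait loop that produces the repeated get_bdev_list / sleep 1 pairs traced above, reconstructed from this xtrace rather than copied from discovery_remove_ifc.sh (the script's exact body may differ):

    # List bdev names via the host app's RPC socket as one sorted line.
    get_bdev_list() {
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    # Poll once per second until the bdev list matches the expectation,
    # e.g. wait_for_bdev '' while waiting for nvme0n1 to vanish after the
    # interface is pulled, or wait_for_bdev nvme1n1 once discovery re-attaches.
    wait_for_bdev() {
        while [[ "$(get_bdev_list)" != "$1" ]]; do
            sleep 1
        done
    }

The trace's repeated "[[ nvme0n1 != '' ]]" checks are this loop's condition evaluated while the list is still non-empty.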
00:33:42.092 [2024-12-13 05:49:42.087808] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:42.092 [2024-12-13 05:49:42.087822] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:42.092 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:42.350 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:42.350 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:43.286 [2024-12-13 05:49:43.090329] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:33:43.286 [2024-12-13 05:49:43.090348] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:43.286 [2024-12-13 05:49:43.090358] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:43.286 [2024-12-13 05:49:43.090365] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:43.286 [2024-12-13 05:49:43.090371] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:33:43.286 [2024-12-13 05:49:43.090377] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:43.286 [2024-12-13 05:49:43.090382] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:43.286 [2024-12-13 05:49:43.090386] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
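Hedged aside: while bdev_nvme cycles through the delete-qpairs / disconnect / reconnect states logged here, the attached controllers can be listed from the same RPC socket with a standard SPDK call (this run's trace does not show it being used):

    # Report the bdev_nvme controllers known to the host app; jq picks names.
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'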
00:33:43.286 [2024-12-13 05:49:43.090407] bdev_nvme.c:7267:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:33:43.286 [2024-12-13 05:49:43.090426] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:43.286 [2024-12-13 05:49:43.090434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.286 [2024-12-13 05:49:43.090443] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:43.286 [2024-12-13 05:49:43.090465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.286 [2024-12-13 05:49:43.090472] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:43.286 [2024-12-13 05:49:43.090482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.286 [2024-12-13 05:49:43.090490] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:43.286 [2024-12-13 05:49:43.090496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.286 [2024-12-13 05:49:43.090503] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:33:43.286 [2024-12-13 05:49:43.090509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.287 [2024-12-13 05:49:43.090516] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 
00:33:43.287 [2024-12-13 05:49:43.090880] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21989e0 (9): Bad file descriptor 00:33:43.287 [2024-12-13 05:49:43.091890] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:33:43.287 [2024-12-13 05:49:43.091901] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:33:43.287 05:49:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:43.287 05:49:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:43.287 05:49:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:43.287 05:49:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:43.287 05:49:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:43.287 05:49:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:43.287 05:49:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:43.287 05:49:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:43.287 05:49:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:33:43.287 05:49:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:43.287 05:49:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:43.287 05:49:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:33:43.287 05:49:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:43.287 05:49:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:43.287 05:49:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:43.287 05:49:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:43.287 05:49:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:43.287 05:49:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:43.287 05:49:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:43.287 05:49:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:43.287 05:49:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:33:43.287 05:49:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:44.663 05:49:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:44.663 05:49:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:44.663 05:49:44 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:44.663 05:49:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:44.663 05:49:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:44.663 05:49:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:44.663 05:49:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:44.663 05:49:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:44.663 05:49:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:33:44.663 05:49:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:45.232 [2024-12-13 05:49:45.188636] bdev_nvme.c:7516:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:33:45.232 [2024-12-13 05:49:45.188651] bdev_nvme.c:7602:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:33:45.232 [2024-12-13 05:49:45.188666] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:45.493 [2024-12-13 05:49:45.274930] bdev_nvme.c:7445:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:33:45.493 [2024-12-13 05:49:45.336435] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:33:45.493 [2024-12-13 05:49:45.336972] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x21ab540:1 started. 00:33:45.493 [2024-12-13 05:49:45.337974] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:33:45.493 [2024-12-13 05:49:45.338005] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:33:45.493 [2024-12-13 05:49:45.338022] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:33:45.493 [2024-12-13 05:49:45.338035] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:33:45.493 [2024-12-13 05:49:45.338042] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:33:45.493 [2024-12-13 05:49:45.345872] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x21ab540 was disconnected and freed. delete nvme_qpair. 
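For context, the discovery service whose attach and log-page activity is logged above is started with SPDK's standard discovery RPC; a hedged usage sketch (the exact flags this run passed fall outside this excerpt):

    # Start a discovery service against the target's discovery port (8009)
    # and attach any reported subsystems, creating bdevs named nvme*.
    rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.2 -s 8009 -f ipv4

The "new subsystem nvme1" and "attach nvme1 done" lines above are this service finding nqn.2016-06.io.spdk:cnode0 again once the interface is back, which is what lets wait_for_bdev nvme1n1 succeed.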
00:33:45.493 05:49:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:45.493 05:49:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:45.493 05:49:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:45.493 05:49:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:45.493 05:49:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:45.493 05:49:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:45.493 05:49:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:45.493 05:49:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:45.493 05:49:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:33:45.493 05:49:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:33:45.493 05:49:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 501648 00:33:45.493 05:49:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 501648 ']' 00:33:45.493 05:49:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 501648 00:33:45.493 05:49:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:33:45.493 05:49:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:45.493 05:49:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 501648 00:33:45.493 05:49:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:45.493 05:49:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:45.493 05:49:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 501648' 00:33:45.493 killing process with pid 501648 00:33:45.493 05:49:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 501648 00:33:45.493 05:49:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 501648 00:33:45.753 05:49:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:33:45.753 05:49:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:45.753 05:49:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:33:45.753 05:49:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:45.753 05:49:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:33:45.753 05:49:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:45.753 05:49:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:45.753 rmmod nvme_tcp 00:33:45.753 rmmod nvme_fabrics 00:33:45.753 rmmod nvme_keyring 00:33:45.753 05:49:45 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:45.753 05:49:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:33:45.753 05:49:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:33:45.753 05:49:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 501629 ']' 00:33:45.753 05:49:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 501629 00:33:45.753 05:49:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 501629 ']' 00:33:45.753 05:49:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 501629 00:33:45.753 05:49:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:33:45.753 05:49:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:45.753 05:49:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 501629 00:33:45.753 05:49:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:45.753 05:49:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:45.753 05:49:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 501629' 00:33:45.753 killing process with pid 501629 00:33:45.753 05:49:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 501629 00:33:45.753 05:49:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 501629 00:33:46.012 05:49:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:46.012 05:49:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:46.012 05:49:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:46.012 05:49:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:33:46.012 05:49:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:33:46.012 05:49:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:46.012 05:49:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:33:46.012 05:49:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:46.012 05:49:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:46.012 05:49:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:46.012 05:49:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:46.012 05:49:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:48.545 05:49:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:48.545 00:33:48.545 real 0m20.389s 00:33:48.545 user 0m24.709s 00:33:48.545 sys 0m5.775s 00:33:48.545 05:49:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:33:48.545 05:49:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:48.545 ************************************ 00:33:48.545 END TEST nvmf_discovery_remove_ifc 00:33:48.545 ************************************ 00:33:48.545 05:49:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:33:48.545 05:49:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:48.545 05:49:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:48.545 05:49:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:48.545 ************************************ 00:33:48.545 START TEST nvmf_identify_kernel_target 00:33:48.545 ************************************ 00:33:48.545 05:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:33:48.545 * Looking for test storage... 00:33:48.545 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:48.545 05:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:48.545 05:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lcov --version 00:33:48.545 05:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:48.545 05:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:48.546 05:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:48.546 05:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:48.546 05:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:48.546 05:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:33:48.546 05:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:33:48.546 05:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:33:48.546 05:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:33:48.546 05:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:33:48.546 05:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:33:48.546 05:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:33:48.546 05:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:48.546 05:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:33:48.546 05:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:33:48.546 05:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:48.546 05:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:48.546 05:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:33:48.546 05:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:33:48.546 05:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:48.546 05:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:33:48.546 05:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:33:48.546 05:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:33:48.546 05:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:33:48.546 05:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:48.546 05:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:33:48.546 05:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:33:48.546 05:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:48.546 05:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:48.546 05:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:33:48.546 05:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:48.546 05:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:48.546 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:48.546 --rc genhtml_branch_coverage=1 00:33:48.546 --rc genhtml_function_coverage=1 00:33:48.546 --rc genhtml_legend=1 00:33:48.546 --rc geninfo_all_blocks=1 00:33:48.546 --rc geninfo_unexecuted_blocks=1 00:33:48.546 00:33:48.546 ' 00:33:48.546 05:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:48.546 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:48.546 --rc genhtml_branch_coverage=1 00:33:48.546 --rc genhtml_function_coverage=1 00:33:48.546 --rc genhtml_legend=1 00:33:48.546 --rc geninfo_all_blocks=1 00:33:48.546 --rc geninfo_unexecuted_blocks=1 00:33:48.546 00:33:48.546 ' 00:33:48.546 05:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:48.546 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:48.546 --rc genhtml_branch_coverage=1 00:33:48.546 --rc genhtml_function_coverage=1 00:33:48.546 --rc genhtml_legend=1 00:33:48.546 --rc geninfo_all_blocks=1 00:33:48.546 --rc geninfo_unexecuted_blocks=1 00:33:48.546 00:33:48.546 ' 00:33:48.546 05:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:48.546 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:48.546 --rc genhtml_branch_coverage=1 00:33:48.546 --rc genhtml_function_coverage=1 00:33:48.546 --rc genhtml_legend=1 00:33:48.546 --rc geninfo_all_blocks=1 00:33:48.546 --rc geninfo_unexecuted_blocks=1 00:33:48.546 00:33:48.546 ' 00:33:48.546 05:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:48.546 05:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:33:48.546 05:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:48.546 05:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:48.546 05:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:48.546 05:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:48.546 05:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:48.546 05:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:48.546 05:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:48.546 05:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:48.546 05:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:48.546 05:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:48.546 05:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:33:48.546 05:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:33:48.546 05:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:48.546 05:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:48.546 05:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:48.546 05:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:48.546 05:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:48.546 05:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:33:48.546 05:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:48.546 05:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:48.546 05:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:48.546 05:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:48.546 05:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:48.546 05:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:48.546 05:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:33:48.546 05:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:48.546 05:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:33:48.546 05:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:48.546 05:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:48.546 05:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:48.546 05:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:48.546 05:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:48.546 05:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:33:48.546 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:48.546 05:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:48.546 05:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:48.546 05:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:48.546 05:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:33:48.546 05:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:48.546 05:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:48.546 05:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:48.546 05:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:48.546 05:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:48.547 05:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:48.547 05:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:48.547 05:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:48.547 05:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:48.547 05:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:48.547 05:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:33:48.547 05:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:33:53.815 05:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:53.815 05:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:33:53.815 05:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:53.815 05:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:53.815 05:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:53.815 05:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:53.815 05:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:53.815 05:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:33:53.815 05:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:53.815 05:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:33:53.815 05:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:33:53.815 05:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:33:53.815 05:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:33:53.815 05:49:53 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:33:53.815 05:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:33:53.815 05:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:53.815 05:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:53.815 05:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:53.815 05:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:53.815 05:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:53.815 05:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:53.815 05:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:53.815 05:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:53.815 05:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:53.815 05:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:53.815 05:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:53.815 05:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:53.815 05:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:53.815 05:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:53.815 05:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:53.815 05:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:53.816 05:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:53.816 05:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:53.816 05:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:53.816 05:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:33:53.816 Found 0000:af:00.0 (0x8086 - 0x159b) 00:33:53.816 05:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:53.816 05:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:53.816 05:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:53.816 05:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:53.816 05:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:53.816 05:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:53.816 05:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:33:53.816 Found 0000:af:00.1 (0x8086 - 0x159b) 00:33:53.816 05:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:53.816 05:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:53.816 05:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:53.816 05:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:53.816 05:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:53.816 05:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:53.816 05:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:53.816 05:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:53.816 05:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:53.816 05:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:53.816 05:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:53.816 05:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:53.816 05:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:53.816 05:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:53.816 05:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:53.816 05:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:33:53.816 Found net devices under 0000:af:00.0: cvl_0_0 00:33:53.816 05:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:53.816 05:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:53.816 05:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:53.816 05:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:53.816 05:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:53.816 05:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:53.816 05:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:53.816 05:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:53.816 05:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:33:53.816 Found net devices under 0000:af:00.1: cvl_0_1 00:33:53.816 05:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:33:53.816 05:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:53.816 05:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:33:53.816 05:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:53.816 05:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:53.816 05:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:53.816 05:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:53.816 05:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:53.816 05:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:53.816 05:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:53.816 05:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:53.816 05:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:53.816 05:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:53.816 05:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:53.816 05:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:53.816 05:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:53.816 05:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:53.816 05:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:53.816 05:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:53.816 05:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:53.816 05:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:53.816 05:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:53.816 05:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:53.816 05:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:53.816 05:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:54.075 05:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:54.075 05:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:54.075 05:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:54.075 05:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:54.075 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:54.075 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.361 ms 00:33:54.075 00:33:54.075 --- 10.0.0.2 ping statistics --- 00:33:54.075 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:54.075 rtt min/avg/max/mdev = 0.361/0.361/0.361/0.000 ms 00:33:54.075 05:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:54.075 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:54.075 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:33:54.075 00:33:54.075 --- 10.0.0.1 ping statistics --- 00:33:54.075 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:54.075 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:33:54.075 05:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:54.075 05:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:33:54.075 05:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:54.075 05:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:54.075 05:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:54.075 05:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:54.075 05:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:54.075 05:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:54.075 05:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:54.075 05:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:33:54.075 05:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:33:54.075 05:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:33:54.075 05:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:54.075 05:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:54.075 05:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:54.075 05:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:54.075 05:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:54.075 05:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:54.075 05:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:54.075 05:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:54.075 05:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:54.075 05:49:53 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:33:54.075 05:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:33:54.075 05:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:33:54.075 05:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:33:54.075 05:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:54.076 05:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:54.076 05:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:33:54.076 05:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:33:54.076 05:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:33:54.076 05:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:33:54.076 05:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:33:54.076 05:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:33:56.608 Waiting for block devices as requested 00:33:56.867 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:33:56.867 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:33:56.867 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:33:57.126 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:33:57.126 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:33:57.126 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:33:57.385 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:33:57.385 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:33:57.385 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:33:57.385 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:33:57.643 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:33:57.643 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:33:57.643 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:33:57.643 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:33:57.902 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:33:57.902 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:33:57.902 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:33:58.161 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:33:58.161 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:33:58.161 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:33:58.161 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:33:58.161 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:33:58.161 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 
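The mkdir/echo sequence a few lines below wires the kernel nvmet target together through configfs. xtrace does not show redirection targets, so this is a hedged reconstruction in which the attribute file names are assumed from the standard nvmet configfs layout, not read from this trace:

    # Subsystem: model string and open host access.
    echo "SPDK-nqn.2016-06.io.spdk:testnqn" > "$kernel_subsystem/attr_model"
    echo 1 > "$kernel_subsystem/attr_allow_any_host"
    # Namespace 1: back it with the scanned block device and enable it.
    echo /dev/nvme0n1 > "$kernel_namespace/device_path"
    echo 1 > "$kernel_namespace/enable"
    # Port 1: TCP listener on 10.0.0.1:4420, then link the subsystem in.
    echo 10.0.0.1 > "$kernel_port/addr_traddr"
    echo tcp > "$kernel_port/addr_trtype"
    echo 4420 > "$kernel_port/addr_trsvcid"
    echo ipv4 > "$kernel_port/addr_adrfam"
    ln -s "$kernel_subsystem" "$kernel_port/subsystems/"

Here $kernel_subsystem, $kernel_namespace and $kernel_port stand for the three directories the trace creates under /sys/kernel/config/nvmet, as set at nvmf/common.sh@663-665 above.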
00:33:58.161 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:33:58.161 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:33:58.161 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:33:58.161 No valid GPT data, bailing 00:33:58.161 05:49:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:33:58.161 05:49:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:33:58.161 05:49:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:33:58.161 05:49:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:33:58.161 05:49:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:33:58.161 05:49:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:58.161 05:49:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:58.161 05:49:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:33:58.161 05:49:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:33:58.161 05:49:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:33:58.161 05:49:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:33:58.161 05:49:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:33:58.161 05:49:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:33:58.161 05:49:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:33:58.161 05:49:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:33:58.161 05:49:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:33:58.161 05:49:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:33:58.161 05:49:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:33:58.161 00:33:58.161 Discovery Log Number of Records 2, Generation counter 2 00:33:58.161 =====Discovery Log Entry 0====== 00:33:58.161 trtype: tcp 00:33:58.161 adrfam: ipv4 00:33:58.161 subtype: current discovery subsystem 00:33:58.161 treq: not specified, sq flow control disable supported 00:33:58.161 portid: 1 00:33:58.161 trsvcid: 4420 00:33:58.161 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:33:58.161 traddr: 10.0.0.1 00:33:58.161 eflags: none 00:33:58.161 sectype: none 00:33:58.161 =====Discovery Log Entry 1====== 00:33:58.161 trtype: tcp 00:33:58.161 adrfam: ipv4 00:33:58.161 subtype: nvme subsystem 00:33:58.161 treq: not specified, sq flow control disable 
supported 00:33:58.161 portid: 1 00:33:58.161 trsvcid: 4420 00:33:58.161 subnqn: nqn.2016-06.io.spdk:testnqn 00:33:58.161 traddr: 10.0.0.1 00:33:58.161 eflags: none 00:33:58.161 sectype: none 00:33:58.161 05:49:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:33:58.161 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:33:58.421 ===================================================== 00:33:58.421 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:33:58.421 ===================================================== 00:33:58.421 Controller Capabilities/Features 00:33:58.421 ================================ 00:33:58.421 Vendor ID: 0000 00:33:58.421 Subsystem Vendor ID: 0000 00:33:58.421 Serial Number: 83f1375191d8f5569cbf 00:33:58.421 Model Number: Linux 00:33:58.421 Firmware Version: 6.8.9-20 00:33:58.421 Recommended Arb Burst: 0 00:33:58.421 IEEE OUI Identifier: 00 00 00 00:33:58.421 Multi-path I/O 00:33:58.421 May have multiple subsystem ports: No 00:33:58.421 May have multiple controllers: No 00:33:58.421 Associated with SR-IOV VF: No 00:33:58.421 Max Data Transfer Size: Unlimited 00:33:58.421 Max Number of Namespaces: 0 00:33:58.421 Max Number of I/O Queues: 1024 00:33:58.421 NVMe Specification Version (VS): 1.3 00:33:58.421 NVMe Specification Version (Identify): 1.3 00:33:58.421 Maximum Queue Entries: 1024 00:33:58.421 Contiguous Queues Required: No 00:33:58.421 Arbitration Mechanisms Supported 00:33:58.421 Weighted Round Robin: Not Supported 00:33:58.421 Vendor Specific: Not Supported 00:33:58.421 Reset Timeout: 7500 ms 00:33:58.421 Doorbell Stride: 4 bytes 00:33:58.421 NVM Subsystem Reset: Not Supported 00:33:58.421 Command Sets Supported 00:33:58.421 NVM Command Set: Supported 00:33:58.421 Boot Partition: Not Supported 00:33:58.421 Memory Page Size Minimum: 4096 bytes 00:33:58.421 Memory Page Size Maximum: 4096 bytes 00:33:58.421 Persistent Memory Region: Not Supported 00:33:58.421 Optional Asynchronous Events Supported 00:33:58.421 Namespace Attribute Notices: Not Supported 00:33:58.421 Firmware Activation Notices: Not Supported 00:33:58.421 ANA Change Notices: Not Supported 00:33:58.421 PLE Aggregate Log Change Notices: Not Supported 00:33:58.421 LBA Status Info Alert Notices: Not Supported 00:33:58.421 EGE Aggregate Log Change Notices: Not Supported 00:33:58.421 Normal NVM Subsystem Shutdown event: Not Supported 00:33:58.421 Zone Descriptor Change Notices: Not Supported 00:33:58.421 Discovery Log Change Notices: Supported 00:33:58.421 Controller Attributes 00:33:58.421 128-bit Host Identifier: Not Supported 00:33:58.421 Non-Operational Permissive Mode: Not Supported 00:33:58.421 NVM Sets: Not Supported 00:33:58.421 Read Recovery Levels: Not Supported 00:33:58.421 Endurance Groups: Not Supported 00:33:58.421 Predictable Latency Mode: Not Supported 00:33:58.421 Traffic Based Keep ALive: Not Supported 00:33:58.421 Namespace Granularity: Not Supported 00:33:58.422 SQ Associations: Not Supported 00:33:58.422 UUID List: Not Supported 00:33:58.422 Multi-Domain Subsystem: Not Supported 00:33:58.422 Fixed Capacity Management: Not Supported 00:33:58.422 Variable Capacity Management: Not Supported 00:33:58.422 Delete Endurance Group: Not Supported 00:33:58.422 Delete NVM Set: Not Supported 00:33:58.422 Extended LBA Formats Supported: Not Supported 00:33:58.422 Flexible Data Placement 
Supported: Not Supported 00:33:58.422 00:33:58.422 Controller Memory Buffer Support 00:33:58.422 ================================ 00:33:58.422 Supported: No 00:33:58.422 00:33:58.422 Persistent Memory Region Support 00:33:58.422 ================================ 00:33:58.422 Supported: No 00:33:58.422 00:33:58.422 Admin Command Set Attributes 00:33:58.422 ============================ 00:33:58.422 Security Send/Receive: Not Supported 00:33:58.422 Format NVM: Not Supported 00:33:58.422 Firmware Activate/Download: Not Supported 00:33:58.422 Namespace Management: Not Supported 00:33:58.422 Device Self-Test: Not Supported 00:33:58.422 Directives: Not Supported 00:33:58.422 NVMe-MI: Not Supported 00:33:58.422 Virtualization Management: Not Supported 00:33:58.422 Doorbell Buffer Config: Not Supported 00:33:58.422 Get LBA Status Capability: Not Supported 00:33:58.422 Command & Feature Lockdown Capability: Not Supported 00:33:58.422 Abort Command Limit: 1 00:33:58.422 Async Event Request Limit: 1 00:33:58.422 Number of Firmware Slots: N/A 00:33:58.422 Firmware Slot 1 Read-Only: N/A 00:33:58.422 Firmware Activation Without Reset: N/A 00:33:58.422 Multiple Update Detection Support: N/A 00:33:58.422 Firmware Update Granularity: No Information Provided 00:33:58.422 Per-Namespace SMART Log: No 00:33:58.422 Asymmetric Namespace Access Log Page: Not Supported 00:33:58.422 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:33:58.422 Command Effects Log Page: Not Supported 00:33:58.422 Get Log Page Extended Data: Supported 00:33:58.422 Telemetry Log Pages: Not Supported 00:33:58.422 Persistent Event Log Pages: Not Supported 00:33:58.422 Supported Log Pages Log Page: May Support 00:33:58.422 Commands Supported & Effects Log Page: Not Supported 00:33:58.422 Feature Identifiers & Effects Log Page:May Support 00:33:58.422 NVMe-MI Commands & Effects Log Page: May Support 00:33:58.422 Data Area 4 for Telemetry Log: Not Supported 00:33:58.422 Error Log Page Entries Supported: 1 00:33:58.422 Keep Alive: Not Supported 00:33:58.422 00:33:58.422 NVM Command Set Attributes 00:33:58.422 ========================== 00:33:58.422 Submission Queue Entry Size 00:33:58.422 Max: 1 00:33:58.422 Min: 1 00:33:58.422 Completion Queue Entry Size 00:33:58.422 Max: 1 00:33:58.422 Min: 1 00:33:58.422 Number of Namespaces: 0 00:33:58.422 Compare Command: Not Supported 00:33:58.422 Write Uncorrectable Command: Not Supported 00:33:58.422 Dataset Management Command: Not Supported 00:33:58.422 Write Zeroes Command: Not Supported 00:33:58.422 Set Features Save Field: Not Supported 00:33:58.422 Reservations: Not Supported 00:33:58.422 Timestamp: Not Supported 00:33:58.422 Copy: Not Supported 00:33:58.422 Volatile Write Cache: Not Present 00:33:58.422 Atomic Write Unit (Normal): 1 00:33:58.422 Atomic Write Unit (PFail): 1 00:33:58.422 Atomic Compare & Write Unit: 1 00:33:58.422 Fused Compare & Write: Not Supported 00:33:58.422 Scatter-Gather List 00:33:58.422 SGL Command Set: Supported 00:33:58.422 SGL Keyed: Not Supported 00:33:58.422 SGL Bit Bucket Descriptor: Not Supported 00:33:58.422 SGL Metadata Pointer: Not Supported 00:33:58.422 Oversized SGL: Not Supported 00:33:58.422 SGL Metadata Address: Not Supported 00:33:58.422 SGL Offset: Supported 00:33:58.422 Transport SGL Data Block: Not Supported 00:33:58.422 Replay Protected Memory Block: Not Supported 00:33:58.422 00:33:58.422 Firmware Slot Information 00:33:58.422 ========================= 00:33:58.422 Active slot: 0 00:33:58.422 00:33:58.422 00:33:58.422 Error Log 00:33:58.422 
========= 00:33:58.422 00:33:58.422 Active Namespaces 00:33:58.422 ================= 00:33:58.422 Discovery Log Page 00:33:58.422 ================== 00:33:58.422 Generation Counter: 2 00:33:58.422 Number of Records: 2 00:33:58.422 Record Format: 0 00:33:58.422 00:33:58.422 Discovery Log Entry 0 00:33:58.422 ---------------------- 00:33:58.422 Transport Type: 3 (TCP) 00:33:58.422 Address Family: 1 (IPv4) 00:33:58.422 Subsystem Type: 3 (Current Discovery Subsystem) 00:33:58.422 Entry Flags: 00:33:58.422 Duplicate Returned Information: 0 00:33:58.422 Explicit Persistent Connection Support for Discovery: 0 00:33:58.422 Transport Requirements: 00:33:58.422 Secure Channel: Not Specified 00:33:58.422 Port ID: 1 (0x0001) 00:33:58.422 Controller ID: 65535 (0xffff) 00:33:58.422 Admin Max SQ Size: 32 00:33:58.422 Transport Service Identifier: 4420 00:33:58.422 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:33:58.422 Transport Address: 10.0.0.1 00:33:58.422 Discovery Log Entry 1 00:33:58.422 ---------------------- 00:33:58.422 Transport Type: 3 (TCP) 00:33:58.422 Address Family: 1 (IPv4) 00:33:58.422 Subsystem Type: 2 (NVM Subsystem) 00:33:58.422 Entry Flags: 00:33:58.422 Duplicate Returned Information: 0 00:33:58.422 Explicit Persistent Connection Support for Discovery: 0 00:33:58.422 Transport Requirements: 00:33:58.422 Secure Channel: Not Specified 00:33:58.422 Port ID: 1 (0x0001) 00:33:58.422 Controller ID: 65535 (0xffff) 00:33:58.422 Admin Max SQ Size: 32 00:33:58.422 Transport Service Identifier: 4420 00:33:58.422 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:33:58.422 Transport Address: 10.0.0.1 00:33:58.422 05:49:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:58.422 get_feature(0x01) failed 00:33:58.422 get_feature(0x02) failed 00:33:58.422 get_feature(0x04) failed 00:33:58.422 ===================================================== 00:33:58.422 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:33:58.422 ===================================================== 00:33:58.422 Controller Capabilities/Features 00:33:58.422 ================================ 00:33:58.422 Vendor ID: 0000 00:33:58.422 Subsystem Vendor ID: 0000 00:33:58.422 Serial Number: 5592fcca18a0cb539c01 00:33:58.422 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:33:58.422 Firmware Version: 6.8.9-20 00:33:58.422 Recommended Arb Burst: 6 00:33:58.422 IEEE OUI Identifier: 00 00 00 00:33:58.422 Multi-path I/O 00:33:58.422 May have multiple subsystem ports: Yes 00:33:58.422 May have multiple controllers: Yes 00:33:58.422 Associated with SR-IOV VF: No 00:33:58.422 Max Data Transfer Size: Unlimited 00:33:58.422 Max Number of Namespaces: 1024 00:33:58.422 Max Number of I/O Queues: 128 00:33:58.422 NVMe Specification Version (VS): 1.3 00:33:58.422 NVMe Specification Version (Identify): 1.3 00:33:58.422 Maximum Queue Entries: 1024 00:33:58.422 Contiguous Queues Required: No 00:33:58.422 Arbitration Mechanisms Supported 00:33:58.422 Weighted Round Robin: Not Supported 00:33:58.422 Vendor Specific: Not Supported 00:33:58.422 Reset Timeout: 7500 ms 00:33:58.422 Doorbell Stride: 4 bytes 00:33:58.422 NVM Subsystem Reset: Not Supported 00:33:58.422 Command Sets Supported 00:33:58.422 NVM Command Set: Supported 00:33:58.422 Boot Partition: Not Supported 00:33:58.422 
Memory Page Size Minimum: 4096 bytes 00:33:58.422 Memory Page Size Maximum: 4096 bytes 00:33:58.422 Persistent Memory Region: Not Supported 00:33:58.422 Optional Asynchronous Events Supported 00:33:58.422 Namespace Attribute Notices: Supported 00:33:58.422 Firmware Activation Notices: Not Supported 00:33:58.422 ANA Change Notices: Supported 00:33:58.422 PLE Aggregate Log Change Notices: Not Supported 00:33:58.422 LBA Status Info Alert Notices: Not Supported 00:33:58.422 EGE Aggregate Log Change Notices: Not Supported 00:33:58.422 Normal NVM Subsystem Shutdown event: Not Supported 00:33:58.422 Zone Descriptor Change Notices: Not Supported 00:33:58.422 Discovery Log Change Notices: Not Supported 00:33:58.422 Controller Attributes 00:33:58.422 128-bit Host Identifier: Supported 00:33:58.422 Non-Operational Permissive Mode: Not Supported 00:33:58.422 NVM Sets: Not Supported 00:33:58.422 Read Recovery Levels: Not Supported 00:33:58.422 Endurance Groups: Not Supported 00:33:58.422 Predictable Latency Mode: Not Supported 00:33:58.422 Traffic Based Keep ALive: Supported 00:33:58.422 Namespace Granularity: Not Supported 00:33:58.422 SQ Associations: Not Supported 00:33:58.422 UUID List: Not Supported 00:33:58.422 Multi-Domain Subsystem: Not Supported 00:33:58.422 Fixed Capacity Management: Not Supported 00:33:58.422 Variable Capacity Management: Not Supported 00:33:58.422 Delete Endurance Group: Not Supported 00:33:58.422 Delete NVM Set: Not Supported 00:33:58.422 Extended LBA Formats Supported: Not Supported 00:33:58.423 Flexible Data Placement Supported: Not Supported 00:33:58.423 00:33:58.423 Controller Memory Buffer Support 00:33:58.423 ================================ 00:33:58.423 Supported: No 00:33:58.423 00:33:58.423 Persistent Memory Region Support 00:33:58.423 ================================ 00:33:58.423 Supported: No 00:33:58.423 00:33:58.423 Admin Command Set Attributes 00:33:58.423 ============================ 00:33:58.423 Security Send/Receive: Not Supported 00:33:58.423 Format NVM: Not Supported 00:33:58.423 Firmware Activate/Download: Not Supported 00:33:58.423 Namespace Management: Not Supported 00:33:58.423 Device Self-Test: Not Supported 00:33:58.423 Directives: Not Supported 00:33:58.423 NVMe-MI: Not Supported 00:33:58.423 Virtualization Management: Not Supported 00:33:58.423 Doorbell Buffer Config: Not Supported 00:33:58.423 Get LBA Status Capability: Not Supported 00:33:58.423 Command & Feature Lockdown Capability: Not Supported 00:33:58.423 Abort Command Limit: 4 00:33:58.423 Async Event Request Limit: 4 00:33:58.423 Number of Firmware Slots: N/A 00:33:58.423 Firmware Slot 1 Read-Only: N/A 00:33:58.423 Firmware Activation Without Reset: N/A 00:33:58.423 Multiple Update Detection Support: N/A 00:33:58.423 Firmware Update Granularity: No Information Provided 00:33:58.423 Per-Namespace SMART Log: Yes 00:33:58.423 Asymmetric Namespace Access Log Page: Supported 00:33:58.423 ANA Transition Time : 10 sec 00:33:58.423 00:33:58.423 Asymmetric Namespace Access Capabilities 00:33:58.423 ANA Optimized State : Supported 00:33:58.423 ANA Non-Optimized State : Supported 00:33:58.423 ANA Inaccessible State : Supported 00:33:58.423 ANA Persistent Loss State : Supported 00:33:58.423 ANA Change State : Supported 00:33:58.423 ANAGRPID is not changed : No 00:33:58.423 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:33:58.423 00:33:58.423 ANA Group Identifier Maximum : 128 00:33:58.423 Number of ANA Group Identifiers : 128 00:33:58.423 Max Number of Allowed Namespaces : 1024 00:33:58.423 
Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:33:58.423 Command Effects Log Page: Supported 00:33:58.423 Get Log Page Extended Data: Supported 00:33:58.423 Telemetry Log Pages: Not Supported 00:33:58.423 Persistent Event Log Pages: Not Supported 00:33:58.423 Supported Log Pages Log Page: May Support 00:33:58.423 Commands Supported & Effects Log Page: Not Supported 00:33:58.423 Feature Identifiers & Effects Log Page:May Support 00:33:58.423 NVMe-MI Commands & Effects Log Page: May Support 00:33:58.423 Data Area 4 for Telemetry Log: Not Supported 00:33:58.423 Error Log Page Entries Supported: 128 00:33:58.423 Keep Alive: Supported 00:33:58.423 Keep Alive Granularity: 1000 ms 00:33:58.423 00:33:58.423 NVM Command Set Attributes 00:33:58.423 ========================== 00:33:58.423 Submission Queue Entry Size 00:33:58.423 Max: 64 00:33:58.423 Min: 64 00:33:58.423 Completion Queue Entry Size 00:33:58.423 Max: 16 00:33:58.423 Min: 16 00:33:58.423 Number of Namespaces: 1024 00:33:58.423 Compare Command: Not Supported 00:33:58.423 Write Uncorrectable Command: Not Supported 00:33:58.423 Dataset Management Command: Supported 00:33:58.423 Write Zeroes Command: Supported 00:33:58.423 Set Features Save Field: Not Supported 00:33:58.423 Reservations: Not Supported 00:33:58.423 Timestamp: Not Supported 00:33:58.423 Copy: Not Supported 00:33:58.423 Volatile Write Cache: Present 00:33:58.423 Atomic Write Unit (Normal): 1 00:33:58.423 Atomic Write Unit (PFail): 1 00:33:58.423 Atomic Compare & Write Unit: 1 00:33:58.423 Fused Compare & Write: Not Supported 00:33:58.423 Scatter-Gather List 00:33:58.423 SGL Command Set: Supported 00:33:58.423 SGL Keyed: Not Supported 00:33:58.423 SGL Bit Bucket Descriptor: Not Supported 00:33:58.423 SGL Metadata Pointer: Not Supported 00:33:58.423 Oversized SGL: Not Supported 00:33:58.423 SGL Metadata Address: Not Supported 00:33:58.423 SGL Offset: Supported 00:33:58.423 Transport SGL Data Block: Not Supported 00:33:58.423 Replay Protected Memory Block: Not Supported 00:33:58.423 00:33:58.423 Firmware Slot Information 00:33:58.423 ========================= 00:33:58.423 Active slot: 0 00:33:58.423 00:33:58.423 Asymmetric Namespace Access 00:33:58.423 =========================== 00:33:58.423 Change Count : 0 00:33:58.423 Number of ANA Group Descriptors : 1 00:33:58.423 ANA Group Descriptor : 0 00:33:58.423 ANA Group ID : 1 00:33:58.423 Number of NSID Values : 1 00:33:58.423 Change Count : 0 00:33:58.423 ANA State : 1 00:33:58.423 Namespace Identifier : 1 00:33:58.423 00:33:58.423 Commands Supported and Effects 00:33:58.423 ============================== 00:33:58.423 Admin Commands 00:33:58.423 -------------- 00:33:58.423 Get Log Page (02h): Supported 00:33:58.423 Identify (06h): Supported 00:33:58.423 Abort (08h): Supported 00:33:58.423 Set Features (09h): Supported 00:33:58.423 Get Features (0Ah): Supported 00:33:58.423 Asynchronous Event Request (0Ch): Supported 00:33:58.423 Keep Alive (18h): Supported 00:33:58.423 I/O Commands 00:33:58.423 ------------ 00:33:58.423 Flush (00h): Supported 00:33:58.423 Write (01h): Supported LBA-Change 00:33:58.423 Read (02h): Supported 00:33:58.423 Write Zeroes (08h): Supported LBA-Change 00:33:58.423 Dataset Management (09h): Supported 00:33:58.423 00:33:58.423 Error Log 00:33:58.423 ========= 00:33:58.423 Entry: 0 00:33:58.423 Error Count: 0x3 00:33:58.423 Submission Queue Id: 0x0 00:33:58.423 Command Id: 0x5 00:33:58.423 Phase Bit: 0 00:33:58.423 Status Code: 0x2 00:33:58.423 Status Code Type: 0x0 00:33:58.423 Do Not Retry: 1 00:33:58.423 
Error Location: 0x28 00:33:58.423 LBA: 0x0 00:33:58.423 Namespace: 0x0 00:33:58.423 Vendor Log Page: 0x0 00:33:58.423 ----------- 00:33:58.423 Entry: 1 00:33:58.423 Error Count: 0x2 00:33:58.423 Submission Queue Id: 0x0 00:33:58.423 Command Id: 0x5 00:33:58.423 Phase Bit: 0 00:33:58.423 Status Code: 0x2 00:33:58.423 Status Code Type: 0x0 00:33:58.423 Do Not Retry: 1 00:33:58.423 Error Location: 0x28 00:33:58.423 LBA: 0x0 00:33:58.423 Namespace: 0x0 00:33:58.423 Vendor Log Page: 0x0 00:33:58.423 ----------- 00:33:58.423 Entry: 2 00:33:58.423 Error Count: 0x1 00:33:58.423 Submission Queue Id: 0x0 00:33:58.423 Command Id: 0x4 00:33:58.423 Phase Bit: 0 00:33:58.423 Status Code: 0x2 00:33:58.423 Status Code Type: 0x0 00:33:58.423 Do Not Retry: 1 00:33:58.423 Error Location: 0x28 00:33:58.423 LBA: 0x0 00:33:58.423 Namespace: 0x0 00:33:58.423 Vendor Log Page: 0x0 00:33:58.423 00:33:58.423 Number of Queues 00:33:58.423 ================ 00:33:58.423 Number of I/O Submission Queues: 128 00:33:58.423 Number of I/O Completion Queues: 128 00:33:58.423 00:33:58.423 ZNS Specific Controller Data 00:33:58.423 ============================ 00:33:58.423 Zone Append Size Limit: 0 00:33:58.423 00:33:58.423 00:33:58.423 Active Namespaces 00:33:58.423 ================= 00:33:58.423 get_feature(0x05) failed 00:33:58.423 Namespace ID:1 00:33:58.423 Command Set Identifier: NVM (00h) 00:33:58.423 Deallocate: Supported 00:33:58.423 Deallocated/Unwritten Error: Not Supported 00:33:58.423 Deallocated Read Value: Unknown 00:33:58.423 Deallocate in Write Zeroes: Not Supported 00:33:58.423 Deallocated Guard Field: 0xFFFF 00:33:58.423 Flush: Supported 00:33:58.423 Reservation: Not Supported 00:33:58.423 Namespace Sharing Capabilities: Multiple Controllers 00:33:58.423 Size (in LBAs): 1953525168 (931GiB) 00:33:58.423 Capacity (in LBAs): 1953525168 (931GiB) 00:33:58.423 Utilization (in LBAs): 1953525168 (931GiB) 00:33:58.423 UUID: 8ba984c7-28a5-4c99-b88c-739596b19f39 00:33:58.423 Thin Provisioning: Not Supported 00:33:58.423 Per-NS Atomic Units: Yes 00:33:58.423 Atomic Boundary Size (Normal): 0 00:33:58.423 Atomic Boundary Size (PFail): 0 00:33:58.423 Atomic Boundary Offset: 0 00:33:58.423 NGUID/EUI64 Never Reused: No 00:33:58.423 ANA group ID: 1 00:33:58.423 Namespace Write Protected: No 00:33:58.423 Number of LBA Formats: 1 00:33:58.423 Current LBA Format: LBA Format #00 00:33:58.423 LBA Format #00: Data Size: 512 Metadata Size: 0 00:33:58.423 00:33:58.423 05:49:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:33:58.423 05:49:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:58.423 05:49:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:33:58.423 05:49:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:58.423 05:49:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:33:58.423 05:49:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:58.423 05:49:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:58.423 rmmod nvme_tcp 00:33:58.423 rmmod nvme_fabrics 00:33:58.423 05:49:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:58.423 05:49:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:33:58.424 05:49:58 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:33:58.424 05:49:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:33:58.424 05:49:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:58.424 05:49:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:58.424 05:49:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:58.424 05:49:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:33:58.424 05:49:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:33:58.424 05:49:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:58.424 05:49:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:33:58.424 05:49:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:58.424 05:49:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:58.424 05:49:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:58.424 05:49:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:58.424 05:49:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:00.957 05:50:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:00.957 05:50:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:34:00.958 05:50:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:34:00.958 05:50:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:34:00.958 05:50:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:00.958 05:50:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:00.958 05:50:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:34:00.958 05:50:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:00.958 05:50:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:34:00.958 05:50:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:34:00.958 05:50:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:03.491 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:03.491 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:03.491 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:03.491 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:03.491 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:03.491 0000:00:04.2 
(8086 2021): ioatdma -> vfio-pci 00:34:03.491 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:34:03.491 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:03.491 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:03.491 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:03.491 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:03.491 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:03.491 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:03.491 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:03.491 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:34:03.491 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:04.427 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:34:04.427 00:34:04.427 real 0m16.302s 00:34:04.427 user 0m4.259s 00:34:04.427 sys 0m8.465s 00:34:04.427 05:50:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:04.427 05:50:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:34:04.427 ************************************ 00:34:04.427 END TEST nvmf_identify_kernel_target 00:34:04.427 ************************************ 00:34:04.427 05:50:04 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:34:04.427 05:50:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:04.427 05:50:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:04.427 05:50:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.427 ************************************ 00:34:04.427 START TEST nvmf_auth_host 00:34:04.427 ************************************ 00:34:04.427 05:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:34:04.687 * Looking for test storage... 
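
Before the auth test output proper, the two probe commands the identify test above exercised are worth pulling out; both invocations are taken from the trace and can be replayed by hand while the kernel target is configured:

    nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 \
        --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420
    # identify the discovery subsystem, then the data subsystem:
    spdk_nvme_identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery'
    spdk_nvme_identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'

The get_feature(0x01/0x02/0x04/0x05) failures printed during the second identify appear to be the tool probing optional features the kernel target does not implement; the test tolerates them and still passed above.
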
00:34:04.687 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:04.687 05:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:04.687 05:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lcov --version 00:34:04.687 05:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:04.687 05:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:04.687 05:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:04.687 05:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:04.687 05:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:04.687 05:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:34:04.687 05:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:34:04.687 05:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:34:04.687 05:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:34:04.687 05:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:34:04.687 05:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:34:04.687 05:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:34:04.687 05:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:04.687 05:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:34:04.687 05:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:34:04.687 05:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:04.687 05:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:04.687 05:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:34:04.687 05:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:34:04.687 05:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:04.687 05:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:34:04.687 05:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:34:04.687 05:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:34:04.687 05:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:34:04.687 05:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:04.687 05:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:34:04.687 05:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:34:04.687 05:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:04.687 05:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:04.687 05:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:34:04.687 05:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:04.687 05:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:04.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:04.687 --rc genhtml_branch_coverage=1 00:34:04.687 --rc genhtml_function_coverage=1 00:34:04.687 --rc genhtml_legend=1 00:34:04.687 --rc geninfo_all_blocks=1 00:34:04.687 --rc geninfo_unexecuted_blocks=1 00:34:04.687 00:34:04.687 ' 00:34:04.687 05:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:04.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:04.687 --rc genhtml_branch_coverage=1 00:34:04.687 --rc genhtml_function_coverage=1 00:34:04.687 --rc genhtml_legend=1 00:34:04.687 --rc geninfo_all_blocks=1 00:34:04.687 --rc geninfo_unexecuted_blocks=1 00:34:04.687 00:34:04.687 ' 00:34:04.687 05:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:04.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:04.687 --rc genhtml_branch_coverage=1 00:34:04.687 --rc genhtml_function_coverage=1 00:34:04.687 --rc genhtml_legend=1 00:34:04.687 --rc geninfo_all_blocks=1 00:34:04.687 --rc geninfo_unexecuted_blocks=1 00:34:04.687 00:34:04.687 ' 00:34:04.687 05:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:04.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:04.687 --rc genhtml_branch_coverage=1 00:34:04.687 --rc genhtml_function_coverage=1 00:34:04.687 --rc genhtml_legend=1 00:34:04.687 --rc geninfo_all_blocks=1 00:34:04.687 --rc geninfo_unexecuted_blocks=1 00:34:04.687 00:34:04.687 ' 00:34:04.687 05:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:04.687 05:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:34:04.687 05:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:04.687 05:50:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:04.687 05:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:04.687 05:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:04.687 05:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:04.687 05:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:04.687 05:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:04.687 05:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:04.687 05:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:04.687 05:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:04.688 05:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:34:04.688 05:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:34:04.688 05:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:04.688 05:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:04.688 05:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:04.688 05:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:04.688 05:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:04.688 05:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:34:04.688 05:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:04.688 05:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:04.688 05:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:04.688 05:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:04.688 05:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:04.688 05:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:04.688 05:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:34:04.688 05:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:04.688 05:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:34:04.688 05:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:04.688 05:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:04.688 05:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:04.688 05:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:04.688 05:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:04.688 05:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:04.688 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:04.688 05:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:04.688 05:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:04.688 05:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:04.688 05:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:34:04.688 05:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:34:04.688 05:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:34:04.688 05:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:34:04.688 05:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:04.688 05:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:34:04.688 05:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:34:04.688 05:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:34:04.688 05:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:34:04.688 05:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:04.688 05:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:04.688 05:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:04.688 05:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:04.688 05:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:04.688 05:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:04.688 05:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:04.688 05:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:04.688 05:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:04.688 05:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:04.688 05:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:34:04.688 05:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.253 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:11.253 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:34:11.253 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:11.253 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:11.253 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:11.253 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:11.253 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:11.253 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:34:11.253 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:11.253 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:34:11.253 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:34:11.253 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:34:11.253 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:34:11.253 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:34:11.253 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:34:11.253 05:50:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:11.253 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:11.253 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:11.253 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:11.253 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:11.253 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:11.253 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:11.253 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:11.253 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:11.253 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:11.253 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:11.253 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:11.253 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:11.254 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:11.254 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:11.254 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:11.254 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:11.254 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:11.254 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:11.254 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:34:11.254 Found 0000:af:00.0 (0x8086 - 0x159b) 00:34:11.254 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:11.254 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:11.254 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:11.254 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:11.254 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:11.254 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:11.254 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:34:11.254 Found 0000:af:00.1 (0x8086 - 0x159b) 00:34:11.254 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:11.254 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:11.254 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:11.254 
05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:11.254 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:11.254 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:11.254 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:11.254 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:11.254 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:11.254 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:11.254 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:11.254 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:11.254 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:11.254 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:11.254 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:11.254 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:34:11.254 Found net devices under 0000:af:00.0: cvl_0_0 00:34:11.254 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:11.254 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:11.254 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:11.254 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:11.254 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:11.254 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:11.254 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:11.254 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:11.254 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:34:11.254 Found net devices under 0000:af:00.1: cvl_0_1 00:34:11.254 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:11.254 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:11.254 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:34:11.254 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:11.254 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:11.254 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:11.254 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:11.254 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:11.254 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:11.254 05:50:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:11.254 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:11.254 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:11.254 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:11.254 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:11.254 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:11.254 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:11.254 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:11.254 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:11.254 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:11.254 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:11.254 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:11.254 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:11.254 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:11.254 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:11.254 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:11.254 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:11.254 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:11.254 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:11.254 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:11.254 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:11.254 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.351 ms 00:34:11.254 00:34:11.254 --- 10.0.0.2 ping statistics --- 00:34:11.254 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:11.254 rtt min/avg/max/mdev = 0.351/0.351/0.351/0.000 ms 00:34:11.254 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:11.254 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:11.254 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.185 ms 00:34:11.254 00:34:11.254 --- 10.0.0.1 ping statistics --- 00:34:11.254 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:11.254 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:34:11.254 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:11.254 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:34:11.254 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:11.254 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:11.254 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:11.254 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:11.254 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:11.254 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:11.254 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:11.254 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:34:11.254 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:11.254 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:11.254 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.254 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=513422 00:34:11.254 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:34:11.254 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 513422 00:34:11.254 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 513422 ']' 00:34:11.254 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:11.254 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:11.254 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
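
nvmfappstart above amounts to booting the SPDK target inside the target namespace and blocking until its RPC socket answers. Condensed from the trace; the polling loop is an illustrative stand-in for SPDK's waitforlisten helper, not its actual body:

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -L nvme_auth &
    nvmfpid=$!
    # stand-in for waitforlisten: poll the default RPC socket until the app responds
    until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
          -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done
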
00:34:11.254 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:11.254 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.254 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:11.254 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:34:11.254 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:11.254 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:11.254 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.254 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:11.254 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:34:11.254 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:34:11.254 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:11.254 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:11.254 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:11.254 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:34:11.254 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:34:11.254 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:34:11.255 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=81f3ce64609e004bac3e685700ef2672 00:34:11.255 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:34:11.255 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.HMA 00:34:11.255 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 81f3ce64609e004bac3e685700ef2672 0 00:34:11.255 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 81f3ce64609e004bac3e685700ef2672 0 00:34:11.255 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:11.255 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:11.255 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=81f3ce64609e004bac3e685700ef2672 00:34:11.255 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:34:11.255 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:11.255 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.HMA 00:34:11.255 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.HMA 00:34:11.255 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.HMA 00:34:11.255 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:34:11.255 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:11.255 05:50:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:11.255 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:11.255 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:34:11.255 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:34:11.255 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:34:11.255 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=f173972b69bfbdb6ce32a4345eec05ad3072732eddf6b7f0436e53eeee8c0fea 00:34:11.255 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:34:11.255 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.btA 00:34:11.255 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key f173972b69bfbdb6ce32a4345eec05ad3072732eddf6b7f0436e53eeee8c0fea 3 00:34:11.255 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 f173972b69bfbdb6ce32a4345eec05ad3072732eddf6b7f0436e53eeee8c0fea 3 00:34:11.255 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:11.255 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:11.255 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=f173972b69bfbdb6ce32a4345eec05ad3072732eddf6b7f0436e53eeee8c0fea 00:34:11.255 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:34:11.255 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:11.255 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.btA 00:34:11.255 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.btA 00:34:11.255 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.btA 00:34:11.255 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:34:11.255 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:11.255 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:11.255 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:11.255 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:34:11.255 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:34:11.255 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:34:11.255 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=8ce35f75a449ea7e41f9fc9ca6c731fb42fe01a5ac4f7e8b 00:34:11.255 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:34:11.255 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.fbK 00:34:11.255 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 8ce35f75a449ea7e41f9fc9ca6c731fb42fe01a5ac4f7e8b 0 00:34:11.255 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 8ce35f75a449ea7e41f9fc9ca6c731fb42fe01a5ac4f7e8b 0 
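
Each gen_dhchap_key <digest> <len> call traced here draws len/2 random bytes from /dev/urandom via xxd (so "null 32" yields a 32-character hex string), writes the formatted secret to a mode-0600 temp file, and collects the paths into keys[]/ckeys[]. The secret follows the DH-HMAC-CHAP representation DHHC-1:<hash>:<base64>:, where <hash> 0/1/2/3 selects none/SHA-256/SHA-384/SHA-512; judging from the keys echoed later in this log, the base64 payload is the ASCII key plus a 4-byte trailer, assumed here to be the little-endian CRC-32 of the key (the convention nvme-cli's gen-dhchap-key uses). A hedged standalone equivalent; the inline Python is illustrative, not SPDK's formatter:

  key=$(xxd -p -c0 -l 16 /dev/urandom)   # 32 hex chars, as in "gen_dhchap_key null 32"
  file=$(mktemp -t spdk.key-null.XXX)
  python3 - "$key" > "$file" <<'PY'
import base64, binascii, struct, sys
key = sys.argv[1].encode()                    # the ASCII hex string is the key material
crc = struct.pack('<I', binascii.crc32(key))  # 4-byte trailer, assumed CRC-32
print('DHHC-1:00:' + base64.b64encode(key + crc).decode() + ':')
PY
  chmod 0600 "$file"
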
00:34:11.255 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:11.255 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:11.255 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=8ce35f75a449ea7e41f9fc9ca6c731fb42fe01a5ac4f7e8b 00:34:11.255 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:34:11.255 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:11.255 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.fbK 00:34:11.255 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.fbK 00:34:11.255 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.fbK 00:34:11.255 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:34:11.255 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:11.255 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:11.255 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:11.255 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:34:11.255 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:34:11.255 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:34:11.255 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=2ec3aa8733bbe90e75013672d8458d4906b467e2df2838a0 00:34:11.255 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:34:11.255 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.yD3 00:34:11.255 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 2ec3aa8733bbe90e75013672d8458d4906b467e2df2838a0 2 00:34:11.255 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 2ec3aa8733bbe90e75013672d8458d4906b467e2df2838a0 2 00:34:11.255 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:11.255 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:11.255 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=2ec3aa8733bbe90e75013672d8458d4906b467e2df2838a0 00:34:11.255 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:34:11.255 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:11.255 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.yD3 00:34:11.255 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.yD3 00:34:11.255 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.yD3 00:34:11.255 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:34:11.255 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:11.255 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:11.255 05:50:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:11.255 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:34:11.255 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:34:11.255 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:34:11.255 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=9646ec6699183beb40bcc8459cad2a08 00:34:11.255 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:34:11.255 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.4cf 00:34:11.255 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 9646ec6699183beb40bcc8459cad2a08 1 00:34:11.255 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 9646ec6699183beb40bcc8459cad2a08 1 00:34:11.255 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:11.255 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:11.255 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=9646ec6699183beb40bcc8459cad2a08 00:34:11.255 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:34:11.255 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:11.255 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.4cf 00:34:11.255 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.4cf 00:34:11.255 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.4cf 00:34:11.255 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:34:11.255 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:11.255 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:11.255 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:11.255 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:34:11.255 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:34:11.255 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:34:11.255 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=f2d6f420e9420220cd80ce0a199a5617 00:34:11.255 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:34:11.255 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.mdj 00:34:11.255 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key f2d6f420e9420220cd80ce0a199a5617 1 00:34:11.255 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 f2d6f420e9420220cd80ce0a199a5617 1 00:34:11.255 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:11.255 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:11.255 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=f2d6f420e9420220cd80ce0a199a5617 00:34:11.255 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:34:11.255 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:11.255 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.mdj 00:34:11.255 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.mdj 00:34:11.255 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.mdj 00:34:11.255 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:34:11.255 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:11.255 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:11.256 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:11.256 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:34:11.256 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:34:11.256 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:34:11.256 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=3188e4b2ba8cf2ec1b7001e47b3bc8f4c7f01a723ca63b5b 00:34:11.256 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:34:11.256 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.vvy 00:34:11.256 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 3188e4b2ba8cf2ec1b7001e47b3bc8f4c7f01a723ca63b5b 2 00:34:11.256 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 3188e4b2ba8cf2ec1b7001e47b3bc8f4c7f01a723ca63b5b 2 00:34:11.256 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:11.256 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:11.256 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=3188e4b2ba8cf2ec1b7001e47b3bc8f4c7f01a723ca63b5b 00:34:11.256 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:34:11.256 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:11.256 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.vvy 00:34:11.256 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.vvy 00:34:11.256 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.vvy 00:34:11.256 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:34:11.256 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:11.256 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:11.256 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:11.256 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:34:11.256 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:34:11.256 05:50:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:34:11.256 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=4040d59b4db00d9e9d0670bd84274bfc 00:34:11.256 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:34:11.256 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.7wu 00:34:11.256 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 4040d59b4db00d9e9d0670bd84274bfc 0 00:34:11.256 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 4040d59b4db00d9e9d0670bd84274bfc 0 00:34:11.256 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:11.256 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:11.256 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=4040d59b4db00d9e9d0670bd84274bfc 00:34:11.256 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:34:11.256 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:11.256 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.7wu 00:34:11.256 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.7wu 00:34:11.256 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.7wu 00:34:11.256 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:34:11.256 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:11.256 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:11.256 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:11.256 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:34:11.256 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:34:11.256 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:34:11.256 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=926805f32b4e6238894f22980716adc0095c3dd815148fd27d5f844704d2fb4a 00:34:11.256 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:34:11.256 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.Fx3 00:34:11.256 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 926805f32b4e6238894f22980716adc0095c3dd815148fd27d5f844704d2fb4a 3 00:34:11.256 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 926805f32b4e6238894f22980716adc0095c3dd815148fd27d5f844704d2fb4a 3 00:34:11.256 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:11.256 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:11.256 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=926805f32b4e6238894f22980716adc0095c3dd815148fd27d5f844704d2fb4a 00:34:11.256 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:34:11.256 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:34:11.515 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.Fx3 00:34:11.515 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.Fx3 00:34:11.515 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.Fx3 00:34:11.515 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:34:11.515 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 513422 00:34:11.515 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 513422 ']' 00:34:11.515 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:11.515 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:11.515 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:11.515 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:11.515 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:11.515 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.515 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:11.515 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:34:11.515 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:11.515 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.HMA 00:34:11.515 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.515 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.515 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.515 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.btA ]] 00:34:11.515 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.btA 00:34:11.515 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.515 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.515 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.515 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:11.515 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.fbK 00:34:11.515 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.515 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.515 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.515 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.yD3 ]] 00:34:11.515 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.yD3 00:34:11.515 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.515 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.515 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.516 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:11.516 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.4cf 00:34:11.516 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.516 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.516 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.516 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.mdj ]] 00:34:11.516 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.mdj 00:34:11.516 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.516 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.775 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.775 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:11.775 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.vvy 00:34:11.775 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.775 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.775 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.775 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.7wu ]] 00:34:11.775 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.7wu 00:34:11.775 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.775 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.775 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.775 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:11.775 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.Fx3 00:34:11.775 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.775 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.775 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.775 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:34:11.775 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:34:11.775 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:34:11.775 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:11.775 05:50:11 
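
Once nvmf_tgt is listening, the loop above registers every generated secret with SPDK's file-based keyring, pairing key<N> (the host key) with ckey<N> (the controller key used for bidirectional authentication); these names are what the later --dhchap-key/--dhchap-ctrlr-key arguments refer to. Issued by hand via scripts/rpc.py, with the temp-file paths from this particular run, the same registration looks like:

  ./scripts/rpc.py keyring_file_add_key key0  /tmp/spdk.key-null.HMA
  ./scripts/rpc.py keyring_file_add_key ckey0 /tmp/spdk.key-sha512.btA
  ./scripts/rpc.py keyring_file_add_key key1  /tmp/spdk.key-null.fbK
  ./scripts/rpc.py keyring_file_add_key ckey1 /tmp/spdk.key-sha384.yD3
  # ...and likewise key2/ckey2, key3/ckey3, key4 (key4 has no controller key)
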
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:11.775 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:11.775 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:11.775 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:11.775 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:11.775 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:11.775 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:11.775 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:11.775 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:11.775 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:34:11.775 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:34:11.775 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:34:11.775 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:11.775 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:34:11.775 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:34:11.775 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:34:11.775 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:34:11.775 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:34:11.775 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:34:11.775 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:14.308 Waiting for block devices as requested 00:34:14.308 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:34:14.566 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:14.566 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:14.566 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:14.566 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:14.825 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:14.825 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:14.825 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:14.825 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:15.084 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:15.084 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:15.084 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:15.342 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:15.342 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:15.342 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:15.342 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:15.600 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:16.167 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:34:16.167 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:34:16.167 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:34:16.167 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:34:16.167 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:34:16.167 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:34:16.167 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:34:16.167 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:34:16.167 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:34:16.167 No valid GPT data, bailing 00:34:16.167 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:34:16.167 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:34:16.167 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:34:16.167 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:34:16.167 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:34:16.167 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:16.167 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:34:16.167 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:34:16.167 05:50:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:34:16.168 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:34:16.168 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:34:16.168 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:34:16.168 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:34:16.168 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:34:16.168 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:34:16.168 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:34:16.168 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:34:16.168 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:34:16.168 00:34:16.168 Discovery Log Number of Records 2, Generation counter 2 00:34:16.168 =====Discovery Log Entry 0====== 00:34:16.168 trtype: tcp 00:34:16.168 adrfam: ipv4 00:34:16.168 subtype: current discovery subsystem 00:34:16.168 treq: not specified, sq flow control disable supported 00:34:16.168 portid: 1 00:34:16.168 trsvcid: 4420 00:34:16.168 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:34:16.168 traddr: 10.0.0.1 00:34:16.168 eflags: none 00:34:16.168 sectype: none 00:34:16.168 =====Discovery Log Entry 1====== 00:34:16.168 trtype: tcp 00:34:16.168 adrfam: ipv4 00:34:16.168 subtype: nvme subsystem 00:34:16.168 treq: not specified, sq flow control disable supported 00:34:16.168 portid: 1 00:34:16.168 trsvcid: 4420 00:34:16.168 subnqn: nqn.2024-02.io.spdk:cnode0 00:34:16.168 traddr: 10.0.0.1 00:34:16.168 eflags: none 00:34:16.168 sectype: none 00:34:16.168 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:34:16.168 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:34:16.168 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:34:16.168 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:34:16.168 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:16.168 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:16.168 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:16.168 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:16.168 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGNlMzVmNzVhNDQ5ZWE3ZTQxZjlmYzljYTZjNzMxZmI0MmZlMDFhNWFjNGY3ZThiZxhlrg==: 00:34:16.168 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmVjM2FhODczM2JiZTkwZTc1MDEzNjcyZDg0NThkNDkwNmI0NjdlMmRmMjgzOGEwB4HHQQ==: 00:34:16.168 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:16.168 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host 
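
The kernel target built above is the authentication peer: configure_kernel_target creates an nvmet subsystem whose single namespace is backed by the probed /dev/nvme0n1, exposes it on a TCP port at 10.0.0.1:4420 (confirmed by the discovery log), and then restricts access to nqn.2024-02.io.spdk:host0. The trace shows only bare mkdir/echo/ln commands; mapped onto the usual nvmet configfs attribute names (inferred here, not printed by the trace), the sequence is approximately:

  cd /sys/kernel/config/nvmet
  mkdir -p subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 ports/1
  echo /dev/nvme0n1 > subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1/device_path
  echo 1            > subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1/enable
  echo 10.0.0.1 > ports/1/addr_traddr
  echo tcp      > ports/1/addr_trtype
  echo 4420     > ports/1/addr_trsvcid
  echo ipv4     > ports/1/addr_adrfam
  ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ports/1/subsystems/
  # per-host auth: allow only host0 and hand nvmet its DH-HMAC-CHAP parameters
  mkdir hosts/nqn.2024-02.io.spdk:host0
  echo 0 > subsystems/nqn.2024-02.io.spdk:cnode0/attr_allow_any_host
  ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 \
        subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/
  echo 'hmac(sha256)'  > hosts/nqn.2024-02.io.spdk:host0/dhchap_hash     # nvmet_auth_set_key
  echo ffdhe2048       > hosts/nqn.2024-02.io.spdk:host0/dhchap_dhgroup
  echo 'DHHC-1:00:...' > hosts/nqn.2024-02.io.spdk:host0/dhchap_key      # host secret
  echo 'DHHC-1:02:...' > hosts/nqn.2024-02.io.spdk:host0/dhchap_ctrl_key # controller secret
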
-- host/auth.sh@49 -- # echo ffdhe2048 00:34:16.427 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGNlMzVmNzVhNDQ5ZWE3ZTQxZjlmYzljYTZjNzMxZmI0MmZlMDFhNWFjNGY3ZThiZxhlrg==: 00:34:16.427 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmVjM2FhODczM2JiZTkwZTc1MDEzNjcyZDg0NThkNDkwNmI0NjdlMmRmMjgzOGEwB4HHQQ==: ]] 00:34:16.427 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmVjM2FhODczM2JiZTkwZTc1MDEzNjcyZDg0NThkNDkwNmI0NjdlMmRmMjgzOGEwB4HHQQ==: 00:34:16.427 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:34:16.427 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:34:16.427 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:34:16.427 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:34:16.427 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:34:16.427 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:16.427 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:34:16.427 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:34:16.427 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:16.427 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:16.427 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:34:16.427 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.427 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.427 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.427 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:16.427 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:16.427 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:16.427 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:16.427 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:16.427 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:16.427 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:16.427 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:16.427 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:16.427 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:16.427 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:16.427 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:16.427 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.427 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.427 nvme0n1 00:34:16.427 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.427 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:16.427 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:16.427 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.427 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.427 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.686 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:16.686 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:16.686 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.686 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.686 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.686 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:34:16.686 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:16.686 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:16.686 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:34:16.686 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:16.686 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:16.686 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:16.686 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:16.686 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODFmM2NlNjQ2MDllMDA0YmFjM2U2ODU3MDBlZjI2NzLJ1G5R: 00:34:16.686 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjE3Mzk3MmI2OWJmYmRiNmNlMzJhNDM0NWVlYzA1YWQzMDcyNzMyZWRkZjZiN2YwNDM2ZTUzZWVlZThjMGZlYWguNT8=: 00:34:16.686 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:16.686 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:16.686 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODFmM2NlNjQ2MDllMDA0YmFjM2U2ODU3MDBlZjI2NzLJ1G5R: 00:34:16.686 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjE3Mzk3MmI2OWJmYmRiNmNlMzJhNDM0NWVlYzA1YWQzMDcyNzMyZWRkZjZiN2YwNDM2ZTUzZWVlZThjMGZlYWguNT8=: ]] 00:34:16.686 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjE3Mzk3MmI2OWJmYmRiNmNlMzJhNDM0NWVlYzA1YWQzMDcyNzMyZWRkZjZiN2YwNDM2ZTUzZWVlZThjMGZlYWguNT8=: 00:34:16.686 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
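
With both sides holding matching secrets, the host half of each authentication round is two RPCs: bdev_nvme_set_options restricts the negotiable digests and DH groups, and bdev_nvme_attach_controller performs the fabric connect naming the keyring entries (the controller key enables bidirectional authentication). The equivalent manual invocation, with the flags exactly as they appear in the trace:

  ./scripts/rpc.py bdev_nvme_set_options \
      --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
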
00:34:16.686 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:16.686 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:16.686 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:16.686 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:16.686 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:16.686 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:16.686 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.686 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.686 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.686 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:16.686 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:16.686 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:16.686 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:16.686 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:16.686 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:16.686 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:16.686 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:16.686 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:16.686 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:16.686 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:16.686 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:16.686 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.686 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.686 nvme0n1 00:34:16.686 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.686 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:16.686 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:16.686 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.686 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.686 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.686 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:16.686 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:16.686 05:50:16 
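
Each iteration then confirms the attach actually produced a controller before tearing it down, so a failed handshake surfaces as a missing nvme0 rather than a hang: bdev_nvme_get_controllers is queried, the reported name compared against nvme0, and the controller detached before the next digest/dhgroup/key combination. A condensed form of the check:

  name=$(./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
  [[ $name == nvme0 ]] || exit 1
  ./scripts/rpc.py bdev_nvme_detach_controller nvme0
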
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.686 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.945 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.945 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:16.945 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:34:16.945 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:16.945 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:16.945 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:16.945 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:16.945 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGNlMzVmNzVhNDQ5ZWE3ZTQxZjlmYzljYTZjNzMxZmI0MmZlMDFhNWFjNGY3ZThiZxhlrg==: 00:34:16.945 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmVjM2FhODczM2JiZTkwZTc1MDEzNjcyZDg0NThkNDkwNmI0NjdlMmRmMjgzOGEwB4HHQQ==: 00:34:16.945 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:16.945 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:16.945 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGNlMzVmNzVhNDQ5ZWE3ZTQxZjlmYzljYTZjNzMxZmI0MmZlMDFhNWFjNGY3ZThiZxhlrg==: 00:34:16.945 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmVjM2FhODczM2JiZTkwZTc1MDEzNjcyZDg0NThkNDkwNmI0NjdlMmRmMjgzOGEwB4HHQQ==: ]] 00:34:16.945 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmVjM2FhODczM2JiZTkwZTc1MDEzNjcyZDg0NThkNDkwNmI0NjdlMmRmMjgzOGEwB4HHQQ==: 00:34:16.945 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:34:16.945 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:16.945 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:16.945 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:16.945 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:16.945 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:16.945 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:16.945 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.945 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.945 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.945 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:16.945 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:16.945 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:16.945 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:16.945 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:16.945 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:16.945 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:16.945 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:16.945 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:16.945 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:16.945 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:16.945 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:16.945 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.945 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.945 nvme0n1 00:34:16.945 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.945 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:16.945 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:16.945 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.945 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.946 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.946 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:16.946 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:16.946 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.946 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.946 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.946 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:16.946 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:34:16.946 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:16.946 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:16.946 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:16.946 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:16.946 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTY0NmVjNjY5OTE4M2JlYjQwYmNjODQ1OWNhZDJhMDjrZMvV: 00:34:16.946 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjJkNmY0MjBlOTQyMDIyMGNkODBjZTBhMTk5YTU2MTfJEidO: 00:34:16.946 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:16.946 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:16.946 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:OTY0NmVjNjY5OTE4M2JlYjQwYmNjODQ1OWNhZDJhMDjrZMvV: 00:34:16.946 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjJkNmY0MjBlOTQyMDIyMGNkODBjZTBhMTk5YTU2MTfJEidO: ]] 00:34:16.946 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjJkNmY0MjBlOTQyMDIyMGNkODBjZTBhMTk5YTU2MTfJEidO: 00:34:16.946 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:34:16.946 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:16.946 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:16.946 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:16.946 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:16.946 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:16.946 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:16.946 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.946 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.946 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.946 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:16.946 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:16.946 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:16.946 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:16.946 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:16.946 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:16.946 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:16.946 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:16.946 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:16.946 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:16.946 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:16.946 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:16.946 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.946 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.205 nvme0n1 00:34:17.205 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.205 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:17.205 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:17.205 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:34:17.205 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.205 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.205 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:17.205 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:17.205 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.205 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.205 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.205 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:17.205 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:34:17.205 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:17.205 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:17.205 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:17.205 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:17.205 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzE4OGU0YjJiYThjZjJlYzFiNzAwMWU0N2IzYmM4ZjRjN2YwMWE3MjNjYTYzYjViWYrIfw==: 00:34:17.205 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDA0MGQ1OWI0ZGIwMGQ5ZTlkMDY3MGJkODQyNzRiZmM//x32: 00:34:17.205 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:17.205 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:17.205 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzE4OGU0YjJiYThjZjJlYzFiNzAwMWU0N2IzYmM4ZjRjN2YwMWE3MjNjYTYzYjViWYrIfw==: 00:34:17.205 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDA0MGQ1OWI0ZGIwMGQ5ZTlkMDY3MGJkODQyNzRiZmM//x32: ]] 00:34:17.205 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDA0MGQ1OWI0ZGIwMGQ5ZTlkMDY3MGJkODQyNzRiZmM//x32: 00:34:17.205 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:34:17.205 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:17.205 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:17.205 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:17.205 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:17.205 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:17.205 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:17.205 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.205 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.205 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.205 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:34:17.205 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:17.205 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:17.205 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:17.205 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:17.205 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:17.205 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:17.205 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:17.205 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:17.205 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:17.205 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:17.205 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:17.205 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.205 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.464 nvme0n1 00:34:17.464 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.464 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:17.464 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:17.464 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.464 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.464 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.464 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:17.464 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:17.464 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.464 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.464 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.464 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:17.464 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:34:17.464 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:17.464 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:17.464 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:17.464 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:17.464 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:OTI2ODA1ZjMyYjRlNjIzODg5NGYyMjk4MDcxNmFkYzAwOTVjM2RkODE1MTQ4ZmQyN2Q1Zjg0NDcwNGQyZmI0Yf2GzI8=: 00:34:17.464 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:17.464 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:17.464 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:17.464 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTI2ODA1ZjMyYjRlNjIzODg5NGYyMjk4MDcxNmFkYzAwOTVjM2RkODE1MTQ4ZmQyN2Q1Zjg0NDcwNGQyZmI0Yf2GzI8=: 00:34:17.464 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:17.464 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:34:17.464 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:17.464 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:17.464 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:17.464 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:17.464 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:17.464 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:17.464 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.464 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.464 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.465 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:17.465 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:17.465 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:17.465 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:17.465 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:17.465 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:17.465 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:17.465 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:17.465 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:17.465 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:17.465 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:17.465 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:17.465 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.465 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.724 nvme0n1 00:34:17.724 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.724 05:50:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:17.724 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.724 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.724 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:17.724 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.724 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:17.724 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:17.724 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.724 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.724 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.724 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:17.724 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:17.724 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:34:17.724 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:17.724 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:17.724 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:17.724 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:17.724 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODFmM2NlNjQ2MDllMDA0YmFjM2U2ODU3MDBlZjI2NzLJ1G5R: 00:34:17.724 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjE3Mzk3MmI2OWJmYmRiNmNlMzJhNDM0NWVlYzA1YWQzMDcyNzMyZWRkZjZiN2YwNDM2ZTUzZWVlZThjMGZlYWguNT8=: 00:34:17.724 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:17.724 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:17.983 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODFmM2NlNjQ2MDllMDA0YmFjM2U2ODU3MDBlZjI2NzLJ1G5R: 00:34:17.983 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjE3Mzk3MmI2OWJmYmRiNmNlMzJhNDM0NWVlYzA1YWQzMDcyNzMyZWRkZjZiN2YwNDM2ZTUzZWVlZThjMGZlYWguNT8=: ]] 00:34:17.983 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjE3Mzk3MmI2OWJmYmRiNmNlMzJhNDM0NWVlYzA1YWQzMDcyNzMyZWRkZjZiN2YwNDM2ZTUzZWVlZThjMGZlYWguNT8=: 00:34:17.983 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:34:17.983 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:17.983 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:17.983 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:17.983 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:17.983 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:17.983 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:17.983 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.983 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.983 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.983 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:17.983 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:17.983 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:17.983 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:17.983 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:17.983 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:17.983 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:17.983 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:17.983 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:17.983 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:17.983 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:17.983 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:17.983 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.983 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.242 nvme0n1 00:34:18.242 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.242 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:18.242 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:18.242 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.242 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.242 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.242 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:18.242 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:18.242 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.242 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.242 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.242 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:18.242 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:34:18.242 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local 
digest dhgroup keyid key ckey 00:34:18.242 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:18.242 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:18.242 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:18.242 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGNlMzVmNzVhNDQ5ZWE3ZTQxZjlmYzljYTZjNzMxZmI0MmZlMDFhNWFjNGY3ZThiZxhlrg==: 00:34:18.242 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmVjM2FhODczM2JiZTkwZTc1MDEzNjcyZDg0NThkNDkwNmI0NjdlMmRmMjgzOGEwB4HHQQ==: 00:34:18.242 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:18.242 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:18.242 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGNlMzVmNzVhNDQ5ZWE3ZTQxZjlmYzljYTZjNzMxZmI0MmZlMDFhNWFjNGY3ZThiZxhlrg==: 00:34:18.242 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmVjM2FhODczM2JiZTkwZTc1MDEzNjcyZDg0NThkNDkwNmI0NjdlMmRmMjgzOGEwB4HHQQ==: ]] 00:34:18.242 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmVjM2FhODczM2JiZTkwZTc1MDEzNjcyZDg0NThkNDkwNmI0NjdlMmRmMjgzOGEwB4HHQQ==: 00:34:18.242 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:34:18.242 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:18.242 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:18.242 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:18.242 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:18.243 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:18.243 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:18.243 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.243 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.243 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.243 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:18.243 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:18.243 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:18.243 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:18.243 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:18.243 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:18.243 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:18.243 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:18.243 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:18.243 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:18.243 
05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:18.243 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:18.243 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.243 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.243 nvme0n1 00:34:18.243 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.501 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:18.502 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:18.502 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.502 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.502 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.502 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:18.502 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:18.502 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.502 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.502 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.502 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:18.502 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:34:18.502 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:18.502 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:18.502 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:18.502 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:18.502 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTY0NmVjNjY5OTE4M2JlYjQwYmNjODQ1OWNhZDJhMDjrZMvV: 00:34:18.502 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjJkNmY0MjBlOTQyMDIyMGNkODBjZTBhMTk5YTU2MTfJEidO: 00:34:18.502 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:18.502 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:18.502 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTY0NmVjNjY5OTE4M2JlYjQwYmNjODQ1OWNhZDJhMDjrZMvV: 00:34:18.502 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjJkNmY0MjBlOTQyMDIyMGNkODBjZTBhMTk5YTU2MTfJEidO: ]] 00:34:18.502 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjJkNmY0MjBlOTQyMDIyMGNkODBjZTBhMTk5YTU2MTfJEidO: 00:34:18.502 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:34:18.502 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:18.502 05:50:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:18.502 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:18.502 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:18.502 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:18.502 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:18.502 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.502 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.502 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.502 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:18.502 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:18.502 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:18.502 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:18.502 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:18.502 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:18.502 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:18.502 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:18.502 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:18.502 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:18.502 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:18.502 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:18.502 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.502 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.761 nvme0n1 00:34:18.761 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.761 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:18.761 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:18.761 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.761 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.761 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.761 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:18.761 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:18.761 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.761 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:34:18.761 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.761 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:18.761 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:34:18.761 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:18.761 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:18.761 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:18.761 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:18.761 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzE4OGU0YjJiYThjZjJlYzFiNzAwMWU0N2IzYmM4ZjRjN2YwMWE3MjNjYTYzYjViWYrIfw==: 00:34:18.761 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDA0MGQ1OWI0ZGIwMGQ5ZTlkMDY3MGJkODQyNzRiZmM//x32: 00:34:18.761 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:18.761 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:18.761 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzE4OGU0YjJiYThjZjJlYzFiNzAwMWU0N2IzYmM4ZjRjN2YwMWE3MjNjYTYzYjViWYrIfw==: 00:34:18.761 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDA0MGQ1OWI0ZGIwMGQ5ZTlkMDY3MGJkODQyNzRiZmM//x32: ]] 00:34:18.761 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDA0MGQ1OWI0ZGIwMGQ5ZTlkMDY3MGJkODQyNzRiZmM//x32: 00:34:18.761 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:34:18.761 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:18.761 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:18.761 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:18.761 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:18.761 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:18.761 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:18.761 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.761 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.761 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.761 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:18.761 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:18.761 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:18.761 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:18.761 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:18.761 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:18.761 05:50:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:18.761 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:18.761 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:18.761 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:18.762 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:18.762 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:18.762 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.762 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.021 nvme0n1 00:34:19.021 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.021 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:19.021 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:19.021 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.021 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.021 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.021 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:19.021 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:19.021 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.021 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.021 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.021 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:19.021 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:34:19.021 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:19.021 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:19.021 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:19.021 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:19.021 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTI2ODA1ZjMyYjRlNjIzODg5NGYyMjk4MDcxNmFkYzAwOTVjM2RkODE1MTQ4ZmQyN2Q1Zjg0NDcwNGQyZmI0Yf2GzI8=: 00:34:19.021 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:19.021 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:19.021 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:19.021 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTI2ODA1ZjMyYjRlNjIzODg5NGYyMjk4MDcxNmFkYzAwOTVjM2RkODE1MTQ4ZmQyN2Q1Zjg0NDcwNGQyZmI0Yf2GzI8=: 00:34:19.021 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:19.021 05:50:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:34:19.021 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:19.021 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:19.021 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:19.021 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:19.021 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:19.021 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:19.021 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.021 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.021 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.021 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:19.021 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:19.021 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:19.021 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:19.021 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:19.021 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:19.021 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:19.021 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:19.021 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:19.021 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:19.021 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:19.021 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:19.021 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.021 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.280 nvme0n1 00:34:19.280 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.280 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:19.280 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:19.280 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.280 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.280 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.280 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:19.280 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:34:19.280 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.280 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.280 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.280 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:19.280 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:19.280 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:34:19.280 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:19.280 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:19.280 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:19.280 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:19.280 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODFmM2NlNjQ2MDllMDA0YmFjM2U2ODU3MDBlZjI2NzLJ1G5R: 00:34:19.280 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjE3Mzk3MmI2OWJmYmRiNmNlMzJhNDM0NWVlYzA1YWQzMDcyNzMyZWRkZjZiN2YwNDM2ZTUzZWVlZThjMGZlYWguNT8=: 00:34:19.280 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:19.280 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:19.539 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODFmM2NlNjQ2MDllMDA0YmFjM2U2ODU3MDBlZjI2NzLJ1G5R: 00:34:19.539 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjE3Mzk3MmI2OWJmYmRiNmNlMzJhNDM0NWVlYzA1YWQzMDcyNzMyZWRkZjZiN2YwNDM2ZTUzZWVlZThjMGZlYWguNT8=: ]] 00:34:19.539 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjE3Mzk3MmI2OWJmYmRiNmNlMzJhNDM0NWVlYzA1YWQzMDcyNzMyZWRkZjZiN2YwNDM2ZTUzZWVlZThjMGZlYWguNT8=: 00:34:19.539 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:34:19.539 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:19.539 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:19.539 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:19.539 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:19.539 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:19.539 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:19.539 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.539 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.539 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.539 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:19.539 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:19.539 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:34:19.539 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:19.539 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:19.539 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:19.539 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:19.539 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:19.539 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:19.539 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:19.539 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:19.539 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:19.539 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.539 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.797 nvme0n1 00:34:19.797 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.797 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:19.797 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:19.797 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.797 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.797 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.055 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:20.055 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:20.055 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.055 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.055 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.055 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:20.055 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:34:20.055 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:20.055 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:20.055 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:20.055 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:20.055 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGNlMzVmNzVhNDQ5ZWE3ZTQxZjlmYzljYTZjNzMxZmI0MmZlMDFhNWFjNGY3ZThiZxhlrg==: 00:34:20.055 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmVjM2FhODczM2JiZTkwZTc1MDEzNjcyZDg0NThkNDkwNmI0NjdlMmRmMjgzOGEwB4HHQQ==: 00:34:20.055 05:50:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:20.055 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:20.055 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGNlMzVmNzVhNDQ5ZWE3ZTQxZjlmYzljYTZjNzMxZmI0MmZlMDFhNWFjNGY3ZThiZxhlrg==: 00:34:20.055 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmVjM2FhODczM2JiZTkwZTc1MDEzNjcyZDg0NThkNDkwNmI0NjdlMmRmMjgzOGEwB4HHQQ==: ]] 00:34:20.055 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmVjM2FhODczM2JiZTkwZTc1MDEzNjcyZDg0NThkNDkwNmI0NjdlMmRmMjgzOGEwB4HHQQ==: 00:34:20.055 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:34:20.055 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:20.055 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:20.055 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:20.055 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:20.055 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:20.055 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:20.055 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.055 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.055 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.055 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:20.055 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:20.055 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:20.055 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:20.055 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:20.055 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:20.055 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:20.055 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:20.055 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:20.055 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:20.055 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:20.055 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:20.055 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.055 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.314 nvme0n1 00:34:20.314 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:34:20.314 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:20.314 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:20.314 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.314 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.314 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.314 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:20.314 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:20.314 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.314 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.314 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.314 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:20.314 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:34:20.314 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:20.314 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:20.314 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:20.314 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:20.314 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTY0NmVjNjY5OTE4M2JlYjQwYmNjODQ1OWNhZDJhMDjrZMvV: 00:34:20.314 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjJkNmY0MjBlOTQyMDIyMGNkODBjZTBhMTk5YTU2MTfJEidO: 00:34:20.314 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:20.314 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:20.314 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTY0NmVjNjY5OTE4M2JlYjQwYmNjODQ1OWNhZDJhMDjrZMvV: 00:34:20.314 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjJkNmY0MjBlOTQyMDIyMGNkODBjZTBhMTk5YTU2MTfJEidO: ]] 00:34:20.314 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjJkNmY0MjBlOTQyMDIyMGNkODBjZTBhMTk5YTU2MTfJEidO: 00:34:20.314 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:34:20.314 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:20.314 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:20.314 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:20.314 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:20.314 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:20.314 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:20.314 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:34:20.314 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.314 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.314 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:20.314 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:20.314 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:20.314 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:20.314 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:20.314 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:20.314 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:20.314 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:20.314 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:20.314 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:20.314 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:20.314 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:20.314 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.314 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.572 nvme0n1 00:34:20.573 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.573 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:20.573 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:20.573 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.573 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.573 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.573 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:20.573 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:20.573 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.573 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.573 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.573 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:20.573 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:34:20.573 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:20.573 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:20.573 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe4096 00:34:20.573 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:20.573 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzE4OGU0YjJiYThjZjJlYzFiNzAwMWU0N2IzYmM4ZjRjN2YwMWE3MjNjYTYzYjViWYrIfw==: 00:34:20.573 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDA0MGQ1OWI0ZGIwMGQ5ZTlkMDY3MGJkODQyNzRiZmM//x32: 00:34:20.573 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:20.573 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:20.573 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzE4OGU0YjJiYThjZjJlYzFiNzAwMWU0N2IzYmM4ZjRjN2YwMWE3MjNjYTYzYjViWYrIfw==: 00:34:20.573 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDA0MGQ1OWI0ZGIwMGQ5ZTlkMDY3MGJkODQyNzRiZmM//x32: ]] 00:34:20.573 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDA0MGQ1OWI0ZGIwMGQ5ZTlkMDY3MGJkODQyNzRiZmM//x32: 00:34:20.573 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:34:20.573 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:20.573 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:20.573 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:20.573 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:20.573 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:20.573 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:20.573 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.573 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.573 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.573 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:20.573 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:20.573 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:20.573 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:20.573 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:20.573 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:20.573 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:20.573 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:20.573 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:20.573 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:20.573 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:20.573 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:20.573 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.573 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.831 nvme0n1 00:34:20.831 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.831 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:20.831 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:20.831 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.831 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.831 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.831 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:20.831 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:20.831 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.831 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.090 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.090 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:21.090 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:34:21.090 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:21.090 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:21.090 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:21.090 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:21.090 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTI2ODA1ZjMyYjRlNjIzODg5NGYyMjk4MDcxNmFkYzAwOTVjM2RkODE1MTQ4ZmQyN2Q1Zjg0NDcwNGQyZmI0Yf2GzI8=: 00:34:21.090 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:21.090 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:21.090 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:21.090 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTI2ODA1ZjMyYjRlNjIzODg5NGYyMjk4MDcxNmFkYzAwOTVjM2RkODE1MTQ4ZmQyN2Q1Zjg0NDcwNGQyZmI0Yf2GzI8=: 00:34:21.090 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:21.090 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:34:21.090 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:21.090 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:21.090 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:21.090 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:21.090 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:21.090 05:50:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:21.090 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.090 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.090 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.090 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:21.090 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:21.090 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:21.090 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:21.090 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:21.090 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:21.090 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:21.090 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:21.091 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:21.091 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:21.091 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:21.091 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:21.091 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.091 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.349 nvme0n1 00:34:21.349 05:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.349 05:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:21.349 05:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:21.349 05:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.349 05:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.349 05:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.349 05:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:21.349 05:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:21.349 05:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.349 05:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.349 05:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.349 05:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:21.349 05:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:21.349 05:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 0 00:34:21.349 05:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:21.349 05:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:21.349 05:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:21.349 05:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:21.349 05:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODFmM2NlNjQ2MDllMDA0YmFjM2U2ODU3MDBlZjI2NzLJ1G5R: 00:34:21.349 05:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjE3Mzk3MmI2OWJmYmRiNmNlMzJhNDM0NWVlYzA1YWQzMDcyNzMyZWRkZjZiN2YwNDM2ZTUzZWVlZThjMGZlYWguNT8=: 00:34:21.349 05:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:21.349 05:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:22.739 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODFmM2NlNjQ2MDllMDA0YmFjM2U2ODU3MDBlZjI2NzLJ1G5R: 00:34:22.739 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjE3Mzk3MmI2OWJmYmRiNmNlMzJhNDM0NWVlYzA1YWQzMDcyNzMyZWRkZjZiN2YwNDM2ZTUzZWVlZThjMGZlYWguNT8=: ]] 00:34:22.739 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjE3Mzk3MmI2OWJmYmRiNmNlMzJhNDM0NWVlYzA1YWQzMDcyNzMyZWRkZjZiN2YwNDM2ZTUzZWVlZThjMGZlYWguNT8=: 00:34:22.739 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:34:22.739 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:22.739 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:22.739 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:22.739 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:22.739 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:22.739 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:22.739 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.739 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.739 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.739 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:22.739 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:22.739 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:22.739 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:22.739 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:22.739 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:22.739 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:22.739 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:22.739 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # 
ip=NVMF_INITIATOR_IP 00:34:22.739 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:22.739 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:22.740 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:22.740 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.740 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.999 nvme0n1 00:34:22.999 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.999 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:22.999 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:22.999 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.999 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.999 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.999 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:22.999 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:22.999 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.999 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.999 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.999 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:22.999 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:34:22.999 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:22.999 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:22.999 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:22.999 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:22.999 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGNlMzVmNzVhNDQ5ZWE3ZTQxZjlmYzljYTZjNzMxZmI0MmZlMDFhNWFjNGY3ZThiZxhlrg==: 00:34:22.999 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmVjM2FhODczM2JiZTkwZTc1MDEzNjcyZDg0NThkNDkwNmI0NjdlMmRmMjgzOGEwB4HHQQ==: 00:34:22.999 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:22.999 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:22.999 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGNlMzVmNzVhNDQ5ZWE3ZTQxZjlmYzljYTZjNzMxZmI0MmZlMDFhNWFjNGY3ZThiZxhlrg==: 00:34:22.999 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmVjM2FhODczM2JiZTkwZTc1MDEzNjcyZDg0NThkNDkwNmI0NjdlMmRmMjgzOGEwB4HHQQ==: ]] 00:34:22.999 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmVjM2FhODczM2JiZTkwZTc1MDEzNjcyZDg0NThkNDkwNmI0NjdlMmRmMjgzOGEwB4HHQQ==: 
00:34:22.999 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:34:22.999 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:22.999 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:22.999 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:22.999 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:22.999 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:22.999 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:22.999 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.999 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.999 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.999 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:22.999 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:22.999 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:22.999 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:22.999 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:22.999 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:22.999 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:22.999 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:22.999 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:23.000 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:23.000 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:23.000 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:23.000 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.000 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.569 nvme0n1 00:34:23.569 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.569 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:23.569 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.569 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.569 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:23.569 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.569 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:23.569 05:50:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:23.569 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.569 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.569 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.569 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:23.569 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:34:23.569 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:23.569 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:23.569 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:23.569 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:23.569 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTY0NmVjNjY5OTE4M2JlYjQwYmNjODQ1OWNhZDJhMDjrZMvV: 00:34:23.569 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjJkNmY0MjBlOTQyMDIyMGNkODBjZTBhMTk5YTU2MTfJEidO: 00:34:23.569 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:23.569 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:23.569 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTY0NmVjNjY5OTE4M2JlYjQwYmNjODQ1OWNhZDJhMDjrZMvV: 00:34:23.569 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjJkNmY0MjBlOTQyMDIyMGNkODBjZTBhMTk5YTU2MTfJEidO: ]] 00:34:23.569 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjJkNmY0MjBlOTQyMDIyMGNkODBjZTBhMTk5YTU2MTfJEidO: 00:34:23.569 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:34:23.569 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:23.569 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:23.569 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:23.569 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:23.569 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:23.569 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:23.569 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.569 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.569 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.569 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:23.569 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:23.569 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:23.569 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:23.569 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:23.569 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:23.569 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:23.569 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:23.569 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:23.569 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:23.569 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:23.569 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:23.569 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.569 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.829 nvme0n1 00:34:23.829 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.829 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:23.829 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:23.829 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.829 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.829 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.829 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:23.829 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:23.829 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.829 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.829 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.829 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:23.829 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:34:23.829 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:23.829 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:23.829 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:23.829 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:23.829 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzE4OGU0YjJiYThjZjJlYzFiNzAwMWU0N2IzYmM4ZjRjN2YwMWE3MjNjYTYzYjViWYrIfw==: 00:34:23.829 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDA0MGQ1OWI0ZGIwMGQ5ZTlkMDY3MGJkODQyNzRiZmM//x32: 00:34:23.829 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:23.829 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:23.829 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:02:MzE4OGU0YjJiYThjZjJlYzFiNzAwMWU0N2IzYmM4ZjRjN2YwMWE3MjNjYTYzYjViWYrIfw==: 00:34:23.829 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDA0MGQ1OWI0ZGIwMGQ5ZTlkMDY3MGJkODQyNzRiZmM//x32: ]] 00:34:23.829 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDA0MGQ1OWI0ZGIwMGQ5ZTlkMDY3MGJkODQyNzRiZmM//x32: 00:34:23.829 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:34:23.829 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:23.829 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:23.829 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:23.829 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:23.829 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:23.829 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:23.829 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.829 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.829 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.829 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:23.829 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:23.829 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:24.088 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:24.088 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:24.088 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:24.088 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:24.088 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:24.088 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:24.088 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:24.088 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:24.088 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:24.088 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.088 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.347 nvme0n1 00:34:24.347 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.348 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:24.348 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:24.348 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.348 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.348 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.348 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:24.348 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:24.348 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.348 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.348 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.348 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:24.348 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:34:24.348 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:24.348 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:24.348 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:24.348 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:24.348 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTI2ODA1ZjMyYjRlNjIzODg5NGYyMjk4MDcxNmFkYzAwOTVjM2RkODE1MTQ4ZmQyN2Q1Zjg0NDcwNGQyZmI0Yf2GzI8=: 00:34:24.348 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:24.348 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:24.348 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:24.348 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTI2ODA1ZjMyYjRlNjIzODg5NGYyMjk4MDcxNmFkYzAwOTVjM2RkODE1MTQ4ZmQyN2Q1Zjg0NDcwNGQyZmI0Yf2GzI8=: 00:34:24.348 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:24.348 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:34:24.348 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:24.348 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:24.348 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:24.348 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:24.348 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:24.348 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:24.348 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.348 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.348 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.348 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:24.348 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:24.348 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates=() 00:34:24.348 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:24.348 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:24.348 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:24.348 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:24.348 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:24.348 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:24.348 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:24.348 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:24.348 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:24.348 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.348 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.917 nvme0n1 00:34:24.917 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.917 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:24.917 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:24.917 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.917 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.917 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.917 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:24.917 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:24.917 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.917 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.917 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.917 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:24.917 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:24.917 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:34:24.917 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:24.917 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:24.917 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:24.917 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:24.917 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODFmM2NlNjQ2MDllMDA0YmFjM2U2ODU3MDBlZjI2NzLJ1G5R: 00:34:24.917 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZjE3Mzk3MmI2OWJmYmRiNmNlMzJhNDM0NWVlYzA1YWQzMDcyNzMyZWRkZjZiN2YwNDM2ZTUzZWVlZThjMGZlYWguNT8=: 00:34:24.917 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:24.917 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:24.917 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODFmM2NlNjQ2MDllMDA0YmFjM2U2ODU3MDBlZjI2NzLJ1G5R: 00:34:24.917 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjE3Mzk3MmI2OWJmYmRiNmNlMzJhNDM0NWVlYzA1YWQzMDcyNzMyZWRkZjZiN2YwNDM2ZTUzZWVlZThjMGZlYWguNT8=: ]] 00:34:24.917 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjE3Mzk3MmI2OWJmYmRiNmNlMzJhNDM0NWVlYzA1YWQzMDcyNzMyZWRkZjZiN2YwNDM2ZTUzZWVlZThjMGZlYWguNT8=: 00:34:24.917 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:34:24.917 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:24.917 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:24.917 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:24.917 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:24.917 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:24.917 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:24.917 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.917 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.917 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.917 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:24.917 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:24.917 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:24.917 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:24.917 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:24.917 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:24.917 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:24.917 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:24.917 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:24.917 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:24.917 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:24.917 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:24.917 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.917 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:34:25.485 nvme0n1 00:34:25.485 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:25.485 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:25.485 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:25.485 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:25.485 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.485 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:25.485 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:25.485 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:25.485 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:25.485 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.485 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:25.485 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:25.486 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:34:25.486 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:25.486 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:25.486 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:25.486 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:25.486 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGNlMzVmNzVhNDQ5ZWE3ZTQxZjlmYzljYTZjNzMxZmI0MmZlMDFhNWFjNGY3ZThiZxhlrg==: 00:34:25.486 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmVjM2FhODczM2JiZTkwZTc1MDEzNjcyZDg0NThkNDkwNmI0NjdlMmRmMjgzOGEwB4HHQQ==: 00:34:25.486 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:25.486 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:25.486 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGNlMzVmNzVhNDQ5ZWE3ZTQxZjlmYzljYTZjNzMxZmI0MmZlMDFhNWFjNGY3ZThiZxhlrg==: 00:34:25.486 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmVjM2FhODczM2JiZTkwZTc1MDEzNjcyZDg0NThkNDkwNmI0NjdlMmRmMjgzOGEwB4HHQQ==: ]] 00:34:25.486 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmVjM2FhODczM2JiZTkwZTc1MDEzNjcyZDg0NThkNDkwNmI0NjdlMmRmMjgzOGEwB4HHQQ==: 00:34:25.486 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:34:25.486 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:25.486 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:25.486 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:25.486 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:25.486 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:34:25.486 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:25.486 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:25.486 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.486 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:25.486 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:25.486 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:25.486 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:25.486 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:25.486 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:25.486 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:25.486 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:25.486 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:25.486 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:25.486 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:25.486 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:25.486 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:25.486 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:25.486 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.053 nvme0n1 00:34:26.053 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.053 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:26.054 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:26.054 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.054 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.054 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.054 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:26.054 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:26.054 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.054 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.054 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.054 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:26.054 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:34:26.054 
05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:26.054 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:26.054 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:26.054 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:26.054 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTY0NmVjNjY5OTE4M2JlYjQwYmNjODQ1OWNhZDJhMDjrZMvV: 00:34:26.054 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjJkNmY0MjBlOTQyMDIyMGNkODBjZTBhMTk5YTU2MTfJEidO: 00:34:26.054 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:26.054 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:26.054 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTY0NmVjNjY5OTE4M2JlYjQwYmNjODQ1OWNhZDJhMDjrZMvV: 00:34:26.054 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjJkNmY0MjBlOTQyMDIyMGNkODBjZTBhMTk5YTU2MTfJEidO: ]] 00:34:26.054 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjJkNmY0MjBlOTQyMDIyMGNkODBjZTBhMTk5YTU2MTfJEidO: 00:34:26.054 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:34:26.054 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:26.054 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:26.054 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:26.054 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:26.054 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:26.054 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:26.054 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.054 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.054 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.054 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:26.054 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:26.054 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:26.054 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:26.054 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:26.054 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:26.054 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:26.054 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:26.054 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:26.054 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:26.054 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:26.313 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:26.313 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.313 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.882 nvme0n1 00:34:26.883 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.883 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:26.883 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.883 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.883 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:26.883 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.883 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:26.883 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:26.883 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.883 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.883 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.883 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:26.883 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:34:26.883 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:26.883 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:26.883 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:26.883 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:26.883 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzE4OGU0YjJiYThjZjJlYzFiNzAwMWU0N2IzYmM4ZjRjN2YwMWE3MjNjYTYzYjViWYrIfw==: 00:34:26.883 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDA0MGQ1OWI0ZGIwMGQ5ZTlkMDY3MGJkODQyNzRiZmM//x32: 00:34:26.883 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:26.883 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:26.883 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzE4OGU0YjJiYThjZjJlYzFiNzAwMWU0N2IzYmM4ZjRjN2YwMWE3MjNjYTYzYjViWYrIfw==: 00:34:26.883 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDA0MGQ1OWI0ZGIwMGQ5ZTlkMDY3MGJkODQyNzRiZmM//x32: ]] 00:34:26.883 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDA0MGQ1OWI0ZGIwMGQ5ZTlkMDY3MGJkODQyNzRiZmM//x32: 00:34:26.883 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:34:26.883 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:26.883 
05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:26.883 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:26.883 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:26.883 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:26.883 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:26.883 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.883 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.883 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.883 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:26.883 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:26.883 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:26.883 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:26.883 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:26.883 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:26.883 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:26.883 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:26.883 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:26.883 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:26.883 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:26.883 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:26.883 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.883 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.452 nvme0n1 00:34:27.452 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.452 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:27.452 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:27.452 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.452 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.452 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.452 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:27.452 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:27.452 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.452 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
00:34:27.452 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:27.452 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:27.452 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4
00:34:27.452 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:27.452 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:34:27.452 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:34:27.452 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:34:27.452 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTI2ODA1ZjMyYjRlNjIzODg5NGYyMjk4MDcxNmFkYzAwOTVjM2RkODE1MTQ4ZmQyN2Q1Zjg0NDcwNGQyZmI0Yf2GzI8=:
00:34:27.452 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:34:27.452 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:34:27.452 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:34:27.452 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTI2ODA1ZjMyYjRlNjIzODg5NGYyMjk4MDcxNmFkYzAwOTVjM2RkODE1MTQ4ZmQyN2Q1Zjg0NDcwNGQyZmI0Yf2GzI8=:
00:34:27.452 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:34:27.452 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4
00:34:27.452 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:27.452 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:34:27.452 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:34:27.452 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:34:27.452 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:27.452 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:34:27.452 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:27.452 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:27.452 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:27.452 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:27.452 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:34:27.452 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:27.452 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:27.452 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:27.452 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:27.452 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:34:27.452 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:27.452 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:34:27.452 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:34:27.452 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:34:27.452 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:34:27.452 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:27.452 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:28.020 nvme0n1
00:34:28.020 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:28.020 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:28.020 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:28.020 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:28.020 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:28.020 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:28.020 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:28.020 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:28.020 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:28.020 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:28.020 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:28.020 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}"
00:34:28.020 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:34:28.020 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:28.020 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0
00:34:28.020 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:28.020 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:34:28.020 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:34:28.020 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:34:28.020 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODFmM2NlNjQ2MDllMDA0YmFjM2U2ODU3MDBlZjI2NzLJ1G5R:
00:34:28.020 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjE3Mzk3MmI2OWJmYmRiNmNlMzJhNDM0NWVlYzA1YWQzMDcyNzMyZWRkZjZiN2YwNDM2ZTUzZWVlZThjMGZlYWguNT8=:
00:34:28.020 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:34:28.020 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:34:28.020 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODFmM2NlNjQ2MDllMDA0YmFjM2U2ODU3MDBlZjI2NzLJ1G5R:
00:34:28.020 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjE3Mzk3MmI2OWJmYmRiNmNlMzJhNDM0NWVlYzA1YWQzMDcyNzMyZWRkZjZiN2YwNDM2ZTUzZWVlZThjMGZlYWguNT8=: ]]
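The nvmet_auth_set_key echoes above (auth.sh@48-51) install the per-host DH-HMAC-CHAP parameters on the kernel target. The log does not show where those echoes land; a hypothetical sketch assuming the Linux nvmet configfs attributes (paths and attribute names are assumptions here, not taken from this log):

  # hypothetical reconstruction of nvmet_auth_set_key's target-side writes
  host_cfs=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 'hmac(sha384)' > "$host_cfs/dhchap_hash"      # digest, cf. auth.sh@48
  echo ffdhe2048      > "$host_cfs/dhchap_dhgroup"   # DH group, cf. auth.sh@49
  echo "$key"         > "$host_cfs/dhchap_key"       # host secret, cf. auth.sh@50
  # auth.sh@51 only writes the controller secret when ckey is non-empty,
  # which is what enables bidirectional authentication for keyids 0-3:
  [[ -n $ckey ]] && echo "$ckey" > "$host_cfs/dhchap_ctrl_key"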
00:34:28.020 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjE3Mzk3MmI2OWJmYmRiNmNlMzJhNDM0NWVlYzA1YWQzMDcyNzMyZWRkZjZiN2YwNDM2ZTUzZWVlZThjMGZlYWguNT8=:
00:34:28.020 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0
00:34:28.020 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:28.020 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:34:28.020 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:34:28.020 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:34:28.020 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:28.020 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:34:28.020 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:28.020 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:28.020 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:28.020 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:28.020 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:34:28.020 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:28.020 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:28.020 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:28.020 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:28.020 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:34:28.020 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:28.020 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:34:28.020 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:34:28.020 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:34:28.020 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:34:28.020 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:28.021 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:28.280 nvme0n1
00:34:28.280 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:28.280 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:28.280 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:28.280 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:28.280 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:28.280 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:28.280 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:28.280 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:28.280 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:28.280 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:28.280 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:28.280 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:28.280 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1
00:34:28.280 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:28.280 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:34:28.280 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:34:28.280 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:34:28.280 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGNlMzVmNzVhNDQ5ZWE3ZTQxZjlmYzljYTZjNzMxZmI0MmZlMDFhNWFjNGY3ZThiZxhlrg==:
00:34:28.280 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmVjM2FhODczM2JiZTkwZTc1MDEzNjcyZDg0NThkNDkwNmI0NjdlMmRmMjgzOGEwB4HHQQ==:
00:34:28.280 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:34:28.280 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:34:28.280 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGNlMzVmNzVhNDQ5ZWE3ZTQxZjlmYzljYTZjNzMxZmI0MmZlMDFhNWFjNGY3ZThiZxhlrg==:
00:34:28.280 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmVjM2FhODczM2JiZTkwZTc1MDEzNjcyZDg0NThkNDkwNmI0NjdlMmRmMjgzOGEwB4HHQQ==: ]]
00:34:28.280 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmVjM2FhODczM2JiZTkwZTc1MDEzNjcyZDg0NThkNDkwNmI0NjdlMmRmMjgzOGEwB4HHQQ==:
00:34:28.280 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1
00:34:28.280 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:28.280 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:34:28.280 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:34:28.280 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:34:28.280 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:28.280 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:34:28.280 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:28.280 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:28.280 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:28.280 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:28.280 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:34:28.280 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:28.280 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:28.280 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:28.280 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:28.280 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:34:28.280 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:28.280 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:34:28.280 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:34:28.280 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:34:28.280 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:34:28.280 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:28.280 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:28.539 nvme0n1
00:34:28.539 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:28.539 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:28.539 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:28.539 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:28.539 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:28.539 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:28.539 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:28.539 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:28.539 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:28.539 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:28.539 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:28.539 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:28.539 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2
00:34:28.539 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:28.539 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:34:28.539 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:34:28.539 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:34:28.539 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTY0NmVjNjY5OTE4M2JlYjQwYmNjODQ1OWNhZDJhMDjrZMvV:
00:34:28.539 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjJkNmY0MjBlOTQyMDIyMGNkODBjZTBhMTk5YTU2MTfJEidO:
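The key=/ckey= strings traced above are NVMe-spec DH-HMAC-CHAP secret representations: "DHHC-1:", a two-digit transform field (00 for a plain secret; 01/02/03 for secrets hashed to SHA-256/384/512 length, to my best understanding), then base64 key material with a checksum, then ":". Secrets of this shape can be produced with nvme-cli; the exact flags below are an assumption, so check nvme-cli's documentation:

  # hypothetical example: generate a 48-byte SHA-384-transformed DHHC-1 secret
  nvme gen-dhchap-key --key-length=48 --hmac=2 --nqn=nqn.2024-02.io.spdk:host0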
00:34:28.539 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:34:28.539 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:34:28.539 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTY0NmVjNjY5OTE4M2JlYjQwYmNjODQ1OWNhZDJhMDjrZMvV:
00:34:28.539 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjJkNmY0MjBlOTQyMDIyMGNkODBjZTBhMTk5YTU2MTfJEidO: ]]
00:34:28.539 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjJkNmY0MjBlOTQyMDIyMGNkODBjZTBhMTk5YTU2MTfJEidO:
00:34:28.539 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2
00:34:28.539 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:28.539 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:34:28.539 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:34:28.539 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:34:28.539 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:28.539 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:34:28.539 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:28.539 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:28.540 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:28.540 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:28.540 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:34:28.540 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:28.540 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:28.540 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:28.540 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:28.540 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:34:28.540 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:28.540 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:34:28.540 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:34:28.540 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:34:28.540 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:34:28.540 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:28.540 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:28.798 nvme0n1
00:34:28.798 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:28.798 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:28.798 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:28.798 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:28.798 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:28.799 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:28.799 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:28.799 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:28.799 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:28.799 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:28.799 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:28.799 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:28.799 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3
00:34:28.799 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:28.799 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:34:28.799 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:34:28.799 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:34:28.799 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzE4OGU0YjJiYThjZjJlYzFiNzAwMWU0N2IzYmM4ZjRjN2YwMWE3MjNjYTYzYjViWYrIfw==:
00:34:28.799 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDA0MGQ1OWI0ZGIwMGQ5ZTlkMDY3MGJkODQyNzRiZmM//x32:
00:34:28.799 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:34:28.799 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:34:28.799 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzE4OGU0YjJiYThjZjJlYzFiNzAwMWU0N2IzYmM4ZjRjN2YwMWE3MjNjYTYzYjViWYrIfw==:
00:34:28.799 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDA0MGQ1OWI0ZGIwMGQ5ZTlkMDY3MGJkODQyNzRiZmM//x32: ]]
00:34:28.799 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDA0MGQ1OWI0ZGIwMGQ5ZTlkMDY3MGJkODQyNzRiZmM//x32:
00:34:28.799 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3
00:34:28.799 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:28.799 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:34:28.799 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:34:28.799 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:34:28.799 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:28.799 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:34:28.799 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:28.799 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:28.799 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:28.799 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:28.799 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:34:28.799 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:28.799 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:28.799 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:28.799 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:28.799 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:34:28.799 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:28.799 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:34:28.799 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:34:28.799 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:34:28.799 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:34:28.799 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:28.799 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:29.058 nvme0n1
00:34:29.058 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:29.058 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:29.058 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:29.058 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:29.058 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:29.058 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:29.058 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:29.058 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:29.058 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:29.058 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:29.058 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:29.058 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:29.058 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4
00:34:29.058 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:29.058 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
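Keyid 4 above is the unidirectional case: auth.sh@46 sets ckey to the empty string, and the attach that follows carries no --dhchap-ctrlr-key. That is the effect of the ${...:+...} expansion traced at auth.sh@58, which a minimal reproduction makes clear (array contents here are made up for illustration):

  # bash :+ expansion: emit the option words only when the entry is non-empty
  ckeys=([3]="DHHC-1:03:example-secret:" [4]="")
  for keyid in 3 4; do
      ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
      echo "keyid=$keyid -> ${ckey[@]:-<no controller key>}"
  done
  # prints: keyid=3 -> --dhchap-ctrlr-key ckey3
  #         keyid=4 -> <no controller key>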
00:34:29.058 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:34:29.058 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:34:29.058 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTI2ODA1ZjMyYjRlNjIzODg5NGYyMjk4MDcxNmFkYzAwOTVjM2RkODE1MTQ4ZmQyN2Q1Zjg0NDcwNGQyZmI0Yf2GzI8=:
00:34:29.058 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:34:29.058 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:34:29.058 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:34:29.058 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTI2ODA1ZjMyYjRlNjIzODg5NGYyMjk4MDcxNmFkYzAwOTVjM2RkODE1MTQ4ZmQyN2Q1Zjg0NDcwNGQyZmI0Yf2GzI8=:
00:34:29.058 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:34:29.058 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4
00:34:29.058 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:29.058 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:34:29.058 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:34:29.058 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:34:29.058 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:29.058 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:34:29.058 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:29.058 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:29.058 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:29.058 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:29.058 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:34:29.058 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:29.058 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:29.058 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:29.058 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:29.058 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:34:29.058 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:29.058 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:34:29.058 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:34:29.058 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:34:29.058 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:34:29.058 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:29.058 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:29.058 nvme0n1
00:34:29.058 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:29.058 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:29.058 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:29.058 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:29.058 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:29.058 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:29.058 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:29.058 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:29.058 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:29.058 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:29.058 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:29.058 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:34:29.058 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:29.058 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0
00:34:29.058 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:29.058 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:34:29.058 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:34:29.058 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:34:29.058 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODFmM2NlNjQ2MDllMDA0YmFjM2U2ODU3MDBlZjI2NzLJ1G5R:
00:34:29.058 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjE3Mzk3MmI2OWJmYmRiNmNlMzJhNDM0NWVlYzA1YWQzMDcyNzMyZWRkZjZiN2YwNDM2ZTUzZWVlZThjMGZlYWguNT8=:
00:34:29.058 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:34:29.058 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:34:29.058 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODFmM2NlNjQ2MDllMDA0YmFjM2U2ODU3MDBlZjI2NzLJ1G5R:
00:34:29.058 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjE3Mzk3MmI2OWJmYmRiNmNlMzJhNDM0NWVlYzA1YWQzMDcyNzMyZWRkZjZiN2YwNDM2ZTUzZWVlZThjMGZlYWguNT8=: ]]
00:34:29.058 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjE3Mzk3MmI2OWJmYmRiNmNlMzJhNDM0NWVlYzA1YWQzMDcyNzMyZWRkZjZiN2YwNDM2ZTUzZWVlZThjMGZlYWguNT8=:
00:34:29.058 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0
00:34:29.058 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:29.058 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:34:29.318 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:34:29.318 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:34:29.318 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:29.318 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:34:29.318 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:29.318 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:29.318 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:29.318 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:29.318 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:34:29.318 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:29.318 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:29.318 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:29.318 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:29.318 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:34:29.318 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:29.318 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:34:29.318 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:34:29.318 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:34:29.318 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:34:29.318 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:29.318 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:29.318 nvme0n1
00:34:29.318 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:29.318 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:29.318 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:29.318 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:29.318 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:29.577 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:29.577 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:29.577 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:29.577 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:29.577 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:29.577 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
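Every connect in this section resolves its target address through the get_main_ns_ip helper traced at nvmf/common.sh@769-783. A sketch of that helper, reconstructed from the trace (TEST_TRANSPORT and NVMF_INITIATOR_IP are environment variables set earlier in the run; this run uses tcp and 10.0.0.1):

  get_main_ns_ip() {
      local ip
      local -A ip_candidates
      ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
      ip_candidates["tcp"]=NVMF_INITIATOR_IP
      [[ -z $TEST_TRANSPORT ]] && return 1
      ip=${ip_candidates[$TEST_TRANSPORT]}   # name of the variable to dereference
      [[ -z ${!ip} ]] && return 1            # indirect expansion; 10.0.0.1 here
      echo "${!ip}"
  }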
00:34:29.577 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:29.577 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1
00:34:29.577 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:29.577 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:34:29.577 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:34:29.577 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:34:29.577 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGNlMzVmNzVhNDQ5ZWE3ZTQxZjlmYzljYTZjNzMxZmI0MmZlMDFhNWFjNGY3ZThiZxhlrg==:
00:34:29.577 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmVjM2FhODczM2JiZTkwZTc1MDEzNjcyZDg0NThkNDkwNmI0NjdlMmRmMjgzOGEwB4HHQQ==:
00:34:29.577 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:34:29.577 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:34:29.577 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGNlMzVmNzVhNDQ5ZWE3ZTQxZjlmYzljYTZjNzMxZmI0MmZlMDFhNWFjNGY3ZThiZxhlrg==:
00:34:29.577 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmVjM2FhODczM2JiZTkwZTc1MDEzNjcyZDg0NThkNDkwNmI0NjdlMmRmMjgzOGEwB4HHQQ==: ]]
00:34:29.577 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmVjM2FhODczM2JiZTkwZTc1MDEzNjcyZDg0NThkNDkwNmI0NjdlMmRmMjgzOGEwB4HHQQ==:
00:34:29.577 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1
00:34:29.577 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:29.577 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:34:29.577 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:34:29.577 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:34:29.577 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:29.577 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:34:29.577 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:29.577 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:29.578 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:29.578 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:29.578 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:34:29.578 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:29.578 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:29.578 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:29.578 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:29.578 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:34:29.578 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:29.578 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:34:29.578 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:34:29.578 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:34:29.578 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:34:29.578 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:29.578 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:29.837 nvme0n1
00:34:29.837 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:29.837 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:29.837 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:29.837 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:29.837 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:29.838 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:29.838 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:29.838 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:29.838 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:29.838 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:29.838 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:29.838 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:29.838 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2
00:34:29.838 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:29.838 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:34:29.838 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:34:29.838 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:34:29.838 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTY0NmVjNjY5OTE4M2JlYjQwYmNjODQ1OWNhZDJhMDjrZMvV:
00:34:29.838 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjJkNmY0MjBlOTQyMDIyMGNkODBjZTBhMTk5YTU2MTfJEidO:
00:34:29.838 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:34:29.838 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:34:29.838 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTY0NmVjNjY5OTE4M2JlYjQwYmNjODQ1OWNhZDJhMDjrZMvV:
00:34:29.838 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjJkNmY0MjBlOTQyMDIyMGNkODBjZTBhMTk5YTU2MTfJEidO: ]]
00:34:29.838 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjJkNmY0MjBlOTQyMDIyMGNkODBjZTBhMTk5YTU2MTfJEidO:
00:34:29.838 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2
00:34:29.838 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:29.838 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:34:29.838 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:34:29.838 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:34:29.838 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:29.838 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:34:29.838 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:29.838 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:29.838 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:29.838 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:29.838 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:34:29.838 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:29.838 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:29.838 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:29.838 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:29.838 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:34:29.838 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:29.838 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:34:29.838 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:34:29.838 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:34:29.838 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:34:29.838 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:29.838 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:29.838 nvme0n1
00:34:29.838 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:29.838 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:29.838 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:29.838 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:29.838 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:29.838 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:29.838 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
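The escaped comparison just traced is deliberate: in bash [[ ]], the right-hand side of == is a glob pattern, so auth.sh escapes every character to force a literal match even if the controller listing ever contained pattern metacharacters. The same check in isolation (rpc_cmd here stands for the test's RPC wrapper):

  # verify exactly one controller named nvme0 exists, then tear it down
  ctrlr=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
  [[ $ctrlr == \n\v\m\e\0 ]]   # equivalent to: [[ $ctrlr == "nvme0" ]]
  rpc_cmd bdev_nvme_detach_controller nvme0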
00:34:29.838 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:29.838 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:29.838 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:29.838 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:29.838 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:29.838 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3
00:34:29.838 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:29.838 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:34:29.838 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:34:29.838 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:34:29.838 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzE4OGU0YjJiYThjZjJlYzFiNzAwMWU0N2IzYmM4ZjRjN2YwMWE3MjNjYTYzYjViWYrIfw==:
00:34:29.838 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDA0MGQ1OWI0ZGIwMGQ5ZTlkMDY3MGJkODQyNzRiZmM//x32:
00:34:29.838 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:34:29.838 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:34:29.838 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzE4OGU0YjJiYThjZjJlYzFiNzAwMWU0N2IzYmM4ZjRjN2YwMWE3MjNjYTYzYjViWYrIfw==:
00:34:29.838 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDA0MGQ1OWI0ZGIwMGQ5ZTlkMDY3MGJkODQyNzRiZmM//x32: ]]
00:34:29.838 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDA0MGQ1OWI0ZGIwMGQ5ZTlkMDY3MGJkODQyNzRiZmM//x32:
00:34:29.838 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3
00:34:29.838 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:29.838 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:34:29.838 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:34:29.838 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:34:29.838 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:29.838 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:34:29.838 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:29.838 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:29.838 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:29.838 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:29.838 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:34:29.838 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:29.838 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:30.097 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:30.097 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:30.097 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:34:30.097 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:30.097 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:34:30.097 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:34:30.097 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:34:30.097 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:34:30.097 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:30.097 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:30.097 nvme0n1
00:34:30.097 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:30.097 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:30.097 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:30.097 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:30.097 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:30.097 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:30.097 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:30.097 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:30.097 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:30.097 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:30.097 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:30.097 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:30.097 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4
00:34:30.097 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:30.097 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:34:30.097 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:34:30.097 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:34:30.097 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTI2ODA1ZjMyYjRlNjIzODg5NGYyMjk4MDcxNmFkYzAwOTVjM2RkODE1MTQ4ZmQyN2Q1Zjg0NDcwNGQyZmI0Yf2GzI8=:
00:34:30.097 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:34:30.097 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:34:30.097 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:34:30.357 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTI2ODA1ZjMyYjRlNjIzODg5NGYyMjk4MDcxNmFkYzAwOTVjM2RkODE1MTQ4ZmQyN2Q1Zjg0NDcwNGQyZmI0Yf2GzI8=:
00:34:30.357 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:34:30.357 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4
00:34:30.357 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:30.357 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:34:30.357 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:34:30.357 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:34:30.357 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:30.357 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:34:30.357 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:30.357 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:30.357 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:30.357 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:30.357 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:34:30.357 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:30.357 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:30.357 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:30.357 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:30.357 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:34:30.357 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:30.357 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:34:30.357 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:34:30.357 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:34:30.357 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:34:30.357 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:30.357 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:30.357 nvme0n1
00:34:30.357 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:30.357 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:30.357 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:30.357 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:30.357 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
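Each pass first narrows the host to a single digest/DH-group pair via bdev_nvme_set_options, so a successful attach can only mean that exact combination was negotiated. The same call issued directly against the host app (the rpc_cmd wrapper resolves to scripts/rpc.py; the socket path below is an assumption):

  # pin host-side DH-HMAC-CHAP negotiation to one digest and one DH group
  ./scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072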
00:34:30.357 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:30.357 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:30.357 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:30.357 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:30.357 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:30.357 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:30.357 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:34:30.357 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:30.357 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0
00:34:30.357 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:30.357 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:34:30.357 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:34:30.357 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:34:30.357 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODFmM2NlNjQ2MDllMDA0YmFjM2U2ODU3MDBlZjI2NzLJ1G5R:
00:34:30.357 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjE3Mzk3MmI2OWJmYmRiNmNlMzJhNDM0NWVlYzA1YWQzMDcyNzMyZWRkZjZiN2YwNDM2ZTUzZWVlZThjMGZlYWguNT8=:
00:34:30.357 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:34:30.357 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:34:30.357 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODFmM2NlNjQ2MDllMDA0YmFjM2U2ODU3MDBlZjI2NzLJ1G5R:
00:34:30.357 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjE3Mzk3MmI2OWJmYmRiNmNlMzJhNDM0NWVlYzA1YWQzMDcyNzMyZWRkZjZiN2YwNDM2ZTUzZWVlZThjMGZlYWguNT8=: ]]
00:34:30.357 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjE3Mzk3MmI2OWJmYmRiNmNlMzJhNDM0NWVlYzA1YWQzMDcyNzMyZWRkZjZiN2YwNDM2ZTUzZWVlZThjMGZlYWguNT8=:
00:34:30.357 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0
00:34:30.357 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:30.357 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:34:30.357 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:34:30.357 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:34:30.357 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:30.357 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:34:30.357 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:30.357 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:30.616 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:30.616 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:30.616 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:34:30.616 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:30.616 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:30.616 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:30.616 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:30.617 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:34:30.617 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:30.617 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:34:30.617 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:34:30.617 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:34:30.617 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:34:30.617 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:30.617 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:30.876 nvme0n1
00:34:30.876 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:30.876 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:30.876 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:30.876 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:30.876 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:30.876 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:30.876 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:30.876 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:30.876 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:30.876 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:30.876 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:30.876 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:30.876 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1
00:34:30.876 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:30.876 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:34:30.876 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:34:30.876 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:34:30.876 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGNlMzVmNzVhNDQ5ZWE3ZTQxZjlmYzljYTZjNzMxZmI0MmZlMDFhNWFjNGY3ZThiZxhlrg==:
00:34:30.876 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmVjM2FhODczM2JiZTkwZTc1MDEzNjcyZDg0NThkNDkwNmI0NjdlMmRmMjgzOGEwB4HHQQ==:
00:34:30.876 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:34:30.876 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:34:30.876 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGNlMzVmNzVhNDQ5ZWE3ZTQxZjlmYzljYTZjNzMxZmI0MmZlMDFhNWFjNGY3ZThiZxhlrg==:
00:34:30.876 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmVjM2FhODczM2JiZTkwZTc1MDEzNjcyZDg0NThkNDkwNmI0NjdlMmRmMjgzOGEwB4HHQQ==: ]]
00:34:30.876 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmVjM2FhODczM2JiZTkwZTc1MDEzNjcyZDg0NThkNDkwNmI0NjdlMmRmMjgzOGEwB4HHQQ==:
00:34:30.876 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1
00:34:30.876 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:30.876 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:34:30.876 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:34:30.876 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:34:30.876 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:30.876 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:34:30.876 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:30.876 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:30.876 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:30.876 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:30.876 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:34:30.876 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:30.876 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:30.876 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:30.876 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:30.876 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:34:30.876 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:30.876 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:34:30.876 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:34:30.876 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:34:30.876 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
05:50:30
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.876 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.136 nvme0n1 00:34:31.136 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.136 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:31.136 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:31.136 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.136 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.136 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.136 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:31.136 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:31.136 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.136 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.136 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.136 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:31.136 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:34:31.136 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:31.136 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:31.136 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:31.136 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:31.136 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTY0NmVjNjY5OTE4M2JlYjQwYmNjODQ1OWNhZDJhMDjrZMvV: 00:34:31.136 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjJkNmY0MjBlOTQyMDIyMGNkODBjZTBhMTk5YTU2MTfJEidO: 00:34:31.136 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:31.136 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:31.136 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTY0NmVjNjY5OTE4M2JlYjQwYmNjODQ1OWNhZDJhMDjrZMvV: 00:34:31.136 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjJkNmY0MjBlOTQyMDIyMGNkODBjZTBhMTk5YTU2MTfJEidO: ]] 00:34:31.136 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjJkNmY0MjBlOTQyMDIyMGNkODBjZTBhMTk5YTU2MTfJEidO: 00:34:31.136 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:34:31.136 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:31.136 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:31.136 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:31.136 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:31.136 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:31.136 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:31.136 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.136 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.136 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.136 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:31.136 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:31.136 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:31.136 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:31.136 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:31.136 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:31.136 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:31.136 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:31.136 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:31.136 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:31.136 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:31.137 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:31.137 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.137 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.396 nvme0n1 00:34:31.396 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.396 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:31.396 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:31.396 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.396 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.396 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.396 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:31.396 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:31.396 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.396 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.396 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.396 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:31.396 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:34:31.396 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:31.396 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:31.396 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:31.396 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:31.396 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzE4OGU0YjJiYThjZjJlYzFiNzAwMWU0N2IzYmM4ZjRjN2YwMWE3MjNjYTYzYjViWYrIfw==: 00:34:31.396 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDA0MGQ1OWI0ZGIwMGQ5ZTlkMDY3MGJkODQyNzRiZmM//x32: 00:34:31.396 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:31.396 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:31.396 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzE4OGU0YjJiYThjZjJlYzFiNzAwMWU0N2IzYmM4ZjRjN2YwMWE3MjNjYTYzYjViWYrIfw==: 00:34:31.396 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDA0MGQ1OWI0ZGIwMGQ5ZTlkMDY3MGJkODQyNzRiZmM//x32: ]] 00:34:31.396 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDA0MGQ1OWI0ZGIwMGQ5ZTlkMDY3MGJkODQyNzRiZmM//x32: 00:34:31.396 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:34:31.396 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:31.396 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:31.396 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:31.396 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:31.396 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:31.396 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:31.396 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.396 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.396 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.396 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:31.396 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:31.396 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:31.396 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:31.396 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:31.396 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:31.396 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:31.396 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:31.396 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:31.396 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:31.396 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:31.396 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:31.396 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.396 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.656 nvme0n1 00:34:31.656 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.656 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:31.656 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:31.656 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.656 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.656 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.656 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:31.656 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:31.656 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.656 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.915 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.915 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:31.915 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:34:31.915 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:31.915 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:31.915 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:31.915 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:31.915 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTI2ODA1ZjMyYjRlNjIzODg5NGYyMjk4MDcxNmFkYzAwOTVjM2RkODE1MTQ4ZmQyN2Q1Zjg0NDcwNGQyZmI0Yf2GzI8=: 00:34:31.915 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:31.915 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:31.915 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:31.915 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTI2ODA1ZjMyYjRlNjIzODg5NGYyMjk4MDcxNmFkYzAwOTVjM2RkODE1MTQ4ZmQyN2Q1Zjg0NDcwNGQyZmI0Yf2GzI8=: 00:34:31.915 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:31.915 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:34:31.915 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:31.915 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:31.915 05:50:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:31.915 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:31.915 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:31.915 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:31.915 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.915 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.915 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.915 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:31.915 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:31.915 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:31.915 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:31.915 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:31.915 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:31.915 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:31.916 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:31.916 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:31.916 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:31.916 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:31.916 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:31.916 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.916 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.175 nvme0n1 00:34:32.175 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.175 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:32.175 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:32.175 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.175 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.175 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.175 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:32.175 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:32.175 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.175 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.175 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.175 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:32.175 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:32.175 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:34:32.175 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:32.175 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:32.175 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:32.175 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:32.175 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODFmM2NlNjQ2MDllMDA0YmFjM2U2ODU3MDBlZjI2NzLJ1G5R: 00:34:32.175 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjE3Mzk3MmI2OWJmYmRiNmNlMzJhNDM0NWVlYzA1YWQzMDcyNzMyZWRkZjZiN2YwNDM2ZTUzZWVlZThjMGZlYWguNT8=: 00:34:32.175 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:32.175 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:32.175 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODFmM2NlNjQ2MDllMDA0YmFjM2U2ODU3MDBlZjI2NzLJ1G5R: 00:34:32.175 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjE3Mzk3MmI2OWJmYmRiNmNlMzJhNDM0NWVlYzA1YWQzMDcyNzMyZWRkZjZiN2YwNDM2ZTUzZWVlZThjMGZlYWguNT8=: ]] 00:34:32.175 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjE3Mzk3MmI2OWJmYmRiNmNlMzJhNDM0NWVlYzA1YWQzMDcyNzMyZWRkZjZiN2YwNDM2ZTUzZWVlZThjMGZlYWguNT8=: 00:34:32.175 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:34:32.175 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:32.175 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:32.175 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:32.175 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:32.175 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:32.175 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:32.175 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.175 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.175 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.175 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:32.175 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:32.175 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:32.175 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:32.175 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:32.175 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:32.175 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:32.175 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:32.175 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:32.175 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:32.175 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:32.175 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:32.176 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.176 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.435 nvme0n1 00:34:32.435 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.435 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:32.435 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:32.435 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.435 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.435 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.435 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:32.435 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:32.435 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.435 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.435 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.435 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:32.435 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:34:32.435 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:32.435 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:32.435 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:32.435 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:32.435 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGNlMzVmNzVhNDQ5ZWE3ZTQxZjlmYzljYTZjNzMxZmI0MmZlMDFhNWFjNGY3ZThiZxhlrg==: 00:34:32.435 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmVjM2FhODczM2JiZTkwZTc1MDEzNjcyZDg0NThkNDkwNmI0NjdlMmRmMjgzOGEwB4HHQQ==: 00:34:32.435 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:32.435 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:32.435 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OGNlMzVmNzVhNDQ5ZWE3ZTQxZjlmYzljYTZjNzMxZmI0MmZlMDFhNWFjNGY3ZThiZxhlrg==: 00:34:32.435 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmVjM2FhODczM2JiZTkwZTc1MDEzNjcyZDg0NThkNDkwNmI0NjdlMmRmMjgzOGEwB4HHQQ==: ]] 00:34:32.435 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmVjM2FhODczM2JiZTkwZTc1MDEzNjcyZDg0NThkNDkwNmI0NjdlMmRmMjgzOGEwB4HHQQ==: 00:34:32.435 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:34:32.435 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:32.435 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:32.435 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:32.435 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:32.435 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:32.435 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:32.695 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.695 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.695 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.695 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:32.695 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:32.695 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:32.695 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:32.695 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:32.695 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:32.695 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:32.695 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:32.695 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:32.695 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:32.695 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:32.695 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:32.695 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.695 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.954 nvme0n1 00:34:32.954 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.954 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:32.954 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:32.954 05:50:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.954 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.954 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.954 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:32.954 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:32.954 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.954 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.954 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.954 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:32.954 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:34:32.954 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:32.954 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:32.954 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:32.954 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:32.954 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTY0NmVjNjY5OTE4M2JlYjQwYmNjODQ1OWNhZDJhMDjrZMvV: 00:34:32.954 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjJkNmY0MjBlOTQyMDIyMGNkODBjZTBhMTk5YTU2MTfJEidO: 00:34:32.954 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:32.954 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:32.954 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTY0NmVjNjY5OTE4M2JlYjQwYmNjODQ1OWNhZDJhMDjrZMvV: 00:34:32.954 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjJkNmY0MjBlOTQyMDIyMGNkODBjZTBhMTk5YTU2MTfJEidO: ]] 00:34:32.954 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjJkNmY0MjBlOTQyMDIyMGNkODBjZTBhMTk5YTU2MTfJEidO: 00:34:32.954 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:34:32.954 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:32.954 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:32.954 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:32.954 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:32.954 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:32.954 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:32.954 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.954 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.954 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.954 05:50:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:32.954 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:32.954 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:32.954 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:32.954 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:32.954 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:32.954 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:32.954 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:32.954 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:32.954 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:32.954 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:32.954 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:32.954 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.954 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.522 nvme0n1 00:34:33.522 05:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.522 05:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:33.522 05:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:33.522 05:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.522 05:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.522 05:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.522 05:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:33.522 05:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:33.522 05:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.522 05:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.522 05:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.522 05:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:33.522 05:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:34:33.522 05:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:33.523 05:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:33.523 05:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:33.523 05:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:33.523 05:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MzE4OGU0YjJiYThjZjJlYzFiNzAwMWU0N2IzYmM4ZjRjN2YwMWE3MjNjYTYzYjViWYrIfw==: 00:34:33.523 05:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDA0MGQ1OWI0ZGIwMGQ5ZTlkMDY3MGJkODQyNzRiZmM//x32: 00:34:33.523 05:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:33.523 05:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:33.523 05:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzE4OGU0YjJiYThjZjJlYzFiNzAwMWU0N2IzYmM4ZjRjN2YwMWE3MjNjYTYzYjViWYrIfw==: 00:34:33.523 05:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDA0MGQ1OWI0ZGIwMGQ5ZTlkMDY3MGJkODQyNzRiZmM//x32: ]] 00:34:33.523 05:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDA0MGQ1OWI0ZGIwMGQ5ZTlkMDY3MGJkODQyNzRiZmM//x32: 00:34:33.523 05:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:34:33.523 05:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:33.523 05:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:33.523 05:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:33.523 05:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:33.523 05:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:33.523 05:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:33.523 05:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.523 05:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.523 05:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.523 05:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:33.523 05:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:33.523 05:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:33.523 05:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:33.523 05:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:33.523 05:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:33.523 05:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:33.523 05:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:33.523 05:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:33.523 05:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:33.523 05:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:33.523 05:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:33.523 05:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.523 
05:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.782 nvme0n1 00:34:33.782 05:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.782 05:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:33.782 05:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:33.782 05:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.782 05:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.782 05:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.782 05:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:33.782 05:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:33.782 05:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.782 05:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.782 05:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.782 05:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:33.782 05:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:34:33.782 05:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:33.782 05:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:33.782 05:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:33.782 05:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:33.782 05:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTI2ODA1ZjMyYjRlNjIzODg5NGYyMjk4MDcxNmFkYzAwOTVjM2RkODE1MTQ4ZmQyN2Q1Zjg0NDcwNGQyZmI0Yf2GzI8=: 00:34:33.782 05:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:33.782 05:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:33.782 05:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:33.782 05:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTI2ODA1ZjMyYjRlNjIzODg5NGYyMjk4MDcxNmFkYzAwOTVjM2RkODE1MTQ4ZmQyN2Q1Zjg0NDcwNGQyZmI0Yf2GzI8=: 00:34:33.782 05:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:33.782 05:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:34:33.782 05:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:33.782 05:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:33.782 05:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:33.782 05:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:33.782 05:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:33.782 05:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:33.782 05:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.782 05:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.041 05:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:34.041 05:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:34.041 05:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:34.041 05:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:34.041 05:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:34.041 05:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:34.041 05:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:34.041 05:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:34.041 05:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:34.041 05:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:34.041 05:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:34.041 05:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:34.041 05:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:34.041 05:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.041 05:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.300 nvme0n1 00:34:34.300 05:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:34.300 05:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:34.300 05:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:34.300 05:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.300 05:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.300 05:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:34.300 05:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:34.300 05:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:34.300 05:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.300 05:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.300 05:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:34.300 05:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:34.300 05:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:34.300 05:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:34:34.300 05:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:34.300 05:50:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:34.300 05:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:34.300 05:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:34.300 05:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODFmM2NlNjQ2MDllMDA0YmFjM2U2ODU3MDBlZjI2NzLJ1G5R: 00:34:34.300 05:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjE3Mzk3MmI2OWJmYmRiNmNlMzJhNDM0NWVlYzA1YWQzMDcyNzMyZWRkZjZiN2YwNDM2ZTUzZWVlZThjMGZlYWguNT8=: 00:34:34.300 05:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:34.300 05:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:34.300 05:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODFmM2NlNjQ2MDllMDA0YmFjM2U2ODU3MDBlZjI2NzLJ1G5R: 00:34:34.300 05:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjE3Mzk3MmI2OWJmYmRiNmNlMzJhNDM0NWVlYzA1YWQzMDcyNzMyZWRkZjZiN2YwNDM2ZTUzZWVlZThjMGZlYWguNT8=: ]] 00:34:34.300 05:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjE3Mzk3MmI2OWJmYmRiNmNlMzJhNDM0NWVlYzA1YWQzMDcyNzMyZWRkZjZiN2YwNDM2ZTUzZWVlZThjMGZlYWguNT8=: 00:34:34.300 05:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:34:34.300 05:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:34.300 05:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:34.300 05:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:34.300 05:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:34.300 05:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:34.300 05:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:34.300 05:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.300 05:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.300 05:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:34.300 05:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:34.300 05:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:34.300 05:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:34.300 05:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:34.300 05:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:34.300 05:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:34.300 05:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:34.300 05:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:34.300 05:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:34.300 05:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:34.300 05:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:34.301 05:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:34.301 05:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.301 05:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.868 nvme0n1 00:34:34.868 05:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:34.868 05:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:34.868 05:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:34.868 05:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.868 05:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.868 05:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:34.868 05:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:34.868 05:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:34.868 05:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.868 05:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.127 05:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.127 05:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:35.127 05:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:34:35.127 05:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:35.127 05:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:35.127 05:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:35.127 05:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:35.127 05:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGNlMzVmNzVhNDQ5ZWE3ZTQxZjlmYzljYTZjNzMxZmI0MmZlMDFhNWFjNGY3ZThiZxhlrg==: 00:34:35.127 05:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmVjM2FhODczM2JiZTkwZTc1MDEzNjcyZDg0NThkNDkwNmI0NjdlMmRmMjgzOGEwB4HHQQ==: 00:34:35.127 05:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:35.127 05:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:35.127 05:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGNlMzVmNzVhNDQ5ZWE3ZTQxZjlmYzljYTZjNzMxZmI0MmZlMDFhNWFjNGY3ZThiZxhlrg==: 00:34:35.127 05:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmVjM2FhODczM2JiZTkwZTc1MDEzNjcyZDg0NThkNDkwNmI0NjdlMmRmMjgzOGEwB4HHQQ==: ]] 00:34:35.127 05:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmVjM2FhODczM2JiZTkwZTc1MDEzNjcyZDg0NThkNDkwNmI0NjdlMmRmMjgzOGEwB4HHQQ==: 00:34:35.127 05:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:34:35.127 05:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:35.127 05:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:35.127 05:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:35.127 05:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:35.127 05:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:35.127 05:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:35.127 05:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.127 05:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.127 05:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.127 05:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:35.127 05:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:35.127 05:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:35.127 05:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:35.127 05:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:35.127 05:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:35.127 05:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:35.127 05:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:35.127 05:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:35.127 05:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:35.127 05:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:35.127 05:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:35.128 05:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.128 05:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.696 nvme0n1 00:34:35.696 05:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.696 05:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:35.696 05:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.696 05:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:35.696 05:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.696 05:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.696 05:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:35.696 05:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:35.696 05:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:34:35.696 05:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.696 05:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.696 05:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:35.696 05:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:34:35.696 05:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:35.696 05:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:35.696 05:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:35.696 05:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:35.696 05:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTY0NmVjNjY5OTE4M2JlYjQwYmNjODQ1OWNhZDJhMDjrZMvV: 00:34:35.696 05:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjJkNmY0MjBlOTQyMDIyMGNkODBjZTBhMTk5YTU2MTfJEidO: 00:34:35.696 05:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:35.696 05:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:35.696 05:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTY0NmVjNjY5OTE4M2JlYjQwYmNjODQ1OWNhZDJhMDjrZMvV: 00:34:35.696 05:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjJkNmY0MjBlOTQyMDIyMGNkODBjZTBhMTk5YTU2MTfJEidO: ]] 00:34:35.696 05:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjJkNmY0MjBlOTQyMDIyMGNkODBjZTBhMTk5YTU2MTfJEidO: 00:34:35.696 05:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:34:35.696 05:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:35.696 05:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:35.696 05:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:35.696 05:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:35.696 05:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:35.696 05:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:35.696 05:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.696 05:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.696 05:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.696 05:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:35.696 05:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:35.696 05:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:35.696 05:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:35.696 05:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:35.696 05:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:35.696 
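The passes above all share one shape: host/auth.sh@100-104 iterate over every digest, DH group, and key index, and for each combination program the kernel target (nvmet_auth_set_key) and then the SPDK initiator (connect_authenticate, auth.sh@55-65), which sets the digest/dhgroup policy, attaches with the key pair, verifies that exactly one controller named nvme0 came up, and detaches. A minimal reconstruction of that control flow from the xtrace follows; the keys/ckeys arrays and the rpc_cmd and get_main_ns_ip helpers are assumed from context, not quoted from the script itself.

# Sketch of the loop structure traced at host/auth.sh@100-104.
connect_authenticate() {                       # auth.sh@55-65
    local digest=$1 dhgroup=$2 keyid=$3
    # only pass a controller key when one exists for this key id (auth.sh@58)
    local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a "$(get_main_ns_ip)" -s 4420 -q nqn.2024-02.io.spdk:host0 \
        -n nqn.2024-02.io.spdk:cnode0 --dhchap-key "key${keyid}" "${ckey[@]}"
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
    rpc_cmd bdev_nvme_detach_controller nvme0
}

for digest in "${digests[@]}"; do              # auth.sh@100
    for dhgroup in "${dhgroups[@]}"; do        # auth.sh@101
        for keyid in "${!keys[@]}"; do         # auth.sh@102
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # kernel target side
            connect_authenticate "$digest" "$dhgroup" "$keyid"  # SPDK initiator side
        done
    done
done

Attaching and detaching inside the innermost loop is what produces the repeating nvme0n1 / bdev_nvme_get_controllers / bdev_nvme_detach_controller blocks throughout this section.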
05:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:35.696 05:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:35.696 05:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:35.696 05:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:35.696 05:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:35.696 05:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:35.696 05:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.697 05:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.264 nvme0n1 00:34:36.264 05:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:36.264 05:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:36.264 05:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:36.264 05:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:36.264 05:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.264 05:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:36.264 05:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:36.264 05:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:36.264 05:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:36.264 05:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.264 05:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:36.264 05:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:36.264 05:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:34:36.264 05:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:36.264 05:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:36.264 05:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:36.264 05:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:36.264 05:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzE4OGU0YjJiYThjZjJlYzFiNzAwMWU0N2IzYmM4ZjRjN2YwMWE3MjNjYTYzYjViWYrIfw==: 00:34:36.264 05:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDA0MGQ1OWI0ZGIwMGQ5ZTlkMDY3MGJkODQyNzRiZmM//x32: 00:34:36.264 05:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:36.264 05:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:36.264 05:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzE4OGU0YjJiYThjZjJlYzFiNzAwMWU0N2IzYmM4ZjRjN2YwMWE3MjNjYTYzYjViWYrIfw==: 00:34:36.264 05:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:NDA0MGQ1OWI0ZGIwMGQ5ZTlkMDY3MGJkODQyNzRiZmM//x32: ]] 00:34:36.264 05:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDA0MGQ1OWI0ZGIwMGQ5ZTlkMDY3MGJkODQyNzRiZmM//x32: 00:34:36.264 05:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:34:36.265 05:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:36.265 05:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:36.265 05:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:36.265 05:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:36.265 05:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:36.265 05:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:36.265 05:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:36.265 05:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.265 05:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:36.265 05:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:36.265 05:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:36.265 05:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:36.265 05:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:36.265 05:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:36.265 05:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:36.265 05:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:36.265 05:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:36.265 05:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:36.265 05:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:36.265 05:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:36.265 05:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:36.265 05:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:36.265 05:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.833 nvme0n1 00:34:36.833 05:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:36.833 05:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:36.833 05:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:36.833 05:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:36.833 05:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.833 05:50:36 
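All of the secrets echoed in this section use the DHHC-1 transport representation that nvme-cli's gen-dhchap-key emits (this reading comes from that tool's format, not from the log itself): DHHC-1:<t>:<base64>:, where <t> is 00 for an untransformed secret or 01/02/03 for one pre-transformed with SHA-256/384/512, and the base64 payload is the key bytes followed by a 4-byte CRC-32 of them. A quick inspection of one key taken from the trace above:

# Decode a DHHC-1 secret from this run and report its raw key length
# (payload minus the trailing 4-byte CRC-32).
DHHC_KEY='DHHC-1:01:OTY0NmVjNjY5OTE4M2JlYjQwYmNjODQ1OWNhZDJhMDjrZMvV:'
echo "transform: $(printf '%s' "$DHHC_KEY" | cut -d: -f2)"         # 01 -> SHA-256
bytes=$(printf '%s' "$DHHC_KEY" | cut -d: -f3 | base64 -d | wc -c)
echo "key length: $((bytes - 4)) bytes"                            # 32 here

Payload lengths in this run vary accordingly: the 48-character payloads decode to 32-byte secrets, the 72-character ones to 48-byte secrets, and key 4's 92-character payload to a 64-byte secret.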
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:36.833 05:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:36.833 05:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:36.833 05:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:36.833 05:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.833 05:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:36.833 05:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:36.833 05:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:34:36.833 05:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:36.833 05:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:36.833 05:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:36.833 05:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:36.833 05:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTI2ODA1ZjMyYjRlNjIzODg5NGYyMjk4MDcxNmFkYzAwOTVjM2RkODE1MTQ4ZmQyN2Q1Zjg0NDcwNGQyZmI0Yf2GzI8=: 00:34:36.833 05:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:36.833 05:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:36.833 05:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:36.833 05:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTI2ODA1ZjMyYjRlNjIzODg5NGYyMjk4MDcxNmFkYzAwOTVjM2RkODE1MTQ4ZmQyN2Q1Zjg0NDcwNGQyZmI0Yf2GzI8=: 00:34:36.833 05:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:36.833 05:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:34:36.833 05:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:36.833 05:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:36.833 05:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:36.833 05:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:36.833 05:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:36.833 05:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:36.833 05:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:36.833 05:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.833 05:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:36.833 05:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:36.833 05:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:36.833 05:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:36.833 05:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:36.833 05:50:36 
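nvmet_auth_set_key (traced at auth.sh@42-51) is the target-side half of each pass. xtrace does not display redirections, so the bare echo lines at @48-51 show only what is written, not where. Against the in-kernel nvmet target they would plausibly land in the per-host configfs attributes; the sketch below spells out that assumption, using the standard /sys/kernel/config/nvmet layout and elided key strings:

# Hypothetical reconstruction: program DH-HMAC-CHAP material for one host NQN.
host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
echo 'hmac(sha384)'   > "$host/dhchap_hash"      # digest     (auth.sh@48)
echo ffdhe8192        > "$host/dhchap_dhgroup"   # DH group   (auth.sh@49)
echo 'DHHC-1:00:...:' > "$host/dhchap_key"       # host key   (auth.sh@50)
echo 'DHHC-1:02:...:' > "$host/dhchap_ctrl_key"  # ctrl key   (auth.sh@51)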
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:36.833 05:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:36.833 05:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:36.833 05:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:36.833 05:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:36.833 05:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:36.833 05:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:36.833 05:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:36.833 05:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:36.833 05:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.401 nvme0n1 00:34:37.401 05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:37.661 05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:37.661 05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:37.661 05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.661 05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.661 05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:37.661 05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:37.661 05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:37.661 05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.661 05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.661 05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:37.661 05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:34:37.661 05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:37.661 05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:37.661 05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:34:37.661 05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:37.661 05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:37.661 05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:37.661 05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:37.661 05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODFmM2NlNjQ2MDllMDA0YmFjM2U2ODU3MDBlZjI2NzLJ1G5R: 00:34:37.661 05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZjE3Mzk3MmI2OWJmYmRiNmNlMzJhNDM0NWVlYzA1YWQzMDcyNzMyZWRkZjZiN2YwNDM2ZTUzZWVlZThjMGZlYWguNT8=: 00:34:37.661 05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:37.661 05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:37.661 05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODFmM2NlNjQ2MDllMDA0YmFjM2U2ODU3MDBlZjI2NzLJ1G5R: 00:34:37.661 05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjE3Mzk3MmI2OWJmYmRiNmNlMzJhNDM0NWVlYzA1YWQzMDcyNzMyZWRkZjZiN2YwNDM2ZTUzZWVlZThjMGZlYWguNT8=: ]] 00:34:37.661 05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjE3Mzk3MmI2OWJmYmRiNmNlMzJhNDM0NWVlYzA1YWQzMDcyNzMyZWRkZjZiN2YwNDM2ZTUzZWVlZThjMGZlYWguNT8=: 00:34:37.661 05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:34:37.661 05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:37.661 05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:37.661 05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:37.661 05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:37.661 05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:37.661 05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:37.661 05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.661 05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.661 05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:37.661 05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:37.661 05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:37.661 05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:37.661 05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:37.661 05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:37.661 05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:37.661 05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:37.661 05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:37.661 05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:37.661 05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:37.661 05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:37.661 05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:37.661 05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.661 05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:34:37.661 nvme0n1 00:34:37.661 05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:37.661 05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:37.661 05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:37.661 05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.661 05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.661 05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:37.920 05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:37.920 05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:37.920 05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.920 05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.920 05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:37.920 05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:37.920 05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:34:37.921 05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:37.921 05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:37.921 05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:37.921 05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:37.921 05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGNlMzVmNzVhNDQ5ZWE3ZTQxZjlmYzljYTZjNzMxZmI0MmZlMDFhNWFjNGY3ZThiZxhlrg==: 00:34:37.921 05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmVjM2FhODczM2JiZTkwZTc1MDEzNjcyZDg0NThkNDkwNmI0NjdlMmRmMjgzOGEwB4HHQQ==: 00:34:37.921 05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:37.921 05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:37.921 05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGNlMzVmNzVhNDQ5ZWE3ZTQxZjlmYzljYTZjNzMxZmI0MmZlMDFhNWFjNGY3ZThiZxhlrg==: 00:34:37.921 05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmVjM2FhODczM2JiZTkwZTc1MDEzNjcyZDg0NThkNDkwNmI0NjdlMmRmMjgzOGEwB4HHQQ==: ]] 00:34:37.921 05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmVjM2FhODczM2JiZTkwZTc1MDEzNjcyZDg0NThkNDkwNmI0NjdlMmRmMjgzOGEwB4HHQQ==: 00:34:37.921 05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:34:37.921 05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:37.921 05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:37.921 05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:37.921 05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:37.921 05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:34:37.921 05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:37.921 05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.921 05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.921 05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:37.921 05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:37.921 05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:37.921 05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:37.921 05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:37.921 05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:37.921 05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:37.921 05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:37.921 05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:37.921 05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:37.921 05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:37.921 05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:37.921 05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:37.921 05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.921 05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.921 nvme0n1 00:34:37.921 05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:37.921 05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:37.921 05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:37.921 05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.921 05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.921 05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:37.921 05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:37.921 05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:37.921 05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.921 05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.921 05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:37.921 05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:37.921 05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:34:37.921 
05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:37.921 05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:37.921 05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:37.921 05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:37.921 05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTY0NmVjNjY5OTE4M2JlYjQwYmNjODQ1OWNhZDJhMDjrZMvV: 00:34:37.921 05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjJkNmY0MjBlOTQyMDIyMGNkODBjZTBhMTk5YTU2MTfJEidO: 00:34:37.921 05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:37.921 05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:37.921 05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTY0NmVjNjY5OTE4M2JlYjQwYmNjODQ1OWNhZDJhMDjrZMvV: 00:34:37.921 05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjJkNmY0MjBlOTQyMDIyMGNkODBjZTBhMTk5YTU2MTfJEidO: ]] 00:34:37.921 05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjJkNmY0MjBlOTQyMDIyMGNkODBjZTBhMTk5YTU2MTfJEidO: 00:34:37.921 05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:34:37.921 05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:37.921 05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:37.921 05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:37.921 05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:37.921 05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:37.921 05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:37.921 05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.921 05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.181 05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.181 05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:38.181 05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:38.181 05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:38.181 05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:38.181 05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:38.181 05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:38.181 05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:38.181 05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:38.181 05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:38.181 05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:38.181 05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:38.181 05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:38.181 05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.181 05:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.181 nvme0n1 00:34:38.181 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.181 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:38.181 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:38.181 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.181 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.181 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.181 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:38.181 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:38.181 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.181 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.181 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.181 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:38.181 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:34:38.181 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:38.181 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:38.181 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:38.181 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:38.181 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzE4OGU0YjJiYThjZjJlYzFiNzAwMWU0N2IzYmM4ZjRjN2YwMWE3MjNjYTYzYjViWYrIfw==: 00:34:38.181 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDA0MGQ1OWI0ZGIwMGQ5ZTlkMDY3MGJkODQyNzRiZmM//x32: 00:34:38.181 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:38.181 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:38.181 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzE4OGU0YjJiYThjZjJlYzFiNzAwMWU0N2IzYmM4ZjRjN2YwMWE3MjNjYTYzYjViWYrIfw==: 00:34:38.181 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDA0MGQ1OWI0ZGIwMGQ5ZTlkMDY3MGJkODQyNzRiZmM//x32: ]] 00:34:38.181 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDA0MGQ1OWI0ZGIwMGQ5ZTlkMDY3MGJkODQyNzRiZmM//x32: 00:34:38.181 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:34:38.181 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:38.181 
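The get_main_ns_ip helper traced at nvmf/common.sh@769-783 is a small transport-to-variable lookup: it maps rdma to NVMF_FIRST_TARGET_IP and tcp to NVMF_INITIATOR_IP, then dereferences whichever name applies, which is why every pass here ends in echo 10.0.0.1. A plausible reconstruction using bash indirect expansion; TEST_TRANSPORT and the NVMF_* variables are assumed to come from the harness environment:

get_main_ns_ip() {
    local ip
    local -A ip_candidates
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP
    [[ -z $TEST_TRANSPORT ]] && return 1                     # no transport chosen
    [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1   # unknown transport
    ip=${ip_candidates[$TEST_TRANSPORT]}                     # e.g. NVMF_INITIATOR_IP
    [[ -z ${!ip} ]] && return 1                              # indirect: that variable's value
    echo "${!ip}"                                            # 10.0.0.1 in this run
}

The trace lines ip=NVMF_INITIATOR_IP, [[ -z 10.0.0.1 ]] and echo 10.0.0.1 above are the last three statements with the indirection already expanded.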
05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:38.181 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:38.181 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:38.181 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:38.181 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:38.181 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.181 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.181 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.181 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:38.181 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:38.181 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:38.181 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:38.181 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:38.181 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:38.181 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:38.181 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:38.181 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:38.181 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:38.181 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:38.181 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:38.181 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.181 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.441 nvme0n1 00:34:38.441 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.441 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:38.441 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:38.441 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.441 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.441 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.441 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:38.441 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:38.441 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.441 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:34:38.441 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.441 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:38.441 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:34:38.441 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:38.441 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:38.441 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:38.441 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:38.441 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTI2ODA1ZjMyYjRlNjIzODg5NGYyMjk4MDcxNmFkYzAwOTVjM2RkODE1MTQ4ZmQyN2Q1Zjg0NDcwNGQyZmI0Yf2GzI8=: 00:34:38.441 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:38.441 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:38.441 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:38.441 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTI2ODA1ZjMyYjRlNjIzODg5NGYyMjk4MDcxNmFkYzAwOTVjM2RkODE1MTQ4ZmQyN2Q1Zjg0NDcwNGQyZmI0Yf2GzI8=: 00:34:38.441 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:38.441 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:34:38.441 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:38.441 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:38.441 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:38.441 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:38.441 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:38.441 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:38.441 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.441 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.441 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.441 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:38.441 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:38.441 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:38.441 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:38.441 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:38.441 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:38.441 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:38.441 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:38.441 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:38.441 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:38.441 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:38.441 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:38.441 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.441 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.700 nvme0n1 00:34:38.700 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.700 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:38.700 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:38.700 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.700 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.700 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.700 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:38.700 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:38.700 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.700 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.700 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.700 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:38.700 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:38.700 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:34:38.700 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:38.700 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:38.700 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:38.700 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:38.700 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODFmM2NlNjQ2MDllMDA0YmFjM2U2ODU3MDBlZjI2NzLJ1G5R: 00:34:38.700 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjE3Mzk3MmI2OWJmYmRiNmNlMzJhNDM0NWVlYzA1YWQzMDcyNzMyZWRkZjZiN2YwNDM2ZTUzZWVlZThjMGZlYWguNT8=: 00:34:38.700 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:38.700 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:38.700 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODFmM2NlNjQ2MDllMDA0YmFjM2U2ODU3MDBlZjI2NzLJ1G5R: 00:34:38.700 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjE3Mzk3MmI2OWJmYmRiNmNlMzJhNDM0NWVlYzA1YWQzMDcyNzMyZWRkZjZiN2YwNDM2ZTUzZWVlZThjMGZlYWguNT8=: ]] 00:34:38.700 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:ZjE3Mzk3MmI2OWJmYmRiNmNlMzJhNDM0NWVlYzA1YWQzMDcyNzMyZWRkZjZiN2YwNDM2ZTUzZWVlZThjMGZlYWguNT8=: 00:34:38.700 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:34:38.700 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:38.700 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:38.700 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:38.700 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:38.700 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:38.700 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:38.700 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.700 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.700 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.700 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:38.700 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:38.700 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:38.700 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:38.700 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:38.700 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:38.700 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:38.700 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:38.700 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:38.700 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:38.700 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:38.701 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:38.701 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.701 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.960 nvme0n1 00:34:38.960 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.960 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:38.960 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:38.960 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.960 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.960 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.960 
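rpc_cmd in this trace is the harness wrapper around SPDK's JSON-RPC client, so each pass boils down to a handful of calls that can be reproduced by hand with scripts/rpc.py from an SPDK checkout. The path, and the assumption that key0/ckey0 are key names registered with SPDK's keyring earlier in the run, are mine; every flag below appears verbatim in the log:

# Pin the initiator to one digest/DH-group pair, then attach with DH-HMAC-CHAP.
./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0
./scripts/rpc.py bdev_nvme_get_controllers     # expect one entry named nvme0
./scripts/rpc.py bdev_nvme_detach_controller nvme0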
05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:38.960 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:38.960 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.960 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.960 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.960 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:38.960 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:34:38.960 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:38.960 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:38.960 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:38.960 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:38.960 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGNlMzVmNzVhNDQ5ZWE3ZTQxZjlmYzljYTZjNzMxZmI0MmZlMDFhNWFjNGY3ZThiZxhlrg==: 00:34:38.960 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmVjM2FhODczM2JiZTkwZTc1MDEzNjcyZDg0NThkNDkwNmI0NjdlMmRmMjgzOGEwB4HHQQ==: 00:34:38.960 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:38.960 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:38.960 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGNlMzVmNzVhNDQ5ZWE3ZTQxZjlmYzljYTZjNzMxZmI0MmZlMDFhNWFjNGY3ZThiZxhlrg==: 00:34:38.960 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmVjM2FhODczM2JiZTkwZTc1MDEzNjcyZDg0NThkNDkwNmI0NjdlMmRmMjgzOGEwB4HHQQ==: ]] 00:34:38.960 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmVjM2FhODczM2JiZTkwZTc1MDEzNjcyZDg0NThkNDkwNmI0NjdlMmRmMjgzOGEwB4HHQQ==: 00:34:38.960 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:34:38.960 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:38.960 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:38.960 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:38.960 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:38.960 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:38.960 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:38.960 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.960 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.960 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.960 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:38.960 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:38.960 05:50:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:38.960 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:38.960 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:38.960 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:38.960 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:38.960 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:38.960 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:38.960 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:38.960 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:38.960 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:38.960 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.960 05:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.219 nvme0n1 00:34:39.219 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.219 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:39.219 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:39.219 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.219 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.219 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.219 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:39.219 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:39.219 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.219 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.219 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.219 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:39.219 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:34:39.219 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:39.219 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:39.219 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:39.219 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:39.219 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTY0NmVjNjY5OTE4M2JlYjQwYmNjODQ1OWNhZDJhMDjrZMvV: 00:34:39.219 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjJkNmY0MjBlOTQyMDIyMGNkODBjZTBhMTk5YTU2MTfJEidO: 00:34:39.219 05:50:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:39.219 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:39.219 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTY0NmVjNjY5OTE4M2JlYjQwYmNjODQ1OWNhZDJhMDjrZMvV: 00:34:39.219 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjJkNmY0MjBlOTQyMDIyMGNkODBjZTBhMTk5YTU2MTfJEidO: ]] 00:34:39.219 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjJkNmY0MjBlOTQyMDIyMGNkODBjZTBhMTk5YTU2MTfJEidO: 00:34:39.219 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:34:39.219 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:39.219 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:39.219 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:39.219 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:39.219 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:39.219 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:39.219 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.219 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.219 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.220 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:39.220 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:39.220 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:39.220 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:39.220 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:39.220 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:39.220 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:39.220 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:39.220 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:39.220 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:39.220 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:39.220 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:39.220 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.220 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.479 nvme0n1 00:34:39.479 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.479 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:34:39.479 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:39.479 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.479 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.479 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.479 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:39.479 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:39.479 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.479 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.479 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.479 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:39.479 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:34:39.479 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:39.479 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:39.479 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:39.479 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:39.479 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzE4OGU0YjJiYThjZjJlYzFiNzAwMWU0N2IzYmM4ZjRjN2YwMWE3MjNjYTYzYjViWYrIfw==: 00:34:39.479 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDA0MGQ1OWI0ZGIwMGQ5ZTlkMDY3MGJkODQyNzRiZmM//x32: 00:34:39.479 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:39.479 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:39.479 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzE4OGU0YjJiYThjZjJlYzFiNzAwMWU0N2IzYmM4ZjRjN2YwMWE3MjNjYTYzYjViWYrIfw==: 00:34:39.479 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDA0MGQ1OWI0ZGIwMGQ5ZTlkMDY3MGJkODQyNzRiZmM//x32: ]] 00:34:39.479 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDA0MGQ1OWI0ZGIwMGQ5ZTlkMDY3MGJkODQyNzRiZmM//x32: 00:34:39.479 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:34:39.479 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:39.479 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:39.479 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:39.479 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:39.479 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:39.479 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:39.479 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.479 05:50:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.479 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.479 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:39.479 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:39.479 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:39.479 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:39.479 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:39.479 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:39.479 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:39.479 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:39.479 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:39.479 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:39.479 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:39.479 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:39.479 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.479 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.739 nvme0n1 00:34:39.739 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.739 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:39.739 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:39.739 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.739 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.739 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.739 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:39.739 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:39.739 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.739 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.739 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.739 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:39.739 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:34:39.739 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:39.739 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:39.739 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:39.739 
05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:39.739 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTI2ODA1ZjMyYjRlNjIzODg5NGYyMjk4MDcxNmFkYzAwOTVjM2RkODE1MTQ4ZmQyN2Q1Zjg0NDcwNGQyZmI0Yf2GzI8=: 00:34:39.739 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:39.739 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:39.739 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:39.739 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTI2ODA1ZjMyYjRlNjIzODg5NGYyMjk4MDcxNmFkYzAwOTVjM2RkODE1MTQ4ZmQyN2Q1Zjg0NDcwNGQyZmI0Yf2GzI8=: 00:34:39.739 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:39.739 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:34:39.739 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:39.739 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:39.739 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:39.739 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:39.739 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:39.739 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:39.739 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.739 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.739 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.739 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:39.739 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:39.739 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:39.739 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:39.739 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:39.739 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:39.739 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:39.739 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:39.739 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:39.740 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:39.740 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:39.740 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:39.740 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.740 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
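The records above close the sha512/ffdhe3072 pass: for each keyid the host restricts the allowed digest and DH group, attaches with the matching key pair, checks that the controller actually appeared, and detaches again. Condensed into plain shell, the per-iteration host side looks like this (a sketch of what the trace shows; rpc_cmd is the suite's RPC wrapper, and the names, flags, and address are taken verbatim from the records above):

    # one authenticated connect/verify/disconnect cycle (host side, keyid=1 shown)
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
    # DH-HMAC-CHAP succeeded only if the controller shows up under its bdev name
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc_cmd bdev_nvme_detach_controller nvme0
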
00:34:40.003 nvme0n1 00:34:40.003 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.003 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:40.003 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:40.003 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.003 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.003 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.003 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:40.003 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:40.003 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.003 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.003 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.003 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:40.003 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:40.003 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:34:40.003 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:40.003 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:40.003 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:40.003 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:40.003 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODFmM2NlNjQ2MDllMDA0YmFjM2U2ODU3MDBlZjI2NzLJ1G5R: 00:34:40.003 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjE3Mzk3MmI2OWJmYmRiNmNlMzJhNDM0NWVlYzA1YWQzMDcyNzMyZWRkZjZiN2YwNDM2ZTUzZWVlZThjMGZlYWguNT8=: 00:34:40.003 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:40.003 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:40.003 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODFmM2NlNjQ2MDllMDA0YmFjM2U2ODU3MDBlZjI2NzLJ1G5R: 00:34:40.003 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjE3Mzk3MmI2OWJmYmRiNmNlMzJhNDM0NWVlYzA1YWQzMDcyNzMyZWRkZjZiN2YwNDM2ZTUzZWVlZThjMGZlYWguNT8=: ]] 00:34:40.003 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjE3Mzk3MmI2OWJmYmRiNmNlMzJhNDM0NWVlYzA1YWQzMDcyNzMyZWRkZjZiN2YwNDM2ZTUzZWVlZThjMGZlYWguNT8=: 00:34:40.003 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:34:40.003 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:40.003 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:40.003 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:40.003 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:40.003 05:50:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:40.003 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:40.003 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.003 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.003 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.003 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:40.003 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:40.003 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:40.003 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:40.003 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:40.003 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:40.003 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:40.003 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:40.003 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:40.003 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:40.003 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:40.003 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:40.003 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.003 05:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.263 nvme0n1 00:34:40.263 05:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.263 05:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:40.263 05:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:40.263 05:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.263 05:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.263 05:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.263 05:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:40.263 05:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:40.263 05:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.263 05:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.263 05:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.263 05:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:40.263 05:50:40 
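Before each host-side attach, nvmet_auth_set_key programs the kernel target with the same material: host/auth.sh@48-51 echo the digest as 'hmac(sha512)', the DH group, the key, and, when one is set, the bidirectional ckey. The redirection targets are not visible in this trace; a plausible sketch, assuming the echoes land in the standard Linux nvmet configfs host attributes (the paths below are an assumption, not taken from the log):

    # assumed destination of the @48-51 echoes; the $hostnqn entry is created during setup
    h=/sys/kernel/config/nvmet/hosts/$hostnqn
    echo 'hmac(sha512)' > "$h/dhchap_hash"      # digest for DH-HMAC-CHAP
    echo ffdhe4096      > "$h/dhchap_dhgroup"   # DH group under test
    echo "$key"         > "$h/dhchap_key"       # host secret (DHHC-1:...)
    [[ -n $ckey ]] && echo "$ckey" > "$h/dhchap_ctrl_key"   # controller secret, if bidirectional
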
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:34:40.263 05:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:40.263 05:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:40.263 05:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:40.263 05:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:40.263 05:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGNlMzVmNzVhNDQ5ZWE3ZTQxZjlmYzljYTZjNzMxZmI0MmZlMDFhNWFjNGY3ZThiZxhlrg==: 00:34:40.263 05:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmVjM2FhODczM2JiZTkwZTc1MDEzNjcyZDg0NThkNDkwNmI0NjdlMmRmMjgzOGEwB4HHQQ==: 00:34:40.263 05:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:40.263 05:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:40.263 05:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGNlMzVmNzVhNDQ5ZWE3ZTQxZjlmYzljYTZjNzMxZmI0MmZlMDFhNWFjNGY3ZThiZxhlrg==: 00:34:40.263 05:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmVjM2FhODczM2JiZTkwZTc1MDEzNjcyZDg0NThkNDkwNmI0NjdlMmRmMjgzOGEwB4HHQQ==: ]] 00:34:40.263 05:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmVjM2FhODczM2JiZTkwZTc1MDEzNjcyZDg0NThkNDkwNmI0NjdlMmRmMjgzOGEwB4HHQQ==: 00:34:40.263 05:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:34:40.263 05:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:40.263 05:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:40.263 05:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:40.263 05:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:40.263 05:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:40.263 05:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:40.263 05:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.263 05:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.263 05:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.263 05:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:40.263 05:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:40.263 05:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:40.263 05:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:40.263 05:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:40.263 05:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:40.263 05:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:40.263 05:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:40.263 05:50:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:40.263 05:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:40.263 05:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:40.263 05:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:40.263 05:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.263 05:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.523 nvme0n1 00:34:40.523 05:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.523 05:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:40.523 05:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:40.523 05:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.523 05:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.523 05:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.523 05:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:40.523 05:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:40.523 05:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.523 05:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.523 05:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.523 05:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:40.523 05:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:34:40.523 05:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:40.523 05:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:40.523 05:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:40.523 05:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:40.523 05:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTY0NmVjNjY5OTE4M2JlYjQwYmNjODQ1OWNhZDJhMDjrZMvV: 00:34:40.523 05:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjJkNmY0MjBlOTQyMDIyMGNkODBjZTBhMTk5YTU2MTfJEidO: 00:34:40.523 05:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:40.523 05:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:40.523 05:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTY0NmVjNjY5OTE4M2JlYjQwYmNjODQ1OWNhZDJhMDjrZMvV: 00:34:40.523 05:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjJkNmY0MjBlOTQyMDIyMGNkODBjZTBhMTk5YTU2MTfJEidO: ]] 00:34:40.523 05:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjJkNmY0MjBlOTQyMDIyMGNkODBjZTBhMTk5YTU2MTfJEidO: 00:34:40.523 05:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:34:40.523 05:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:40.523 05:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:40.523 05:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:40.523 05:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:40.523 05:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:40.523 05:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:40.523 05:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.523 05:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.783 05:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.783 05:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:40.783 05:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:40.783 05:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:40.783 05:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:40.783 05:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:40.783 05:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:40.783 05:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:40.783 05:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:40.783 05:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:40.783 05:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:40.783 05:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:40.783 05:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:40.783 05:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.783 05:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.783 nvme0n1 00:34:40.783 05:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.783 05:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:40.783 05:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:40.783 05:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.783 05:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.043 05:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.043 05:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:41.043 05:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:34:41.043 05:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.043 05:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.043 05:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.043 05:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:41.043 05:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:34:41.043 05:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:41.043 05:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:41.043 05:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:41.043 05:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:41.043 05:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzE4OGU0YjJiYThjZjJlYzFiNzAwMWU0N2IzYmM4ZjRjN2YwMWE3MjNjYTYzYjViWYrIfw==: 00:34:41.043 05:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDA0MGQ1OWI0ZGIwMGQ5ZTlkMDY3MGJkODQyNzRiZmM//x32: 00:34:41.043 05:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:41.043 05:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:41.043 05:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzE4OGU0YjJiYThjZjJlYzFiNzAwMWU0N2IzYmM4ZjRjN2YwMWE3MjNjYTYzYjViWYrIfw==: 00:34:41.043 05:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDA0MGQ1OWI0ZGIwMGQ5ZTlkMDY3MGJkODQyNzRiZmM//x32: ]] 00:34:41.043 05:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDA0MGQ1OWI0ZGIwMGQ5ZTlkMDY3MGJkODQyNzRiZmM//x32: 00:34:41.043 05:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:34:41.043 05:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:41.043 05:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:41.043 05:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:41.043 05:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:41.043 05:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:41.043 05:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:41.043 05:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.043 05:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.043 05:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.043 05:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:41.043 05:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:41.043 05:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:41.043 05:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:41.043 05:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:41.043 05:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:41.043 05:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:41.043 05:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:41.043 05:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:41.043 05:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:41.043 05:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:41.043 05:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:41.043 05:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.043 05:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.303 nvme0n1 00:34:41.303 05:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.303 05:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:41.303 05:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:41.303 05:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.303 05:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.303 05:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.303 05:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:41.303 05:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:41.303 05:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.303 05:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.303 05:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.303 05:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:41.303 05:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:34:41.303 05:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:41.303 05:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:41.303 05:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:41.303 05:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:41.303 05:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTI2ODA1ZjMyYjRlNjIzODg5NGYyMjk4MDcxNmFkYzAwOTVjM2RkODE1MTQ4ZmQyN2Q1Zjg0NDcwNGQyZmI0Yf2GzI8=: 00:34:41.303 05:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:41.303 05:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:41.303 05:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:41.303 05:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:OTI2ODA1ZjMyYjRlNjIzODg5NGYyMjk4MDcxNmFkYzAwOTVjM2RkODE1MTQ4ZmQyN2Q1Zjg0NDcwNGQyZmI0Yf2GzI8=: 00:34:41.303 05:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:41.303 05:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:34:41.303 05:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:41.303 05:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:41.303 05:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:41.303 05:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:41.303 05:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:41.303 05:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:41.303 05:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.303 05:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.303 05:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.303 05:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:41.303 05:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:41.303 05:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:41.303 05:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:41.303 05:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:41.303 05:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:41.303 05:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:41.303 05:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:41.303 05:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:41.303 05:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:41.303 05:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:41.303 05:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:41.303 05:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.303 05:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.563 nvme0n1 00:34:41.563 05:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.563 05:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:41.563 05:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:41.563 05:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.563 05:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.563 05:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.563 05:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:41.563 05:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:41.563 05:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.563 05:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.563 05:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.563 05:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:41.563 05:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:41.563 05:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:34:41.563 05:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:41.563 05:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:41.563 05:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:41.563 05:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:41.563 05:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODFmM2NlNjQ2MDllMDA0YmFjM2U2ODU3MDBlZjI2NzLJ1G5R: 00:34:41.563 05:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjE3Mzk3MmI2OWJmYmRiNmNlMzJhNDM0NWVlYzA1YWQzMDcyNzMyZWRkZjZiN2YwNDM2ZTUzZWVlZThjMGZlYWguNT8=: 00:34:41.563 05:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:41.563 05:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:41.563 05:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODFmM2NlNjQ2MDllMDA0YmFjM2U2ODU3MDBlZjI2NzLJ1G5R: 00:34:41.563 05:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjE3Mzk3MmI2OWJmYmRiNmNlMzJhNDM0NWVlYzA1YWQzMDcyNzMyZWRkZjZiN2YwNDM2ZTUzZWVlZThjMGZlYWguNT8=: ]] 00:34:41.563 05:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjE3Mzk3MmI2OWJmYmRiNmNlMzJhNDM0NWVlYzA1YWQzMDcyNzMyZWRkZjZiN2YwNDM2ZTUzZWVlZThjMGZlYWguNT8=: 00:34:41.563 05:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:34:41.563 05:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:41.563 05:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:41.563 05:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:41.563 05:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:41.563 05:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:41.563 05:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:41.563 05:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.563 05:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.563 05:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.563 05:50:41 
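The recurring nvmf/common.sh@769-783 block is get_main_ns_ip, which resolves the initiator address the attach calls use: it maps each transport to the name of an environment variable, dereferences the one matching the transport under test, and prints the result (10.0.0.1 throughout this run). A reconstruction from the traced statements; the variable carrying "tcp" and the failure branches are assumptions, since only the success path appears in the log:

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=()
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP   # env var names, not values
        ip_candidates["tcp"]=NVMF_INITIATOR_IP
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}         # @776: ip=NVMF_INITIATOR_IP
        [[ -z ${!ip} ]] && return 1                  # @778 expands to [[ -z 10.0.0.1 ]]
        echo "${!ip}"                                # @783: echo 10.0.0.1
    }
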
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:41.563 05:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:41.563 05:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:41.563 05:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:41.563 05:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:41.563 05:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:41.563 05:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:41.563 05:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:41.563 05:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:41.563 05:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:41.563 05:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:41.563 05:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:41.563 05:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.563 05:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.131 nvme0n1 00:34:42.131 05:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.131 05:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:42.131 05:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:42.131 05:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.131 05:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.131 05:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.131 05:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:42.131 05:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:42.131 05:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.131 05:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.131 05:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.131 05:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:42.131 05:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:34:42.131 05:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:42.131 05:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:42.131 05:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:42.131 05:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:42.131 05:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:OGNlMzVmNzVhNDQ5ZWE3ZTQxZjlmYzljYTZjNzMxZmI0MmZlMDFhNWFjNGY3ZThiZxhlrg==: 00:34:42.131 05:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmVjM2FhODczM2JiZTkwZTc1MDEzNjcyZDg0NThkNDkwNmI0NjdlMmRmMjgzOGEwB4HHQQ==: 00:34:42.131 05:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:42.131 05:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:42.131 05:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGNlMzVmNzVhNDQ5ZWE3ZTQxZjlmYzljYTZjNzMxZmI0MmZlMDFhNWFjNGY3ZThiZxhlrg==: 00:34:42.131 05:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmVjM2FhODczM2JiZTkwZTc1MDEzNjcyZDg0NThkNDkwNmI0NjdlMmRmMjgzOGEwB4HHQQ==: ]] 00:34:42.131 05:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmVjM2FhODczM2JiZTkwZTc1MDEzNjcyZDg0NThkNDkwNmI0NjdlMmRmMjgzOGEwB4HHQQ==: 00:34:42.131 05:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:34:42.131 05:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:42.131 05:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:42.131 05:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:42.131 05:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:42.131 05:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:42.131 05:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:42.131 05:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.131 05:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.131 05:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.131 05:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:42.131 05:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:42.131 05:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:42.131 05:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:42.131 05:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:42.131 05:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:42.131 05:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:42.131 05:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:42.131 05:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:42.131 05:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:42.131 05:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:42.131 05:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:42.131 05:50:41 
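host/auth.sh@58 makes the controller key optional through the ${var:+...} expansion: when ckeys[keyid] is empty (as for keyid 4 above, where ckey=''), the array expands to nothing and the attach call drops --dhchap-ctrlr-key, so that key is exercised with unidirectional authentication only. The idiom in isolation:

    ckeys=("" "some-secret")        # per-keyid controller secrets; "" disables bidirectional auth
    keyid=0
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "${#ckey[@]}"              # 0 -> the flag pair is omitted entirely
    keyid=1
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "${ckey[@]}"               # --dhchap-ctrlr-key ckey1
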
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.131 05:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.390 nvme0n1 00:34:42.390 05:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.390 05:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:42.390 05:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:42.390 05:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.390 05:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.390 05:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.390 05:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:42.390 05:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:42.390 05:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.390 05:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.390 05:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.390 05:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:42.390 05:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:34:42.390 05:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:42.390 05:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:42.390 05:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:42.390 05:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:42.390 05:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTY0NmVjNjY5OTE4M2JlYjQwYmNjODQ1OWNhZDJhMDjrZMvV: 00:34:42.390 05:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjJkNmY0MjBlOTQyMDIyMGNkODBjZTBhMTk5YTU2MTfJEidO: 00:34:42.390 05:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:42.390 05:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:42.390 05:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTY0NmVjNjY5OTE4M2JlYjQwYmNjODQ1OWNhZDJhMDjrZMvV: 00:34:42.390 05:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjJkNmY0MjBlOTQyMDIyMGNkODBjZTBhMTk5YTU2MTfJEidO: ]] 00:34:42.390 05:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjJkNmY0MjBlOTQyMDIyMGNkODBjZTBhMTk5YTU2MTfJEidO: 00:34:42.390 05:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:34:42.390 05:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:42.390 05:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:42.390 05:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:42.390 05:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:42.390 05:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:42.390 05:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:42.390 05:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.390 05:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.390 05:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.390 05:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:42.390 05:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:42.390 05:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:42.390 05:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:42.390 05:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:42.390 05:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:42.390 05:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:42.390 05:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:42.390 05:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:42.390 05:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:42.390 05:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:42.390 05:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:42.390 05:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.390 05:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.959 nvme0n1 00:34:42.959 05:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.959 05:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:42.959 05:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:42.959 05:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.959 05:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.959 05:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.959 05:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:42.959 05:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:42.959 05:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.959 05:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.959 05:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.959 05:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:42.959 05:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
00:34:42.959 nvme0n1
00:34:42.959 05:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:42.959 05:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:42.959 05:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:42.959 05:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:42.959 05:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:42.959 05:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:42.959 05:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:42.959 05:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:42.959 05:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:42.959 05:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:42.959 05:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:42.959 05:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:42.959 05:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3
00:34:42.959 05:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:42.959 05:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:34:42.959 05:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:34:42.959 05:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:34:42.959 05:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzE4OGU0YjJiYThjZjJlYzFiNzAwMWU0N2IzYmM4ZjRjN2YwMWE3MjNjYTYzYjViWYrIfw==:
00:34:42.959 05:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDA0MGQ1OWI0ZGIwMGQ5ZTlkMDY3MGJkODQyNzRiZmM//x32:
00:34:42.959 05:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:34:42.959 05:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:34:42.959 05:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzE4OGU0YjJiYThjZjJlYzFiNzAwMWU0N2IzYmM4ZjRjN2YwMWE3MjNjYTYzYjViWYrIfw==:
00:34:42.959 05:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDA0MGQ1OWI0ZGIwMGQ5ZTlkMDY3MGJkODQyNzRiZmM//x32: ]]
00:34:42.959 05:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDA0MGQ1OWI0ZGIwMGQ5ZTlkMDY3MGJkODQyNzRiZmM//x32:
00:34:42.959 05:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3
00:34:42.959 05:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:42.959 05:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:34:42.959 05:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:34:42.959 05:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:34:42.959 05:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:42.959 05:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:34:42.959 05:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:42.959 05:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:42.959 05:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:42.959 05:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:42.959 05:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:34:42.959 05:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:42.959 05:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:42.959 05:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:42.959 05:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:42.959 05:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:34:42.959 05:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:42.959 05:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:34:42.959 05:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:34:42.959 05:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:34:42.959 05:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:34:42.959 05:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:42.959 05:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:43.527 nvme0n1
00:34:43.528 05:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:43.528 05:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:43.528 05:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:43.528 05:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:43.528 05:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:43.528 05:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:43.528 05:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:43.528 05:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:43.528 05:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:43.528 05:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:43.528 05:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:43.528 05:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:43.528 05:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4
00:34:43.528 05:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:43.528 05:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:34:43.528 05:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:34:43.528 05:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:34:43.528 05:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTI2ODA1ZjMyYjRlNjIzODg5NGYyMjk4MDcxNmFkYzAwOTVjM2RkODE1MTQ4ZmQyN2Q1Zjg0NDcwNGQyZmI0Yf2GzI8=:
00:34:43.528 05:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:34:43.528 05:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:34:43.528 05:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:34:43.528 05:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTI2ODA1ZjMyYjRlNjIzODg5NGYyMjk4MDcxNmFkYzAwOTVjM2RkODE1MTQ4ZmQyN2Q1Zjg0NDcwNGQyZmI0Yf2GzI8=:
00:34:43.528 05:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:34:43.528 05:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4
00:34:43.528 05:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:43.528 05:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:34:43.528 05:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:34:43.528 05:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:34:43.528 05:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:43.528 05:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:34:43.528 05:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:43.528 05:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:43.528 05:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:43.528 05:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:43.528 05:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:34:43.528 05:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:43.528 05:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:43.528 05:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:43.528 05:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:43.528 05:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:34:43.528 05:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:43.528 05:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:34:43.528 05:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:34:43.528 05:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:34:43.528 05:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:34:43.528 05:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:43.528 05:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:43.787 nvme0n1
00:34:43.787 05:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:43.787 05:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:43.787 05:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:43.787 05:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:43.787 05:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:43.787 05:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:43.787 05:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:43.787 05:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:43.787 05:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:43.787 05:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:43.787 05:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
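On the target side, nvmet_auth_set_key is the helper behind the echo lines above: each iteration pushes the digest, DH group, and DHHC-1 secrets for the host entry. This log does not show where those echoes land; assuming the usual Linux kernel nvmet configfs layout, the effect is roughly:

  # a sketch of the target-side key setup; the configfs paths are an assumption,
  # not shown anywhere in this log
  host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 'hmac(sha512)' > "$host/dhchap_hash"
  echo ffdhe6144 > "$host/dhchap_dhgroup"
  echo "DHHC-1:01:..." > "$host/dhchap_key"        # host secret (elided here)
  echo "DHHC-1:01:..." > "$host/dhchap_ctrlr_key"  # controller secret (elided here)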
00:34:43.788 05:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:34:43.788 05:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:43.788 05:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0
00:34:43.788 05:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:43.788 05:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:34:43.788 05:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:34:43.788 05:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:34:43.788 05:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODFmM2NlNjQ2MDllMDA0YmFjM2U2ODU3MDBlZjI2NzLJ1G5R:
00:34:43.788 05:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjE3Mzk3MmI2OWJmYmRiNmNlMzJhNDM0NWVlYzA1YWQzMDcyNzMyZWRkZjZiN2YwNDM2ZTUzZWVlZThjMGZlYWguNT8=:
00:34:43.788 05:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:34:43.788 05:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:34:43.788 05:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODFmM2NlNjQ2MDllMDA0YmFjM2U2ODU3MDBlZjI2NzLJ1G5R:
00:34:43.788 05:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjE3Mzk3MmI2OWJmYmRiNmNlMzJhNDM0NWVlYzA1YWQzMDcyNzMyZWRkZjZiN2YwNDM2ZTUzZWVlZThjMGZlYWguNT8=: ]]
00:34:43.788 05:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjE3Mzk3MmI2OWJmYmRiNmNlMzJhNDM0NWVlYzA1YWQzMDcyNzMyZWRkZjZiN2YwNDM2ZTUzZWVlZThjMGZlYWguNT8=:
00:34:43.788 05:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0
00:34:43.788 05:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:43.788 05:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:34:43.788 05:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:34:43.788 05:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:34:43.788 05:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:43.788 05:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:34:43.788 05:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:43.788 05:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:43.788 05:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:43.788 05:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:43.788 05:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:34:43.788 05:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:43.788 05:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:43.788 05:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:43.788 05:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:43.788 05:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:34:43.788 05:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:43.788 05:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:34:43.788 05:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:34:43.788 05:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:34:43.788 05:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:34:43.788 05:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:43.788 05:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:44.357 nvme0n1
00:34:44.357 05:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:44.357 05:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:44.357 05:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:44.357 05:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:44.357 05:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:44.357 05:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:44.357 05:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:44.357 05:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:44.357 05:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:44.357 05:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:44.357 05:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:44.357 05:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:44.357 05:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1
00:34:44.357 05:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:44.357 05:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:34:44.357 05:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:34:44.357 05:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:34:44.357 05:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGNlMzVmNzVhNDQ5ZWE3ZTQxZjlmYzljYTZjNzMxZmI0MmZlMDFhNWFjNGY3ZThiZxhlrg==:
00:34:44.357 05:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmVjM2FhODczM2JiZTkwZTc1MDEzNjcyZDg0NThkNDkwNmI0NjdlMmRmMjgzOGEwB4HHQQ==:
00:34:44.357 05:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:34:44.357 05:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:34:44.357 05:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGNlMzVmNzVhNDQ5ZWE3ZTQxZjlmYzljYTZjNzMxZmI0MmZlMDFhNWFjNGY3ZThiZxhlrg==:
00:34:44.357 05:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmVjM2FhODczM2JiZTkwZTc1MDEzNjcyZDg0NThkNDkwNmI0NjdlMmRmMjgzOGEwB4HHQQ==: ]]
00:34:44.357 05:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmVjM2FhODczM2JiZTkwZTc1MDEzNjcyZDg0NThkNDkwNmI0NjdlMmRmMjgzOGEwB4HHQQ==:
00:34:44.357 05:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1
00:34:44.357 05:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:44.357 05:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:34:44.357 05:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:34:44.357 05:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:34:44.357 05:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:44.617 05:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:34:44.617 05:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:44.617 05:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:44.617 05:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:44.617 05:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:44.617 05:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:34:44.617 05:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:44.617 05:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:44.617 05:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:44.617 05:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:44.617 05:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:34:44.617 05:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:44.617 05:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:34:44.617 05:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:34:44.617 05:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:34:44.617 05:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:34:44.617 05:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:44.617 05:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
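The secrets being cycled through all follow the DH-HMAC-CHAP secret representation DHHC-1:tt:<base64>:, where tt is, per the NVMe over Fabrics specification, the optional transformation hash applied to the secret (00 = none, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512), and the base64 payload carries the secret followed by a CRC-32. A quick length check on one of the keys above bears that out:

  # 48 base64 characters decode to 36 bytes: a 32-byte secret plus a 4-byte CRC-32
  echo OTY0NmVjNjY5OTE4M2JlYjQwYmNjODQ1OWNhZDJhMDjrZMvV | base64 -d | wc -c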
00:34:45.184 nvme0n1
00:34:45.185 05:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:45.185 05:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:45.185 05:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:45.185 05:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:45.185 05:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:45.185 05:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:45.185 05:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:45.185 05:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:45.185 05:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:45.185 05:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:45.185 05:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:45.185 05:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:45.185 05:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2
00:34:45.185 05:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:45.185 05:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:34:45.185 05:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:34:45.185 05:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:34:45.185 05:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTY0NmVjNjY5OTE4M2JlYjQwYmNjODQ1OWNhZDJhMDjrZMvV:
00:34:45.185 05:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjJkNmY0MjBlOTQyMDIyMGNkODBjZTBhMTk5YTU2MTfJEidO:
00:34:45.185 05:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:34:45.185 05:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:34:45.185 05:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTY0NmVjNjY5OTE4M2JlYjQwYmNjODQ1OWNhZDJhMDjrZMvV:
00:34:45.185 05:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjJkNmY0MjBlOTQyMDIyMGNkODBjZTBhMTk5YTU2MTfJEidO: ]]
00:34:45.185 05:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjJkNmY0MjBlOTQyMDIyMGNkODBjZTBhMTk5YTU2MTfJEidO:
00:34:45.185 05:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2
00:34:45.185 05:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:45.185 05:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:34:45.185 05:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:34:45.185 05:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:34:45.185 05:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:45.185 05:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:34:45.185 05:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:45.185 05:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:45.185 05:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:45.185 05:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:45.185 05:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:34:45.185 05:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:45.185 05:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:45.185 05:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:45.185 05:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:45.185 05:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:34:45.185 05:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:45.185 05:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:34:45.185 05:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:34:45.185 05:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:34:45.185 05:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:34:45.185 05:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:45.185 05:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:45.752 nvme0n1
00:34:45.752 05:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:45.752 05:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:45.752 05:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:45.752 05:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:45.752 05:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:45.752 05:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:45.752 05:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:45.752 05:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:45.752 05:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:45.752 05:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:45.752 05:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:45.752 05:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:45.752 05:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3
00:34:45.752 05:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:45.752 05:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:34:45.752 05:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:34:45.752 05:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:34:45.752 05:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzE4OGU0YjJiYThjZjJlYzFiNzAwMWU0N2IzYmM4ZjRjN2YwMWE3MjNjYTYzYjViWYrIfw==:
00:34:45.752 05:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDA0MGQ1OWI0ZGIwMGQ5ZTlkMDY3MGJkODQyNzRiZmM//x32:
00:34:45.752 05:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:34:45.752 05:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:34:45.752 05:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzE4OGU0YjJiYThjZjJlYzFiNzAwMWU0N2IzYmM4ZjRjN2YwMWE3MjNjYTYzYjViWYrIfw==:
00:34:45.752 05:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDA0MGQ1OWI0ZGIwMGQ5ZTlkMDY3MGJkODQyNzRiZmM//x32: ]]
00:34:45.752 05:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDA0MGQ1OWI0ZGIwMGQ5ZTlkMDY3MGJkODQyNzRiZmM//x32:
00:34:45.752 05:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3
00:34:45.752 05:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:45.752 05:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:34:45.752 05:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:34:45.752 05:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:34:45.752 05:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:45.752 05:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:34:45.752 05:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:45.752 05:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:45.752 05:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:45.752 05:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:45.752 05:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:34:45.752 05:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:45.752 05:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:45.752 05:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:45.752 05:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:45.752 05:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:34:45.752 05:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:45.752 05:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:34:45.752 05:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:34:45.752 05:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:34:45.752 05:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:34:45.752 05:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:45.752 05:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:46.319 nvme0n1
00:34:46.319 05:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:46.319 05:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:46.319 05:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:46.319 05:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:46.319 05:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:46.319 05:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:46.319 05:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:46.319 05:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:46.319 05:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:46.319 05:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:46.579 05:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:46.579 05:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:46.579 05:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4
00:34:46.579 05:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:46.579 05:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:34:46.579 05:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:34:46.579 05:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:34:46.579 05:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTI2ODA1ZjMyYjRlNjIzODg5NGYyMjk4MDcxNmFkYzAwOTVjM2RkODE1MTQ4ZmQyN2Q1Zjg0NDcwNGQyZmI0Yf2GzI8=:
00:34:46.579 05:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:34:46.579 05:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:34:46.579 05:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:34:46.579 05:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTI2ODA1ZjMyYjRlNjIzODg5NGYyMjk4MDcxNmFkYzAwOTVjM2RkODE1MTQ4ZmQyN2Q1Zjg0NDcwNGQyZmI0Yf2GzI8=:
00:34:46.579 05:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:34:46.579 05:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4
00:34:46.579 05:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:46.579 05:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:34:46.579 05:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:34:46.579 05:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:34:46.579 05:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:46.579 05:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:34:46.579 05:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:46.579 05:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:46.579 05:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:46.579 05:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:46.579 05:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:34:46.579 05:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:46.579 05:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:46.579 05:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:46.579 05:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:46.579 05:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:34:46.579 05:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:46.579 05:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:34:46.579 05:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:34:46.579 05:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:34:46.579 05:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:34:46.579 05:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:46.579 05:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:47.149 nvme0n1
00:34:47.149 05:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:47.149 05:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:47.149 05:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:47.149 05:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:47.149 05:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:47.149 05:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:47.149 05:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:47.149 05:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:47.149 05:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:47.149 05:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:47.149 05:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
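With every digest/DH-group/key combination connected and torn down cleanly, the script switches to negative testing: the target keeps a sha256/ffdhe2048 secret for key index 1 while the host deliberately attaches with missing or wrong keys, each attempt wrapped in NOT so that failure is the passing outcome. NOT comes from autotest_common.sh; a minimal stand-in, for illustration only:

  # succeeds (exit 0) only when the wrapped command fails
  NOT() { ! "$@"; }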
00:34:47.149 05:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1
00:34:47.149 05:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:47.149 05:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:34:47.149 05:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:34:47.149 05:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:34:47.149 05:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGNlMzVmNzVhNDQ5ZWE3ZTQxZjlmYzljYTZjNzMxZmI0MmZlMDFhNWFjNGY3ZThiZxhlrg==:
00:34:47.149 05:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmVjM2FhODczM2JiZTkwZTc1MDEzNjcyZDg0NThkNDkwNmI0NjdlMmRmMjgzOGEwB4HHQQ==:
00:34:47.149 05:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:34:47.149 05:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:34:47.149 05:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGNlMzVmNzVhNDQ5ZWE3ZTQxZjlmYzljYTZjNzMxZmI0MmZlMDFhNWFjNGY3ZThiZxhlrg==:
00:34:47.149 05:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmVjM2FhODczM2JiZTkwZTc1MDEzNjcyZDg0NThkNDkwNmI0NjdlMmRmMjgzOGEwB4HHQQ==: ]]
00:34:47.149 05:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmVjM2FhODczM2JiZTkwZTc1MDEzNjcyZDg0NThkNDkwNmI0NjdlMmRmMjgzOGEwB4HHQQ==:
00:34:47.149 05:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:34:47.149 05:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:47.149 05:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:47.149 05:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:47.149 05:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip
00:34:47.149 05:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:34:47.149 05:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:47.149 05:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:47.149 05:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:47.149 05:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:47.149 05:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:34:47.149 05:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:47.149 05:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:34:47.149 05:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:34:47.149 05:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:34:47.149 05:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0
00:34:47.149 05:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0
00:34:47.149 05:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0
00:34:47.149 05:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:34:47.149 05:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:34:47.149 05:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:34:47.149 05:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:34:47.149 05:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0
00:34:47.149 05:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:47.149 05:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:47.149 request:
00:34:47.149 {
00:34:47.149 "name": "nvme0",
00:34:47.149 "trtype": "tcp",
00:34:47.149 "traddr": "10.0.0.1",
00:34:47.149 "adrfam": "ipv4",
00:34:47.149 "trsvcid": "4420",
00:34:47.149 "subnqn": "nqn.2024-02.io.spdk:cnode0",
00:34:47.149 "hostnqn": "nqn.2024-02.io.spdk:host0",
00:34:47.149 "prchk_reftag": false,
00:34:47.150 "prchk_guard": false,
00:34:47.150 "hdgst": false,
00:34:47.150 "ddgst": false,
00:34:47.150 "allow_unrecognized_csi": false,
00:34:47.150 "method": "bdev_nvme_attach_controller",
00:34:47.150 "req_id": 1
00:34:47.150 }
00:34:47.150 Got JSON-RPC error response
00:34:47.150 response:
00:34:47.150 {
00:34:47.150 "code": -5,
00:34:47.150 "message": "Input/output error"
00:34:47.150 }
00:34:47.150 05:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:34:47.150 05:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1
00:34:47.150 05:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:34:47.150 05:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:34:47.150 05:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:34:47.150 05:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length
00:34:47.150 05:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers
00:34:47.150 05:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:47.150 05:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:47.150 05:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:47.150 05:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 ))
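The dump above is the expected failure: with no --dhchap-key supplied while the target demands DH-HMAC-CHAP, the attach aborts and the JSON-RPC layer surfaces it as code -5 (Input/output error) rather than a dedicated authentication error. The @114 check then confirms the failed attach left no controller behind:

  # assert that no controller survived the rejected attach (host/auth.sh@114)
  (( $(rpc_cmd bdev_nvme_get_controllers | jq length) == 0 ))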
00:34:47.150 05:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip
00:34:47.150 05:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:34:47.150 05:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:47.150 05:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:47.150 05:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:47.150 05:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:47.150 05:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:34:47.150 05:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:47.150 05:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:34:47.150 05:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:34:47.150 05:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:34:47.150 05:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:34:47.150 05:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0
00:34:47.150 05:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:34:47.150 05:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:34:47.150 05:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:34:47.150 05:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:34:47.150 05:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:34:47.150 05:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:34:47.150 05:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:47.150 05:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:47.150 request:
00:34:47.150 {
00:34:47.150 "name": "nvme0",
00:34:47.150 "trtype": "tcp",
00:34:47.150 "traddr": "10.0.0.1",
00:34:47.150 "adrfam": "ipv4",
00:34:47.150 "trsvcid": "4420",
00:34:47.150 "subnqn": "nqn.2024-02.io.spdk:cnode0",
00:34:47.150 "hostnqn": "nqn.2024-02.io.spdk:host0",
00:34:47.150 "prchk_reftag": false,
00:34:47.150 "prchk_guard": false,
00:34:47.150 "hdgst": false,
00:34:47.150 "ddgst": false,
00:34:47.150 "dhchap_key": "key2",
00:34:47.150 "allow_unrecognized_csi": false,
00:34:47.150 "method": "bdev_nvme_attach_controller",
00:34:47.150 "req_id": 1
00:34:47.150 }
00:34:47.150 Got JSON-RPC error response
00:34:47.150 response:
00:34:47.150 {
00:34:47.150 "code": -5,
00:34:47.150 "message": "Input/output error"
00:34:47.150 }
00:34:47.150 05:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:34:47.150 05:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1
00:34:47.150 05:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:34:47.150 05:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:34:47.150 05:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:34:47.410 05:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers
00:34:47.410 05:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length
00:34:47.410 05:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:47.410 05:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:47.410 05:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:47.410 05:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 ))
00:34:47.410 05:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip
00:34:47.410 05:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:34:47.410 05:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:47.410 05:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:47.410 05:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:47.410 05:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:47.410 05:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:34:47.410 05:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:47.410 05:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:34:47.410 05:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:34:47.410 05:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:34:47.410 05:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:34:47.410 05:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0
00:34:47.410 05:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:34:47.410 05:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:34:47.410 05:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:34:47.410 05:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:34:47.410 05:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:34:47.410 05:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:34:47.410 05:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:47.410 05:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:47.410 request:
00:34:47.410 {
00:34:47.410 "name": "nvme0",
00:34:47.410 "trtype": "tcp",
00:34:47.410 "traddr": "10.0.0.1",
00:34:47.410 "adrfam": "ipv4",
00:34:47.410 "trsvcid": "4420",
00:34:47.410 "subnqn": "nqn.2024-02.io.spdk:cnode0",
00:34:47.410 "hostnqn": "nqn.2024-02.io.spdk:host0",
00:34:47.410 "prchk_reftag": false,
00:34:47.410 "prchk_guard": false,
00:34:47.410 "hdgst": false,
00:34:47.410 "ddgst": false,
00:34:47.410 "dhchap_key": "key1",
00:34:47.410 "dhchap_ctrlr_key": "ckey2",
00:34:47.410 "allow_unrecognized_csi": false,
00:34:47.410 "method": "bdev_nvme_attach_controller",
00:34:47.410 "req_id": 1
00:34:47.410 }
00:34:47.410 Got JSON-RPC error response
00:34:47.410 response:
00:34:47.410 {
00:34:47.410 "code": -5,
00:34:47.410 "message": "Input/output error"
00:34:47.410 }
00:34:47.410 05:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:34:47.410 05:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1
00:34:47.410 05:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:34:47.410 05:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:34:47.410 05:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:34:47.410 05:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip
00:34:47.410 05:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:34:47.410 05:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:47.410 05:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:47.410 05:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:47.410 05:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:47.410 05:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:34:47.410 05:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:47.410 05:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:34:47.410 05:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:34:47.410 05:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:34:47.410 05:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:34:47.410 05:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:47.410 05:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:47.670 nvme0n1
00:34:47.670 05:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2
00:34:47.670 05:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:47.670 05:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:34:47.670 05:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:34:47.670 05:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:34:47.670 05:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTY0NmVjNjY5OTE4M2JlYjQwYmNjODQ1OWNhZDJhMDjrZMvV:
00:34:47.670 05:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjJkNmY0MjBlOTQyMDIyMGNkODBjZTBhMTk5YTU2MTfJEidO:
00:34:47.670 05:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:34:47.670 05:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:34:47.670 05:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTY0NmVjNjY5OTE4M2JlYjQwYmNjODQ1OWNhZDJhMDjrZMvV:
00:34:47.670 05:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjJkNmY0MjBlOTQyMDIyMGNkODBjZTBhMTk5YTU2MTfJEidO: ]]
00:34:47.670 05:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjJkNmY0MjBlOTQyMDIyMGNkODBjZTBhMTk5YTU2MTfJEidO:
00:34:47.670 05:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:34:47.670 05:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:47.670 05:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:47.670 05:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:47.670 05:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers
00:34:47.670 05:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name'
00:34:47.670 05:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:47.670 05:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:47.670 05:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:47.670 05:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:47.670 05:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:34:47.670 05:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0
00:34:47.670 05:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:34:47.670 05:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:34:47.670 05:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:34:47.670 05:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:34:47.670 05:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:34:47.670 05:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:34:47.670 05:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:47.670 05:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:47.670 request:
00:34:47.670 {
00:34:47.670 "name": "nvme0",
00:34:47.670 "dhchap_key": "key1",
00:34:47.670 "dhchap_ctrlr_key": "ckey2",
00:34:47.670 "method": "bdev_nvme_set_keys",
00:34:47.670 "req_id": 1
00:34:47.670 }
00:34:47.670 Got JSON-RPC error response
00:34:47.670 response:
00:34:47.670 {
00:34:47.670 "code": -13,
00:34:47.670 "message": "Permission denied"
00:34:47.670 }
00:34:47.670 05:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:34:47.670 05:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1
00:34:47.670 05:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:34:47.670 05:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:34:47.670 05:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:34:47.670 05:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers
00:34:47.670 05:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length
00:34:47.670 05:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:47.670 05:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:47.670 05:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:47.670 05:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 ))
00:34:47.670 05:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s
00:34:49.049 05:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers
00:34:49.049 05:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length
00:34:49.049 05:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:49.049 05:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:49.049 05:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:49.049 05:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 ))
00:34:49.049 05:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s
00:34:49.987 05:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers
00:34:49.987 05:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length
00:34:49.987 05:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:49.987 05:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:49.987 05:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:49.987 05:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 ))
DHHC-1:02:MmVjM2FhODczM2JiZTkwZTc1MDEzNjcyZDg0NThkNDkwNmI0NjdlMmRmMjgzOGEwB4HHQQ==: 00:34:49.987 05:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:34:49.987 05:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:49.987 05:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:49.987 05:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:49.987 05:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:49.987 05:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:49.987 05:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:49.987 05:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:49.987 05:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:49.987 05:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:49.987 05:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:49.987 05:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:34:49.987 05:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.987 05:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.987 nvme0n1 00:34:49.987 05:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.987 05:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:34:49.987 05:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:49.987 05:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:49.987 05:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:49.987 05:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:49.987 05:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTY0NmVjNjY5OTE4M2JlYjQwYmNjODQ1OWNhZDJhMDjrZMvV: 00:34:49.987 05:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjJkNmY0MjBlOTQyMDIyMGNkODBjZTBhMTk5YTU2MTfJEidO: 00:34:49.987 05:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:49.987 05:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:49.987 05:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTY0NmVjNjY5OTE4M2JlYjQwYmNjODQ1OWNhZDJhMDjrZMvV: 00:34:49.987 05:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjJkNmY0MjBlOTQyMDIyMGNkODBjZTBhMTk5YTU2MTfJEidO: ]] 00:34:49.987 05:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjJkNmY0MjBlOTQyMDIyMGNkODBjZTBhMTk5YTU2MTfJEidO: 00:34:49.987 05:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:34:49.987 05:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@652 -- # local es=0 00:34:49.988 05:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:34:49.988 05:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:34:49.988 05:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:49.988 05:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:34:49.988 05:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:49.988 05:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:34:49.988 05:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.988 05:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.988 request: 00:34:49.988 { 00:34:49.988 "name": "nvme0", 00:34:49.988 "dhchap_key": "key2", 00:34:49.988 "dhchap_ctrlr_key": "ckey1", 00:34:49.988 "method": "bdev_nvme_set_keys", 00:34:49.988 "req_id": 1 00:34:49.988 } 00:34:49.988 Got JSON-RPC error response 00:34:49.988 response: 00:34:49.988 { 00:34:49.988 "code": -13, 00:34:49.988 "message": "Permission denied" 00:34:49.988 } 00:34:49.988 05:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:34:49.988 05:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:34:49.988 05:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:49.988 05:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:49.988 05:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:49.988 05:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:34:49.988 05:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:34:49.988 05:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.988 05:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.988 05:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.247 05:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:34:50.247 05:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:34:51.183 05:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:34:51.183 05:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:34:51.183 05:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.183 05:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.183 05:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.183 05:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:34:51.183 05:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:34:51.183 05:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:34:51.183 05:50:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:34:51.183 05:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:51.183 05:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:34:51.183 05:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:51.183 05:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:34:51.183 05:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:51.183 05:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:51.183 rmmod nvme_tcp 00:34:51.183 rmmod nvme_fabrics 00:34:51.183 05:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:51.183 05:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:34:51.183 05:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:34:51.183 05:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 513422 ']' 00:34:51.183 05:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 513422 00:34:51.183 05:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 513422 ']' 00:34:51.183 05:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 513422 00:34:51.183 05:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:34:51.183 05:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:51.183 05:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 513422 00:34:51.183 05:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:51.183 05:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:51.183 05:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 513422' 00:34:51.183 killing process with pid 513422 00:34:51.183 05:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 513422 00:34:51.184 05:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 513422 00:34:51.443 05:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:51.443 05:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:51.443 05:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:51.443 05:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:34:51.443 05:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:34:51.443 05:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:51.443 05:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:34:51.443 05:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:51.443 05:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:51.443 05:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:51.443 05:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:34:51.443 05:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:53.981 05:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:53.981 05:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:34:53.981 05:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:34:53.981 05:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:34:53.981 05:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:34:53.981 05:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:34:53.981 05:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:53.981 05:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:34:53.981 05:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:34:53.982 05:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:53.982 05:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:34:53.982 05:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:34:53.982 05:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:56.521 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:56.521 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:56.521 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:56.521 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:56.521 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:56.521 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:56.521 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:34:56.521 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:56.521 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:56.521 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:56.521 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:56.521 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:56.521 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:56.521 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:56.521 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:34:56.521 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:57.460 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:34:57.460 05:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.HMA /tmp/spdk.key-null.fbK /tmp/spdk.key-sha256.4cf /tmp/spdk.key-sha384.vvy /tmp/spdk.key-sha512.Fx3 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:34:57.460 05:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:59.997 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:34:59.998 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:34:59.998 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 
00:34:59.998 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:34:59.998 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:34:59.998 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:34:59.998 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:34:59.998 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:34:59.998 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:34:59.998 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:34:59.998 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:34:59.998 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:34:59.998 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:34:59.998 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:34:59.998 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:34:59.998 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:34:59.998 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:35:00.257 00:35:00.257 real 0m55.745s 00:35:00.257 user 0m50.641s 00:35:00.257 sys 0m12.503s 00:35:00.257 05:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:00.257 05:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.257 ************************************ 00:35:00.257 END TEST nvmf_auth_host 00:35:00.257 ************************************ 00:35:00.257 05:51:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:35:00.257 05:51:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:35:00.257 05:51:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:00.257 05:51:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:00.257 05:51:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.257 ************************************ 00:35:00.257 START TEST nvmf_digest 00:35:00.257 ************************************ 00:35:00.257 05:51:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:35:00.257 * Looking for test storage... 
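Before the digest suite takes over, the auth suite above unwound the kernel NVMe-oF target purely through configfs, in strict child-before-parent order, and then unloaded the nvmet modules; the storage probe for the digest suite resumes immediately below. A condensed sketch of that teardown order, using the NQNs from the records above (the target of the bare "echo 0" is not shown in the log, so the enable-attribute path here is an assumption):

  # kernel nvmet teardown, children first (configfs refuses to rmdir busy nodes)
  cfg=/sys/kernel/config/nvmet
  rm "$cfg/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0"
  rmdir "$cfg/hosts/nqn.2024-02.io.spdk:host0"
  # assumption: the log's bare 'echo 0' disables the namespace before removal
  echo 0 > "$cfg/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1/enable"
  rm -f "$cfg/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0"
  rmdir "$cfg/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1"
  rmdir "$cfg/ports/1"
  rmdir "$cfg/subsystems/nqn.2024-02.io.spdk:cnode0"
  modprobe -r nvmet_tcp nvmet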
00:35:00.517 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:00.517 05:51:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:35:00.517 05:51:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lcov --version 00:35:00.517 05:51:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:35:00.517 05:51:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:35:00.517 05:51:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:00.517 05:51:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:00.517 05:51:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:00.517 05:51:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:35:00.517 05:51:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:35:00.517 05:51:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:35:00.518 05:51:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:35:00.518 05:51:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:35:00.518 05:51:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:35:00.518 05:51:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:35:00.518 05:51:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:00.518 05:51:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:35:00.518 05:51:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:35:00.518 05:51:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:00.518 05:51:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:00.518 05:51:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:35:00.518 05:51:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:35:00.518 05:51:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:00.518 05:51:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:35:00.518 05:51:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:35:00.518 05:51:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:35:00.518 05:51:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:35:00.518 05:51:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:00.518 05:51:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:35:00.518 05:51:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:35:00.518 05:51:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:00.518 05:51:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:00.518 05:51:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:35:00.518 05:51:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:00.518 05:51:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:35:00.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:00.518 --rc genhtml_branch_coverage=1 00:35:00.518 --rc genhtml_function_coverage=1 00:35:00.518 --rc genhtml_legend=1 00:35:00.518 --rc geninfo_all_blocks=1 00:35:00.518 --rc geninfo_unexecuted_blocks=1 00:35:00.518 00:35:00.518 ' 00:35:00.518 05:51:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:35:00.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:00.518 --rc genhtml_branch_coverage=1 00:35:00.518 --rc genhtml_function_coverage=1 00:35:00.518 --rc genhtml_legend=1 00:35:00.518 --rc geninfo_all_blocks=1 00:35:00.518 --rc geninfo_unexecuted_blocks=1 00:35:00.518 00:35:00.518 ' 00:35:00.518 05:51:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:35:00.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:00.518 --rc genhtml_branch_coverage=1 00:35:00.518 --rc genhtml_function_coverage=1 00:35:00.518 --rc genhtml_legend=1 00:35:00.518 --rc geninfo_all_blocks=1 00:35:00.518 --rc geninfo_unexecuted_blocks=1 00:35:00.518 00:35:00.518 ' 00:35:00.518 05:51:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:35:00.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:00.518 --rc genhtml_branch_coverage=1 00:35:00.518 --rc genhtml_function_coverage=1 00:35:00.518 --rc genhtml_legend=1 00:35:00.518 --rc geninfo_all_blocks=1 00:35:00.518 --rc geninfo_unexecuted_blocks=1 00:35:00.518 00:35:00.518 ' 00:35:00.518 05:51:00 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:00.518 05:51:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:35:00.518 05:51:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:00.518 05:51:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:00.518 
05:51:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:00.518 05:51:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:00.518 05:51:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:00.518 05:51:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:00.518 05:51:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:00.518 05:51:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:00.518 05:51:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:00.518 05:51:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:00.518 05:51:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:35:00.518 05:51:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:35:00.518 05:51:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:00.518 05:51:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:00.518 05:51:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:00.518 05:51:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:00.518 05:51:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:00.518 05:51:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:35:00.518 05:51:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:00.518 05:51:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:00.518 05:51:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:00.518 05:51:00 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:00.518 05:51:00 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:00.518 05:51:00 nvmf_tcp.nvmf_host.nvmf_digest -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:00.518 05:51:00 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:35:00.518 05:51:00 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:00.518 05:51:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:35:00.518 05:51:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:00.518 05:51:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:00.518 05:51:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:00.518 05:51:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:00.518 05:51:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:00.518 05:51:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:00.518 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:00.518 05:51:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:00.518 05:51:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:00.518 05:51:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:00.518 05:51:00 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:35:00.518 05:51:00 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:35:00.518 05:51:00 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:35:00.518 05:51:00 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:35:00.518 05:51:00 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:35:00.518 05:51:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:00.518 05:51:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:00.518 05:51:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:00.518 05:51:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:00.518 05:51:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:00.518 05:51:00 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:00.518 05:51:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:00.518 05:51:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:00.518 05:51:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:00.519 05:51:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:00.519 05:51:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:35:00.519 05:51:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:35:07.094 05:51:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:07.094 05:51:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:35:07.094 05:51:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:07.094 05:51:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:07.094 05:51:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:07.094 05:51:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:07.094 05:51:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:07.094 05:51:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:35:07.094 05:51:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:07.094 05:51:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:35:07.094 05:51:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:35:07.094 05:51:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:35:07.094 05:51:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:35:07.094 05:51:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:35:07.094 05:51:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:35:07.094 05:51:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:07.094 05:51:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:07.094 05:51:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:07.094 05:51:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:07.094 05:51:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:07.094 05:51:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:07.094 05:51:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:07.094 05:51:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:07.094 05:51:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:07.094 05:51:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:07.094 05:51:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:07.094 
05:51:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:07.094 05:51:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:07.094 05:51:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:07.094 05:51:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:07.094 05:51:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:07.094 05:51:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:07.094 05:51:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:07.094 05:51:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:07.094 05:51:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:35:07.094 Found 0000:af:00.0 (0x8086 - 0x159b) 00:35:07.094 05:51:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:07.094 05:51:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:07.094 05:51:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:07.094 05:51:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:07.094 05:51:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:07.094 05:51:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:07.094 05:51:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:35:07.094 Found 0000:af:00.1 (0x8086 - 0x159b) 00:35:07.094 05:51:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:07.094 05:51:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:07.094 05:51:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:07.094 05:51:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:07.094 05:51:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:07.094 05:51:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:07.094 05:51:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:07.094 05:51:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:07.094 05:51:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:07.094 05:51:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:07.094 05:51:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:07.094 05:51:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:07.094 05:51:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:07.094 05:51:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:07.094 05:51:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:07.094 05:51:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:35:07.094 Found net devices under 0000:af:00.0: cvl_0_0 
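The NIC discovery above turns each allow-listed E810 PCI function into a kernel net device by globbing sysfs and keeping only interfaces that are up; the identical probe for 0000:af:00.1 follows immediately below. A minimal standalone sketch of the mapping, with the PCI address as an illustrative example:

  # resolve a PCI function to its net device via sysfs
  pci=0000:af:00.0                                   # example address
  pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
  for dev in "${pci_net_devs[@]##*/}"; do
      # keep only interfaces whose operational state is up
      [[ $(<"/sys/class/net/$dev/operstate") == up ]] && echo "Found $dev under $pci"
  done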
00:35:07.094 05:51:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:07.094 05:51:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:07.094 05:51:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:07.094 05:51:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:07.094 05:51:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:07.094 05:51:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:07.094 05:51:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:07.094 05:51:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:07.094 05:51:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:35:07.094 Found net devices under 0000:af:00.1: cvl_0_1 00:35:07.094 05:51:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:07.094 05:51:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:07.094 05:51:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # is_hw=yes 00:35:07.094 05:51:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:07.094 05:51:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:07.094 05:51:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:07.094 05:51:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:07.094 05:51:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:07.094 05:51:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:07.095 05:51:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:07.095 05:51:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:07.095 05:51:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:07.095 05:51:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:07.095 05:51:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:07.095 05:51:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:07.095 05:51:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:07.095 05:51:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:07.095 05:51:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:07.095 05:51:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:07.095 05:51:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:07.095 05:51:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:07.095 05:51:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:07.095 05:51:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:35:07.095 05:51:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:07.095 05:51:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:07.095 05:51:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:07.095 05:51:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:07.095 05:51:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:07.095 05:51:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:07.095 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:07.095 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.396 ms 00:35:07.095 00:35:07.095 --- 10.0.0.2 ping statistics --- 00:35:07.095 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:07.095 rtt min/avg/max/mdev = 0.396/0.396/0.396/0.000 ms 00:35:07.095 05:51:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:07.095 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:07.095 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.205 ms 00:35:07.095 00:35:07.095 --- 10.0.0.1 ping statistics --- 00:35:07.095 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:07.095 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:35:07.095 05:51:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:07.095 05:51:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:35:07.095 05:51:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:07.095 05:51:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:07.095 05:51:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:07.095 05:51:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:07.095 05:51:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:07.095 05:51:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:07.095 05:51:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:07.095 05:51:06 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:35:07.095 05:51:06 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:35:07.095 05:51:06 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:35:07.095 05:51:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:07.095 05:51:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:07.095 05:51:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:35:07.095 ************************************ 00:35:07.095 START TEST nvmf_digest_clean 00:35:07.095 ************************************ 00:35:07.095 05:51:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:35:07.095 05:51:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@120 -- # local dsa_initiator 00:35:07.095 05:51:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:35:07.095 05:51:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:35:07.095 05:51:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:35:07.095 05:51:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:35:07.095 05:51:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:07.095 05:51:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:07.095 05:51:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:07.095 05:51:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=527429 00:35:07.095 05:51:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 527429 00:35:07.095 05:51:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:35:07.095 05:51:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 527429 ']' 00:35:07.095 05:51:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:07.095 05:51:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:07.095 05:51:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:07.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:07.095 05:51:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:07.095 05:51:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:07.095 [2024-12-13 05:51:06.402063] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:35:07.095 [2024-12-13 05:51:06.402109] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:07.095 [2024-12-13 05:51:06.481418] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:07.095 [2024-12-13 05:51:06.502943] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:07.095 [2024-12-13 05:51:06.502978] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:07.095 [2024-12-13 05:51:06.502985] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:07.095 [2024-12-13 05:51:06.502991] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:07.095 [2024-12-13 05:51:06.502996] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
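nvmfappstart launches the target inside the namespace with --wait-for-rpc, so the app brings up EAL and its reactor but holds off subsystem initialization until explicitly released; waitforlisten then polls the app's RPC socket, and the remaining startup notices continue on the next record. A hedged sketch of that start-and-wait handshake, with repository paths abbreviated and the polling loop purely illustrative:

  # start the target paused at the RPC barrier, then wait for its socket
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
  nvmfpid=$!
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.2   # keep polling until the UNIX-domain RPC socket answers
  done
  ./scripts/rpc.py framework_start_init   # release the app past --wait-for-rpc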
00:35:07.095 [2024-12-13 05:51:06.503528] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:35:07.095 05:51:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:07.095 05:51:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:35:07.095 05:51:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:07.095 05:51:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:07.095 05:51:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:07.095 05:51:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:07.095 05:51:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:35:07.095 05:51:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:35:07.095 05:51:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:35:07.095 05:51:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.095 05:51:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:07.095 null0 00:35:07.095 [2024-12-13 05:51:06.683587] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:07.095 [2024-12-13 05:51:06.707791] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:07.095 05:51:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.095 05:51:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:35:07.095 05:51:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:35:07.095 05:51:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:35:07.095 05:51:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:35:07.095 05:51:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:35:07.095 05:51:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:35:07.095 05:51:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:35:07.095 05:51:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=527496 00:35:07.095 05:51:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 527496 /var/tmp/bperf.sock 00:35:07.095 05:51:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 527496 ']' 00:35:07.095 05:51:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:07.095 05:51:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:07.095 05:51:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:35:07.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:07.095 05:51:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:07.095 05:51:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:35:07.095 05:51:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:07.095 [2024-12-13 05:51:06.760279] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:35:07.096 [2024-12-13 05:51:06.760321] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid527496 ] 00:35:07.096 [2024-12-13 05:51:06.835638] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:07.096 [2024-12-13 05:51:06.858895] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:35:07.096 05:51:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:07.096 05:51:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:35:07.096 05:51:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:35:07.096 05:51:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:35:07.096 05:51:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:07.355 05:51:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:07.355 05:51:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:07.614 nvme0n1 00:35:07.614 05:51:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:35:07.614 05:51:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:07.873 Running I/O for 2 seconds... 
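With the target's TCP transport, null0 bdev, and 10.0.0.2:4420 listener in place (previous records), bdevperf attaches with --ddgst, which enables the NVMe/TCP data digest so every data PDU carries a CRC32C checksum; those are exactly the operations the accel statistics checked later will count. perform_tests then kicks off the preconfigured 4 KiB randread workload over the bperf RPC socket. The two driving commands, condensed from the records above with paths abbreviated:

  # attach with data digest enabled, then trigger the configured workload
  rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  bdevperf.py -s /var/tmp/bperf.sock perform_tests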
00:35:09.749 25652.00 IOPS, 100.20 MiB/s [2024-12-13T04:51:09.764Z] 25537.00 IOPS, 99.75 MiB/s 00:35:09.749 Latency(us) 00:35:09.749 [2024-12-13T04:51:09.764Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:09.749 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:35:09.749 nvme0n1 : 2.00 25545.15 99.79 0.00 0.00 5005.93 2574.63 11546.82 00:35:09.749 [2024-12-13T04:51:09.764Z] =================================================================================================================== 00:35:09.749 [2024-12-13T04:51:09.764Z] Total : 25545.15 99.79 0.00 0.00 5005.93 2574.63 11546.82 00:35:09.749 { 00:35:09.749 "results": [ 00:35:09.749 { 00:35:09.749 "job": "nvme0n1", 00:35:09.749 "core_mask": "0x2", 00:35:09.749 "workload": "randread", 00:35:09.749 "status": "finished", 00:35:09.749 "queue_depth": 128, 00:35:09.749 "io_size": 4096, 00:35:09.749 "runtime": 2.004373, 00:35:09.749 "iops": 25545.14553927837, 00:35:09.749 "mibps": 99.78572476280613, 00:35:09.749 "io_failed": 0, 00:35:09.749 "io_timeout": 0, 00:35:09.749 "avg_latency_us": 5005.925751263437, 00:35:09.749 "min_latency_us": 2574.6285714285714, 00:35:09.749 "max_latency_us": 11546.819047619048 00:35:09.749 } 00:35:09.749 ], 00:35:09.749 "core_count": 1 00:35:09.749 } 00:35:09.749 05:51:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:35:09.749 05:51:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:35:09.749 05:51:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:35:09.749 05:51:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:35:09.749 | select(.opcode=="crc32c") 00:35:09.749 | "\(.module_name) \(.executed)"' 00:35:09.749 05:51:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:35:10.008 05:51:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:35:10.008 05:51:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:35:10.008 05:51:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:35:10.008 05:51:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:35:10.008 05:51:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 527496 00:35:10.008 05:51:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 527496 ']' 00:35:10.008 05:51:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 527496 00:35:10.008 05:51:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:35:10.008 05:51:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:10.008 05:51:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 527496 00:35:10.008 05:51:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:10.008 05:51:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:35:10.008 05:51:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 527496' 00:35:10.008 killing process with pid 527496 00:35:10.008 05:51:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 527496 00:35:10.008 Received shutdown signal, test time was about 2.000000 seconds 00:35:10.008 00:35:10.008 Latency(us) 00:35:10.008 [2024-12-13T04:51:10.023Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:10.008 [2024-12-13T04:51:10.023Z] =================================================================================================================== 00:35:10.008 [2024-12-13T04:51:10.023Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:10.008 05:51:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 527496 00:35:10.268 05:51:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:35:10.268 05:51:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:35:10.268 05:51:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:35:10.268 05:51:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:35:10.268 05:51:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:35:10.268 05:51:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:35:10.268 05:51:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:35:10.268 05:51:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=528322 00:35:10.268 05:51:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 528322 /var/tmp/bperf.sock 00:35:10.268 05:51:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:35:10.268 05:51:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 528322 ']' 00:35:10.268 05:51:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:10.268 05:51:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:10.268 05:51:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:10.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:10.268 05:51:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:10.268 05:51:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:10.268 [2024-12-13 05:51:10.188781] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:35:10.268 [2024-12-13 05:51:10.188830] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid528322 ] 00:35:10.268 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:10.268 Zero copy mechanism will not be used. 00:35:10.268 [2024-12-13 05:51:10.264258] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:10.528 [2024-12-13 05:51:10.286984] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:35:10.528 05:51:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:10.528 05:51:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:35:10.528 05:51:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:35:10.528 05:51:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:35:10.528 05:51:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:10.788 05:51:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:10.788 05:51:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:11.047 nvme0n1 00:35:11.048 05:51:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:35:11.048 05:51:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:11.048 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:11.048 Zero copy mechanism will not be used. 00:35:11.048 Running I/O for 2 seconds... 
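This pass repeats the pattern with 128 KiB reads at queue depth 16. The "greater than zero copy threshold (65536)" notices record that the TCP zero-copy send path is skipped once the I/O size exceeds 64 KiB. The MiB/s columns in these tables follow directly from IOPS and I/O size; a one-line check against the summary row reported below:

    # mibps = iops * io_size / 2^20, using the values from the results JSON below
    awk 'BEGIN { printf "%.2f MiB/s\n", 5625.322544304411 * 131072 / 1048576 }'   # -> 703.17 MiB/s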
00:35:13.364 5502.00 IOPS, 687.75 MiB/s [2024-12-13T04:51:13.379Z] 5622.00 IOPS, 702.75 MiB/s 00:35:13.364 Latency(us) 00:35:13.364 [2024-12-13T04:51:13.379Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:13.364 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:35:13.364 nvme0n1 : 2.00 5625.32 703.17 0.00 0.00 2841.61 647.56 9175.04 00:35:13.364 [2024-12-13T04:51:13.379Z] =================================================================================================================== 00:35:13.364 [2024-12-13T04:51:13.379Z] Total : 5625.32 703.17 0.00 0.00 2841.61 647.56 9175.04 00:35:13.364 { 00:35:13.364 "results": [ 00:35:13.364 { 00:35:13.364 "job": "nvme0n1", 00:35:13.364 "core_mask": "0x2", 00:35:13.364 "workload": "randread", 00:35:13.364 "status": "finished", 00:35:13.364 "queue_depth": 16, 00:35:13.364 "io_size": 131072, 00:35:13.364 "runtime": 2.001663, 00:35:13.364 "iops": 5625.322544304411, 00:35:13.364 "mibps": 703.1653180380514, 00:35:13.364 "io_failed": 0, 00:35:13.364 "io_timeout": 0, 00:35:13.364 "avg_latency_us": 2841.6090941385437, 00:35:13.364 "min_latency_us": 647.5580952380952, 00:35:13.364 "max_latency_us": 9175.04 00:35:13.364 } 00:35:13.364 ], 00:35:13.364 "core_count": 1 00:35:13.364 } 00:35:13.364 05:51:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:35:13.364 05:51:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:35:13.364 05:51:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:35:13.364 05:51:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:35:13.364 | select(.opcode=="crc32c") 00:35:13.364 | "\(.module_name) \(.executed)"' 00:35:13.365 05:51:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:35:13.365 05:51:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:35:13.365 05:51:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:35:13.365 05:51:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:35:13.365 05:51:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:35:13.365 05:51:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 528322 00:35:13.365 05:51:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 528322 ']' 00:35:13.365 05:51:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 528322 00:35:13.365 05:51:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:35:13.365 05:51:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:13.365 05:51:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 528322 00:35:13.365 05:51:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:13.365 05:51:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo 
']' 00:35:13.365 05:51:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 528322' 00:35:13.365 killing process with pid 528322 00:35:13.365 05:51:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 528322 00:35:13.365 Received shutdown signal, test time was about 2.000000 seconds 00:35:13.365 00:35:13.365 Latency(us) 00:35:13.365 [2024-12-13T04:51:13.380Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:13.365 [2024-12-13T04:51:13.380Z] =================================================================================================================== 00:35:13.365 [2024-12-13T04:51:13.380Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:13.365 05:51:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 528322 00:35:13.625 05:51:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:35:13.625 05:51:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:35:13.625 05:51:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:35:13.625 05:51:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:35:13.625 05:51:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:35:13.625 05:51:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:35:13.625 05:51:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:35:13.625 05:51:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=528939 00:35:13.625 05:51:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 528939 /var/tmp/bperf.sock 00:35:13.625 05:51:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:35:13.625 05:51:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 528939 ']' 00:35:13.625 05:51:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:13.625 05:51:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:13.625 05:51:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:13.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:13.625 05:51:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:13.625 05:51:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:13.625 [2024-12-13 05:51:13.469837] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:35:13.625 [2024-12-13 05:51:13.469885] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid528939 ] 00:35:13.625 [2024-12-13 05:51:13.544915] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:13.625 [2024-12-13 05:51:13.567469] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:35:13.625 05:51:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:13.625 05:51:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:35:13.625 05:51:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:35:13.625 05:51:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:35:13.625 05:51:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:13.884 05:51:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:13.884 05:51:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:14.460 nvme0n1 00:35:14.460 05:51:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:35:14.460 05:51:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:14.460 Running I/O for 2 seconds... 
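After each timed run the script reads the accel statistics back over the same socket and keeps only the crc32c counters; the test then asserts that the executed count is non-zero and that the module is "software", i.e. the digests were computed by the software crc32c path rather than offloaded. Condensed from the host/digest.sh lines traced above:

    # Pull crc32c accel stats and verify the software module actually did the work.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
      | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"' \
      | { read -r acc_module acc_executed
          (( acc_executed > 0 )) && [[ "$acc_module" == software ]]; }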
00:35:16.335 28573.00 IOPS, 111.61 MiB/s [2024-12-13T04:51:16.350Z] 28588.50 IOPS, 111.67 MiB/s 00:35:16.335 Latency(us) 00:35:16.335 [2024-12-13T04:51:16.350Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:16.335 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:16.335 nvme0n1 : 2.01 28591.45 111.69 0.00 0.00 4470.51 2122.12 10048.85 00:35:16.335 [2024-12-13T04:51:16.350Z] =================================================================================================================== 00:35:16.335 [2024-12-13T04:51:16.350Z] Total : 28591.45 111.69 0.00 0.00 4470.51 2122.12 10048.85 00:35:16.335 { 00:35:16.335 "results": [ 00:35:16.335 { 00:35:16.335 "job": "nvme0n1", 00:35:16.335 "core_mask": "0x2", 00:35:16.335 "workload": "randwrite", 00:35:16.335 "status": "finished", 00:35:16.335 "queue_depth": 128, 00:35:16.335 "io_size": 4096, 00:35:16.335 "runtime": 2.006474, 00:35:16.335 "iops": 28591.449478039587, 00:35:16.335 "mibps": 111.68534952359214, 00:35:16.335 "io_failed": 0, 00:35:16.335 "io_timeout": 0, 00:35:16.335 "avg_latency_us": 4470.512236189414, 00:35:16.335 "min_latency_us": 2122.118095238095, 00:35:16.335 "max_latency_us": 10048.853333333333 00:35:16.335 } 00:35:16.335 ], 00:35:16.335 "core_count": 1 00:35:16.335 } 00:35:16.594 05:51:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:35:16.594 05:51:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:35:16.594 05:51:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:35:16.594 05:51:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:35:16.595 | select(.opcode=="crc32c") 00:35:16.595 | "\(.module_name) \(.executed)"' 00:35:16.595 05:51:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:35:16.595 05:51:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:35:16.595 05:51:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:35:16.595 05:51:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:35:16.595 05:51:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:35:16.595 05:51:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 528939 00:35:16.595 05:51:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 528939 ']' 00:35:16.595 05:51:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 528939 00:35:16.595 05:51:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:35:16.595 05:51:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:16.595 05:51:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 528939 00:35:16.854 05:51:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:16.854 05:51:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # 
'[' reactor_1 = sudo ']' 00:35:16.854 05:51:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 528939' 00:35:16.854 killing process with pid 528939 00:35:16.854 05:51:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 528939 00:35:16.854 Received shutdown signal, test time was about 2.000000 seconds 00:35:16.854 00:35:16.854 Latency(us) 00:35:16.854 [2024-12-13T04:51:16.869Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:16.854 [2024-12-13T04:51:16.869Z] =================================================================================================================== 00:35:16.854 [2024-12-13T04:51:16.869Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:16.854 05:51:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 528939 00:35:16.854 05:51:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:35:16.854 05:51:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:35:16.854 05:51:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:35:16.854 05:51:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:35:16.854 05:51:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:35:16.854 05:51:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:35:16.854 05:51:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:35:16.854 05:51:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=529471 00:35:16.854 05:51:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 529471 /var/tmp/bperf.sock 00:35:16.854 05:51:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:35:16.854 05:51:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 529471 ']' 00:35:16.854 05:51:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:16.854 05:51:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:16.854 05:51:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:16.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:16.854 05:51:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:16.854 05:51:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:16.854 [2024-12-13 05:51:16.811654] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:35:16.854 [2024-12-13 05:51:16.811701] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid529471 ] 00:35:16.854 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:16.854 Zero copy mechanism will not be used. 00:35:17.113 [2024-12-13 05:51:16.887862] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:17.113 [2024-12-13 05:51:16.910174] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:35:17.113 05:51:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:17.113 05:51:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:35:17.113 05:51:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:35:17.113 05:51:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:35:17.113 05:51:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:17.373 05:51:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:17.373 05:51:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:17.632 nvme0n1 00:35:17.632 05:51:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:35:17.632 05:51:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:17.632 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:17.632 Zero copy mechanism will not be used. 00:35:17.632 Running I/O for 2 seconds... 
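Teardown after every run is the killprocess pattern visible in each trace: confirm the pid variable is set, probe the process with kill -0, read its name back so a sudo wrapper is never signalled by mistake, then kill it and wait for the shutdown summary. Stripped to its essentials:

    # killprocess-style teardown (pattern from the autotest_common.sh lines traced above)
    pid=529471                               # bperfpid recorded when the run started
    [[ -n "$pid" ]] || exit 1                # the '[' -z "$pid" ']' guard
    kill -0 "$pid"                           # fails if the process already exited
    name=$(ps --no-headers -o comm= "$pid")  # reactor_1 for a running bdevperf
    [[ "$name" == sudo ]] || { echo "killing process with pid $pid"; kill "$pid"; }
    wait "$pid"                              # reap it; valid here because the suite started it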
00:35:19.948 7050.00 IOPS, 881.25 MiB/s [2024-12-13T04:51:19.963Z] 7092.00 IOPS, 886.50 MiB/s 00:35:19.948 Latency(us) 00:35:19.948 [2024-12-13T04:51:19.963Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:19.948 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:35:19.948 nvme0n1 : 2.00 7091.29 886.41 0.00 0.00 2252.41 1396.54 8550.89 00:35:19.948 [2024-12-13T04:51:19.963Z] =================================================================================================================== 00:35:19.948 [2024-12-13T04:51:19.963Z] Total : 7091.29 886.41 0.00 0.00 2252.41 1396.54 8550.89 00:35:19.948 { 00:35:19.948 "results": [ 00:35:19.948 { 00:35:19.948 "job": "nvme0n1", 00:35:19.948 "core_mask": "0x2", 00:35:19.948 "workload": "randwrite", 00:35:19.948 "status": "finished", 00:35:19.949 "queue_depth": 16, 00:35:19.949 "io_size": 131072, 00:35:19.949 "runtime": 2.003162, 00:35:19.949 "iops": 7091.288672608606, 00:35:19.949 "mibps": 886.4110840760758, 00:35:19.949 "io_failed": 0, 00:35:19.949 "io_timeout": 0, 00:35:19.949 "avg_latency_us": 2252.4107286166845, 00:35:19.949 "min_latency_us": 1396.5409523809524, 00:35:19.949 "max_latency_us": 8550.887619047619 00:35:19.949 } 00:35:19.949 ], 00:35:19.949 "core_count": 1 00:35:19.949 } 00:35:19.949 05:51:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:35:19.949 05:51:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:35:19.949 05:51:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:35:19.949 05:51:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:35:19.949 | select(.opcode=="crc32c") 00:35:19.949 | "\(.module_name) \(.executed)"' 00:35:19.949 05:51:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:35:19.949 05:51:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:35:19.949 05:51:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:35:19.949 05:51:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:35:19.949 05:51:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:35:19.949 05:51:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 529471 00:35:19.949 05:51:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 529471 ']' 00:35:19.949 05:51:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 529471 00:35:19.949 05:51:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:35:19.949 05:51:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:19.949 05:51:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 529471 00:35:19.949 05:51:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:19.949 05:51:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:35:19.949 05:51:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 529471' 00:35:19.949 killing process with pid 529471 00:35:19.949 05:51:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 529471 00:35:19.949 Received shutdown signal, test time was about 2.000000 seconds 00:35:19.949 00:35:19.949 Latency(us) 00:35:19.949 [2024-12-13T04:51:19.964Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:19.949 [2024-12-13T04:51:19.964Z] =================================================================================================================== 00:35:19.949 [2024-12-13T04:51:19.964Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:19.949 05:51:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 529471 00:35:20.209 05:51:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 527429 00:35:20.209 05:51:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 527429 ']' 00:35:20.209 05:51:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 527429 00:35:20.209 05:51:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:35:20.209 05:51:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:20.209 05:51:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 527429 00:35:20.209 05:51:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:20.209 05:51:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:20.209 05:51:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 527429' 00:35:20.209 killing process with pid 527429 00:35:20.209 05:51:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 527429 00:35:20.209 05:51:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 527429 00:35:20.209 00:35:20.209 real 0m13.838s 00:35:20.209 user 0m26.381s 00:35:20.209 sys 0m4.585s 00:35:20.209 05:51:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:20.209 05:51:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:20.209 ************************************ 00:35:20.209 END TEST nvmf_digest_clean 00:35:20.209 ************************************ 00:35:20.209 05:51:20 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:35:20.209 05:51:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:20.209 05:51:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:20.209 05:51:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:35:20.469 ************************************ 00:35:20.469 START TEST nvmf_digest_error 00:35:20.469 ************************************ 00:35:20.469 05:51:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # 
run_digest_error 00:35:20.469 05:51:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:35:20.469 05:51:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:20.469 05:51:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:20.469 05:51:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:20.469 05:51:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=529963 00:35:20.469 05:51:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 529963 00:35:20.469 05:51:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:35:20.469 05:51:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 529963 ']' 00:35:20.469 05:51:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:20.469 05:51:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:20.469 05:51:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:20.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:20.469 05:51:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:20.469 05:51:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:20.469 [2024-12-13 05:51:20.318047] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:35:20.469 [2024-12-13 05:51:20.318093] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:20.469 [2024-12-13 05:51:20.396095] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:20.469 [2024-12-13 05:51:20.416244] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:20.469 [2024-12-13 05:51:20.416278] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:20.469 [2024-12-13 05:51:20.416285] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:20.469 [2024-12-13 05:51:20.416291] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:20.469 [2024-12-13 05:51:20.416296] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
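Here the error half of the suite diverges from the clean runs: the nvmf target is itself started with --wait-for-rpc so that, before initialization completes, the crc32c opcode can be reassigned from the software module to the "error" accel module, which the test later instructs to corrupt a batch of digest computations. Those corrupted digests are what produce the wall of "data digest error" / "COMMAND TRANSIENT TRANSPORT ERROR (00/22)" completions at the end of this section. The RPCs involved, with names and flags exactly as traced below (rpc_cmd is the suite's wrapper around scripts/rpc.py):

    # Digest-error setup, as traced below:
    rpc_cmd accel_assign_opc -o crc32c -m error                    # route crc32c to the error-injection module
    rpc_cmd accel_error_inject_error -o crc32c -t disable          # injection starts switched off
    rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256   # then corrupt 256 crc32c operations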
00:35:20.469 [2024-12-13 05:51:20.416833] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:35:20.729 05:51:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:20.729 05:51:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:35:20.729 05:51:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:20.729 05:51:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:20.729 05:51:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:20.729 05:51:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:20.729 05:51:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:35:20.729 05:51:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:20.729 05:51:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:20.729 [2024-12-13 05:51:20.529381] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:35:20.729 05:51:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:20.729 05:51:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:35:20.729 05:51:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:35:20.729 05:51:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:20.729 05:51:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:20.729 null0 00:35:20.729 [2024-12-13 05:51:20.617060] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:20.729 [2024-12-13 05:51:20.641251] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:20.729 05:51:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:20.729 05:51:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:35:20.729 05:51:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:35:20.729 05:51:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:35:20.729 05:51:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:35:20.729 05:51:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:35:20.729 05:51:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=530164 00:35:20.729 05:51:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 530164 /var/tmp/bperf.sock 00:35:20.729 05:51:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:35:20.729 05:51:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 530164 ']' 
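Before attaching the controller, the error variant also relaxes the bdev retry policy and enables per-error statistics, presumably so injected digest failures are retried and counted rather than failing the job outright; the call, with flags as traced below:

    # bperf-side options set before bdev_nvme_attach_controller (flags from the trace)
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1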
00:35:20.729 05:51:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:20.729 05:51:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:20.729 05:51:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:20.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:20.729 05:51:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:20.729 05:51:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:20.729 [2024-12-13 05:51:20.693872] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:35:20.729 [2024-12-13 05:51:20.693913] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid530164 ] 00:35:20.989 [2024-12-13 05:51:20.768364] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:20.989 [2024-12-13 05:51:20.790774] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:35:20.989 05:51:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:20.989 05:51:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:35:20.989 05:51:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:20.989 05:51:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:21.248 05:51:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:35:21.248 05:51:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:21.248 05:51:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:21.248 05:51:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:21.248 05:51:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:21.248 05:51:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:21.508 nvme0n1 00:35:21.508 05:51:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:35:21.508 05:51:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:21.508 05:51:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:21.508 
05:51:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:21.508 05:51:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:35:21.508 05:51:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:21.508 Running I/O for 2 seconds... 00:35:21.508 [2024-12-13 05:51:21.492960] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e1990) 00:35:21.508 [2024-12-13 05:51:21.492994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:9505 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.508 [2024-12-13 05:51:21.493004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.508 [2024-12-13 05:51:21.504265] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e1990) 00:35:21.508 [2024-12-13 05:51:21.504286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3592 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.508 [2024-12-13 05:51:21.504295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.508 [2024-12-13 05:51:21.516833] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e1990) 00:35:21.508 [2024-12-13 05:51:21.516853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12593 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.508 [2024-12-13 05:51:21.516861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.769 [2024-12-13 05:51:21.526341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e1990) 00:35:21.769 [2024-12-13 05:51:21.526360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:16244 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.769 [2024-12-13 05:51:21.526369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.769 [2024-12-13 05:51:21.533951] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e1990) 00:35:21.769 [2024-12-13 05:51:21.533970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:18738 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.769 [2024-12-13 05:51:21.533978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.769 [2024-12-13 05:51:21.544485] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e1990) 00:35:21.769 [2024-12-13 05:51:21.544504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:21503 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.769 [2024-12-13 05:51:21.544512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:35:21.769 [2024-12-13 05:51:21.555484] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e1990) 00:35:21.769 [2024-12-13 05:51:21.555504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8959 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.769 [2024-12-13 05:51:21.555512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.769 [2024-12-13 05:51:21.567602] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e1990) 00:35:21.769 [2024-12-13 05:51:21.567622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:15935 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.769 [2024-12-13 05:51:21.567630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.769 [2024-12-13 05:51:21.576413] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e1990) 00:35:21.769 [2024-12-13 05:51:21.576432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:23685 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.769 [2024-12-13 05:51:21.576441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.769 [2024-12-13 05:51:21.587798] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e1990) 00:35:21.769 [2024-12-13 05:51:21.587817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:8686 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.769 [2024-12-13 05:51:21.587826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.769 [2024-12-13 05:51:21.596885] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e1990) 00:35:21.769 [2024-12-13 05:51:21.596903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15530 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.769 [2024-12-13 05:51:21.596911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.769 [2024-12-13 05:51:21.606056] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e1990) 00:35:21.769 [2024-12-13 05:51:21.606075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12376 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.769 [2024-12-13 05:51:21.606083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.769 [2024-12-13 05:51:21.615061] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e1990) 00:35:21.769 [2024-12-13 05:51:21.615079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18236 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.769 [2024-12-13 05:51:21.615087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.769 [2024-12-13 05:51:21.624167] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e1990) 00:35:21.769 [2024-12-13 05:51:21.624185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:19406 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.769 [2024-12-13 05:51:21.624193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.769 [2024-12-13 05:51:21.633444] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e1990) 00:35:21.769 [2024-12-13 05:51:21.633469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:24553 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.769 [2024-12-13 05:51:21.633477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.769 [2024-12-13 05:51:21.643909] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e1990) 00:35:21.769 [2024-12-13 05:51:21.643927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:10770 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.769 [2024-12-13 05:51:21.643935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.769 [2024-12-13 05:51:21.654755] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e1990) 00:35:21.769 [2024-12-13 05:51:21.654774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:22565 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.769 [2024-12-13 05:51:21.654782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.769 [2024-12-13 05:51:21.663421] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e1990) 00:35:21.769 [2024-12-13 05:51:21.663439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:21963 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.769 [2024-12-13 05:51:21.663455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.769 [2024-12-13 05:51:21.675163] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e1990) 00:35:21.769 [2024-12-13 05:51:21.675182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:8227 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.769 [2024-12-13 05:51:21.675189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.769 [2024-12-13 05:51:21.686669] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e1990) 00:35:21.769 [2024-12-13 05:51:21.686687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:17137 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.769 [2024-12-13 05:51:21.686695] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:21.769 [2024-12-13 05:51:21.695433] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e1990)
00:35:21.769 [2024-12-13 05:51:21.695456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:3775 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:21.769 [2024-12-13 05:51:21.695464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:21.769 [2024-12-13 05:51:21.706404] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e1990)
00:35:21.770 [2024-12-13 05:51:21.706422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:10817 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:21.770 [2024-12-13 05:51:21.706430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... the same three-line pattern — data digest error on tqpair=(0x24e1990), the affected READ command (varying cid and lba), and its TRANSIENT TRANSPORT ERROR (00/22) completion — repeats for dozens more commands between 05:51:21.717 and 05:51:22.466; identical entries omitted ...]
00:35:22.554 25470.00 IOPS, 99.49 MiB/s [2024-12-13T04:51:22.569Z]
[... dozens of further identical data digest error / READ / TRANSIENT TRANSPORT ERROR (00/22) triplets omitted, 05:51:22.477 through 05:51:23.126 ...]
00:35:23.337 [2024-12-13 05:51:23.136138] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done:
*ERROR*: data digest error on tqpair=(0x24e1990) 00:35:23.337 [2024-12-13 05:51:23.136157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:14704 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.337 [2024-12-13 05:51:23.136165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.337 [2024-12-13 05:51:23.147422] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e1990) 00:35:23.337 [2024-12-13 05:51:23.147443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:17608 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.337 [2024-12-13 05:51:23.147457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.337 [2024-12-13 05:51:23.158022] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e1990) 00:35:23.337 [2024-12-13 05:51:23.158040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18576 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.337 [2024-12-13 05:51:23.158048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.337 [2024-12-13 05:51:23.166559] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e1990) 00:35:23.337 [2024-12-13 05:51:23.166579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:7018 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.337 [2024-12-13 05:51:23.166587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.337 [2024-12-13 05:51:23.177721] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e1990) 00:35:23.337 [2024-12-13 05:51:23.177741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:20708 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.337 [2024-12-13 05:51:23.177749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.337 [2024-12-13 05:51:23.186171] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e1990) 00:35:23.337 [2024-12-13 05:51:23.186191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:2445 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.337 [2024-12-13 05:51:23.186200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.337 [2024-12-13 05:51:23.197898] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e1990) 00:35:23.337 [2024-12-13 05:51:23.197917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:5440 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.337 [2024-12-13 05:51:23.197925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.337 [2024-12-13 05:51:23.206154] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e1990) 00:35:23.337 [2024-12-13 05:51:23.206172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:18416 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.337 [2024-12-13 05:51:23.206180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.337 [2024-12-13 05:51:23.217890] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e1990) 00:35:23.337 [2024-12-13 05:51:23.217909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:3175 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.337 [2024-12-13 05:51:23.217920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.337 [2024-12-13 05:51:23.227554] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e1990) 00:35:23.337 [2024-12-13 05:51:23.227573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:11089 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.337 [2024-12-13 05:51:23.227581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.337 [2024-12-13 05:51:23.238719] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e1990) 00:35:23.337 [2024-12-13 05:51:23.238738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5106 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.337 [2024-12-13 05:51:23.238745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.337 [2024-12-13 05:51:23.250915] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e1990) 00:35:23.337 [2024-12-13 05:51:23.250935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3811 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.337 [2024-12-13 05:51:23.250942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.337 [2024-12-13 05:51:23.260663] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e1990) 00:35:23.337 [2024-12-13 05:51:23.260681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:3655 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.337 [2024-12-13 05:51:23.260689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.337 [2024-12-13 05:51:23.269168] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e1990) 00:35:23.337 [2024-12-13 05:51:23.269186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:3517 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.338 [2024-12-13 05:51:23.269194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:35:23.338 [2024-12-13 05:51:23.280093] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e1990) 00:35:23.338 [2024-12-13 05:51:23.280112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:906 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.338 [2024-12-13 05:51:23.280119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.338 [2024-12-13 05:51:23.288656] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e1990) 00:35:23.338 [2024-12-13 05:51:23.288686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4935 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.338 [2024-12-13 05:51:23.288694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.338 [2024-12-13 05:51:23.297984] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e1990) 00:35:23.338 [2024-12-13 05:51:23.298002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:3461 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.338 [2024-12-13 05:51:23.298010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.338 [2024-12-13 05:51:23.307004] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e1990) 00:35:23.338 [2024-12-13 05:51:23.307022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:10926 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.338 [2024-12-13 05:51:23.307030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.338 [2024-12-13 05:51:23.319208] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e1990) 00:35:23.338 [2024-12-13 05:51:23.319226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:12962 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.338 [2024-12-13 05:51:23.319234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.338 [2024-12-13 05:51:23.327855] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e1990) 00:35:23.338 [2024-12-13 05:51:23.327874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:2385 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.338 [2024-12-13 05:51:23.327882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.338 [2024-12-13 05:51:23.339039] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e1990) 00:35:23.338 [2024-12-13 05:51:23.339058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:17360 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.338 [2024-12-13 05:51:23.339066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.338 [2024-12-13 05:51:23.348488] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e1990) 00:35:23.338 [2024-12-13 05:51:23.348507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:2097 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.338 [2024-12-13 05:51:23.348515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.598 [2024-12-13 05:51:23.357887] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e1990) 00:35:23.598 [2024-12-13 05:51:23.357906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:18816 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.598 [2024-12-13 05:51:23.357914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.598 [2024-12-13 05:51:23.367061] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e1990) 00:35:23.598 [2024-12-13 05:51:23.367081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:22626 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.598 [2024-12-13 05:51:23.367090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.598 [2024-12-13 05:51:23.377655] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e1990) 00:35:23.598 [2024-12-13 05:51:23.377675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:11144 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.598 [2024-12-13 05:51:23.377682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.598 [2024-12-13 05:51:23.388652] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e1990) 00:35:23.598 [2024-12-13 05:51:23.388670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:8830 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.598 [2024-12-13 05:51:23.388681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.598 [2024-12-13 05:51:23.396859] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e1990) 00:35:23.598 [2024-12-13 05:51:23.396878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:9245 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.598 [2024-12-13 05:51:23.396886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.598 [2024-12-13 05:51:23.408364] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e1990) 00:35:23.598 [2024-12-13 05:51:23.408383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:11535 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.598 [2024-12-13 05:51:23.408391] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.598 [2024-12-13 05:51:23.418592] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e1990) 00:35:23.598 [2024-12-13 05:51:23.418610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23997 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.598 [2024-12-13 05:51:23.418617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.598 [2024-12-13 05:51:23.426889] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e1990) 00:35:23.598 [2024-12-13 05:51:23.426907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:4611 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.598 [2024-12-13 05:51:23.426914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.598 [2024-12-13 05:51:23.439429] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e1990) 00:35:23.598 [2024-12-13 05:51:23.439453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:23584 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.598 [2024-12-13 05:51:23.439462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.598 [2024-12-13 05:51:23.451344] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e1990) 00:35:23.598 [2024-12-13 05:51:23.451363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:25372 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.598 [2024-12-13 05:51:23.451371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.599 [2024-12-13 05:51:23.460777] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e1990) 00:35:23.599 [2024-12-13 05:51:23.460798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:3115 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.599 [2024-12-13 05:51:23.460806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.599 [2024-12-13 05:51:23.470427] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e1990) 00:35:23.599 [2024-12-13 05:51:23.470451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:20828 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.599 [2024-12-13 05:51:23.470460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.599 25318.00 IOPS, 98.90 MiB/s [2024-12-13T04:51:23.614Z] [2024-12-13 05:51:23.479503] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24e1990) 00:35:23.599 [2024-12-13 05:51:23.479526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 
00:35:23.599 [2024-12-13 05:51:23.479534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:23.599
00:35:23.599 Latency(us)
00:35:23.599 [2024-12-13T04:51:23.614Z] Device Information : runtime(s)      IOPS     MiB/s   Fail/s    TO/s   Average      min      max
00:35:23.599 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:35:23.599 nvme0n1            :       2.04   24823.60    96.97     0.00    0.00   5048.24  2402.99  45687.95
00:35:23.599 [2024-12-13T04:51:23.614Z] ===================================================================================================================
00:35:23.599 [2024-12-13T04:51:23.614Z] Total              :              24823.60    96.97     0.00    0.00   5048.24  2402.99  45687.95
00:35:23.599 {
00:35:23.599   "results": [
00:35:23.599     {
00:35:23.599       "job": "nvme0n1",
00:35:23.599       "core_mask": "0x2",
00:35:23.599       "workload": "randread",
00:35:23.599       "status": "finished",
00:35:23.599       "queue_depth": 128,
00:35:23.599       "io_size": 4096,
00:35:23.599       "runtime": 2.044305,
00:35:23.599       "iops": 24823.595305005856,
00:35:23.599       "mibps": 96.96716916017913,
00:35:23.599       "io_failed": 0,
00:35:23.599       "io_timeout": 0,
00:35:23.599       "avg_latency_us": 5048.24379240809,
00:35:23.599       "min_latency_us": 2402.9866666666667,
00:35:23.599       "max_latency_us": 45687.95428571429
00:35:23.599     }
00:35:23.599   ],
00:35:23.599   "core_count": 1
00:35:23.599 }
00:35:23.599 05:51:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:35:23.599 05:51:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:35:23.599 05:51:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:35:23.599 | .driver_specific
00:35:23.599 | .nvme_error
00:35:23.599 | .status_code
00:35:23.599 | .command_transient_transport_error'
00:35:23.599 05:51:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
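
The jq filter traced above is how digest.sh turns bdevperf's iostat JSON into the single transient-error count asserted next. A minimal standalone sketch of the same query, assuming the bdevperf instance is still listening on /var/tmp/bperf.sock and exposing the bdev nvme0n1 (the throughput comment is an added cross-check, not harness output):

    #!/usr/bin/env bash
    # Sketch of get_transient_errcount: pull per-bdev NVMe error statistics over
    # the bperf RPC socket and extract the transient transport error counter.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    count=$("$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
        jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
    echo "transient transport errors: $count"   # the run above counted 199
    # Cross-check of the summary table: 24823.595 IOPS * 4096 B / 2^20 B/MiB
    # = 96.967 MiB/s, matching the "mibps" field in the JSON results.
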
00:35:23.859 05:51:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 199 > 0 ))
00:35:23.859 05:51:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 530164
00:35:23.859 05:51:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 530164 ']'
00:35:23.859 05:51:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 530164
00:35:23.859 05:51:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:35:23.859 05:51:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:35:23.859 05:51:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 530164
00:35:23.859 05:51:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:35:23.859 05:51:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:35:23.859 05:51:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 530164'
00:35:23.859 killing process with pid 530164
00:35:23.859 05:51:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 530164
00:35:23.859 Received shutdown signal, test time was about 2.000000 seconds
00:35:23.859
00:35:23.859 Latency(us)
00:35:23.859 [2024-12-13T04:51:23.874Z] Device Information : runtime(s)      IOPS     MiB/s   Fail/s    TO/s   Average      min      max
00:35:23.859 [2024-12-13T04:51:23.874Z] ===================================================================================================================
00:35:23.859 [2024-12-13T04:51:23.874Z] Total              :       0.00      0.00     0.00     0.00    0.00      0.00     0.00
00:35:23.859 05:51:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 530164
00:35:24.118 05:51:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:35:24.118 05:51:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:35:24.118 05:51:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:35:24.118 05:51:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:35:24.118 05:51:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:35:24.118 05:51:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=530655
00:35:24.118 05:51:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 530655 /var/tmp/bperf.sock
00:35:24.118 05:51:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:35:24.118 05:51:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 530655 ']'
00:35:24.118 05:51:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:35:24.118 05:51:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:35:24.118 05:51:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:35:24.118 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:35:24.118 05:51:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:35:24.118 05:51:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:35:24.118 [2024-12-13 05:51:23.975254] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization...
00:35:24.118 [2024-12-13 05:51:23.975302] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid530655 ]
00:35:24.118 I/O size of 131072 is greater than zero copy threshold (65536).
00:35:24.118 Zero copy mechanism will not be used.
00:35:24.118 [2024-12-13 05:51:24.034889] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:35:24.118 [2024-12-13 05:51:24.057066] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:35:24.379 05:51:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:35:24.379 05:51:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:35:24.379 05:51:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:35:24.379 05:51:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:35:24.379 05:51:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:35:24.379 05:51:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:24.379 05:51:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:35:24.379 05:51:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:24.379 05:51:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:35:24.379 05:51:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:35:24.638 nvme0n1
00:35:24.638 05:51:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:35:24.638 05:51:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:24.638 05:51:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:35:24.638 05:51:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:24.638 05:51:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:35:24.898 05:51:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:35:24.898 I/O size of 131072 is greater than zero copy threshold (65536).
00:35:24.898 Zero copy mechanism will not be used.
00:35:24.898 Running I/O for 2 seconds...
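
The trace above is the complete error-injection recipe for this second pass: error statistics and unlimited retries on the NVMe bdev, a clean attach (injection disabled) with TCP data digest enabled, then 32 corrupted crc32c operations queued before the I/O starts streaming below. A condensed sketch of the same RPC sequence, with every command taken verbatim from the trace and the bdevperf instance assumed to be the one just launched with "-m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z":

    #!/usr/bin/env bash
    # Sketch of the digest.sh@61-@69 setup traced above, against the bperf RPC socket.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    RPC="$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock"
    $RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1   # count errors, retry forever
    $RPC accel_error_inject_error -o crc32c -t disable                   # attach with injection off
    $RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0                   # --ddgst enables data digest
    $RPC accel_error_inject_error -o crc32c -t corrupt -i 32             # corrupt the next 32 crc32c ops
    "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests
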
00:35:24.898 [2024-12-13 05:51:24.703992] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50)
00:35:24.898 [2024-12-13 05:51:24.704024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:24.898 [2024-12-13 05:51:24.704035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:35:24.898 [2024-12-13 05:51:24.709946] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50)
00:35:24.898 [2024-12-13 05:51:24.709972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:24.898 [2024-12-13 05:51:24.709981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003f p:0 m:0 dnr:0
[... dozens of further "data digest error" / READ / COMMAND TRANSIENT TRANSPORT ERROR notice triplets (05:51:24.717 through 05:51:25.032, 32-block reads this run) elided ...]
00:35:25.161 [2024-12-13 05:51:25.037545] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50)
00:35:25.161 [2024-12-13 05:51:25.037565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:25.161 [2024-12-13 05:51:25.037573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:003f p:0 m:0
dnr:0 00:35:25.161 [2024-12-13 05:51:25.042993] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:25.161 [2024-12-13 05:51:25.043013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.161 [2024-12-13 05:51:25.043021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:35:25.161 [2024-12-13 05:51:25.049284] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:25.161 [2024-12-13 05:51:25.049305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.161 [2024-12-13 05:51:25.049314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:25.161 [2024-12-13 05:51:25.055639] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:25.161 [2024-12-13 05:51:25.055661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.161 [2024-12-13 05:51:25.055669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:35:25.161 [2024-12-13 05:51:25.063142] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:25.161 [2024-12-13 05:51:25.063163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.161 [2024-12-13 05:51:25.063171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:35:25.162 [2024-12-13 05:51:25.070139] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:25.162 [2024-12-13 05:51:25.070160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.162 [2024-12-13 05:51:25.070168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:35:25.162 [2024-12-13 05:51:25.077616] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:25.162 [2024-12-13 05:51:25.077637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.162 [2024-12-13 05:51:25.077646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:25.162 [2024-12-13 05:51:25.085396] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:25.162 [2024-12-13 05:51:25.085418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.162 [2024-12-13 05:51:25.085426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:35:25.162 [2024-12-13 05:51:25.091406] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:25.162 [2024-12-13 05:51:25.091427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.162 [2024-12-13 05:51:25.091435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:35:25.162 [2024-12-13 05:51:25.096858] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:25.162 [2024-12-13 05:51:25.096879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.162 [2024-12-13 05:51:25.096887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:35:25.162 [2024-12-13 05:51:25.102418] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:25.162 [2024-12-13 05:51:25.102439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.162 [2024-12-13 05:51:25.102453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:25.162 [2024-12-13 05:51:25.107800] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:25.162 [2024-12-13 05:51:25.107820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.162 [2024-12-13 05:51:25.107828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:35:25.162 [2024-12-13 05:51:25.113260] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:25.162 [2024-12-13 05:51:25.113280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.162 [2024-12-13 05:51:25.113288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:35:25.162 [2024-12-13 05:51:25.118716] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:25.162 [2024-12-13 05:51:25.118737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.162 [2024-12-13 05:51:25.118744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:35:25.162 [2024-12-13 05:51:25.124134] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:25.162 [2024-12-13 05:51:25.124154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.162 [2024-12-13 05:51:25.124165] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:25.162 [2024-12-13 05:51:25.129456] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:25.162 [2024-12-13 05:51:25.129476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.162 [2024-12-13 05:51:25.129484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:35:25.162 [2024-12-13 05:51:25.134746] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:25.162 [2024-12-13 05:51:25.134767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.162 [2024-12-13 05:51:25.134775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:35:25.162 [2024-12-13 05:51:25.140353] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:25.162 [2024-12-13 05:51:25.140374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.162 [2024-12-13 05:51:25.140382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:35:25.162 [2024-12-13 05:51:25.147229] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:25.162 [2024-12-13 05:51:25.147251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.162 [2024-12-13 05:51:25.147259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:25.162 [2024-12-13 05:51:25.154586] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:25.162 [2024-12-13 05:51:25.154610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.162 [2024-12-13 05:51:25.154618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:35:25.162 [2024-12-13 05:51:25.161484] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:25.162 [2024-12-13 05:51:25.161506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.162 [2024-12-13 05:51:25.161515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:35:25.162 [2024-12-13 05:51:25.167924] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:25.162 [2024-12-13 05:51:25.167945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:25.162 [2024-12-13 05:51:25.167953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:35:25.162 [2024-12-13 05:51:25.173757] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:25.162 [2024-12-13 05:51:25.173779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.162 [2024-12-13 05:51:25.173788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:25.423 [2024-12-13 05:51:25.180663] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:25.423 [2024-12-13 05:51:25.180688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.423 [2024-12-13 05:51:25.180697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:35:25.423 [2024-12-13 05:51:25.188277] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:25.423 [2024-12-13 05:51:25.188298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.423 [2024-12-13 05:51:25.188306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:35:25.423 [2024-12-13 05:51:25.195297] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:25.423 [2024-12-13 05:51:25.195319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.423 [2024-12-13 05:51:25.195328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:35:25.423 [2024-12-13 05:51:25.202608] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:25.423 [2024-12-13 05:51:25.202630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.423 [2024-12-13 05:51:25.202638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:25.423 [2024-12-13 05:51:25.208670] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:25.423 [2024-12-13 05:51:25.208691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.423 [2024-12-13 05:51:25.208699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:35:25.423 [2024-12-13 05:51:25.214187] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:25.423 [2024-12-13 05:51:25.214212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15584 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.423 [2024-12-13 05:51:25.214220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:35:25.423 [2024-12-13 05:51:25.220420] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:25.423 [2024-12-13 05:51:25.220441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.423 [2024-12-13 05:51:25.220455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:35:25.423 [2024-12-13 05:51:25.227820] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:25.423 [2024-12-13 05:51:25.227842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.423 [2024-12-13 05:51:25.227851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:25.423 [2024-12-13 05:51:25.236024] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:25.423 [2024-12-13 05:51:25.236046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.423 [2024-12-13 05:51:25.236054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:35:25.423 [2024-12-13 05:51:25.243330] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:25.423 [2024-12-13 05:51:25.243352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.423 [2024-12-13 05:51:25.243361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:35:25.423 [2024-12-13 05:51:25.249441] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:25.423 [2024-12-13 05:51:25.249468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.423 [2024-12-13 05:51:25.249476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:35:25.423 [2024-12-13 05:51:25.255786] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:25.423 [2024-12-13 05:51:25.255808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.423 [2024-12-13 05:51:25.255816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:25.423 [2024-12-13 05:51:25.263220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:25.423 [2024-12-13 05:51:25.263242] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.423 [2024-12-13 05:51:25.263250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:35:25.423 [2024-12-13 05:51:25.271210] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:25.423 [2024-12-13 05:51:25.271233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.423 [2024-12-13 05:51:25.271241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:35:25.423 [2024-12-13 05:51:25.277893] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:25.423 [2024-12-13 05:51:25.277915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.423 [2024-12-13 05:51:25.277923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:35:25.423 [2024-12-13 05:51:25.284030] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:25.423 [2024-12-13 05:51:25.284052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.423 [2024-12-13 05:51:25.284060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:25.423 [2024-12-13 05:51:25.291330] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:25.423 [2024-12-13 05:51:25.291351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.423 [2024-12-13 05:51:25.291360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:35:25.423 [2024-12-13 05:51:25.297210] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:25.423 [2024-12-13 05:51:25.297231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.423 [2024-12-13 05:51:25.297242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:35:25.423 [2024-12-13 05:51:25.302926] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:25.423 [2024-12-13 05:51:25.302946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.423 [2024-12-13 05:51:25.302953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:35:25.423 [2024-12-13 05:51:25.308504] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:25.423 [2024-12-13 05:51:25.308524] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.423 [2024-12-13 05:51:25.308532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:25.423 [2024-12-13 05:51:25.314017] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:25.423 [2024-12-13 05:51:25.314037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.423 [2024-12-13 05:51:25.314045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:35:25.423 [2024-12-13 05:51:25.319076] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:25.423 [2024-12-13 05:51:25.319097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.423 [2024-12-13 05:51:25.319104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:35:25.423 [2024-12-13 05:51:25.324278] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:25.423 [2024-12-13 05:51:25.324299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.423 [2024-12-13 05:51:25.324307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:35:25.423 [2024-12-13 05:51:25.327684] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:25.423 [2024-12-13 05:51:25.327704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.423 [2024-12-13 05:51:25.327712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:25.423 [2024-12-13 05:51:25.331860] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:25.423 [2024-12-13 05:51:25.331880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.423 [2024-12-13 05:51:25.331887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:35:25.423 [2024-12-13 05:51:25.337097] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:25.423 [2024-12-13 05:51:25.337117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.423 [2024-12-13 05:51:25.337125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:35:25.423 [2024-12-13 05:51:25.342241] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x233dc50) 00:35:25.423 [2024-12-13 05:51:25.342265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.423 [2024-12-13 05:51:25.342272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:35:25.423 [2024-12-13 05:51:25.347769] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:25.423 [2024-12-13 05:51:25.347790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.423 [2024-12-13 05:51:25.347798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:25.423 [2024-12-13 05:51:25.353239] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:25.423 [2024-12-13 05:51:25.353260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.423 [2024-12-13 05:51:25.353268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:35:25.423 [2024-12-13 05:51:25.358666] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:25.423 [2024-12-13 05:51:25.358687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.424 [2024-12-13 05:51:25.358695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:35:25.424 [2024-12-13 05:51:25.364189] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:25.424 [2024-12-13 05:51:25.364210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.424 [2024-12-13 05:51:25.364217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:35:25.424 [2024-12-13 05:51:25.369597] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:25.424 [2024-12-13 05:51:25.369618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.424 [2024-12-13 05:51:25.369625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:25.424 [2024-12-13 05:51:25.375011] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:25.424 [2024-12-13 05:51:25.375032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.424 [2024-12-13 05:51:25.375040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:35:25.424 [2024-12-13 05:51:25.380277] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:25.424 [2024-12-13 05:51:25.380297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.424 [2024-12-13 05:51:25.380304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:35:25.424 [2024-12-13 05:51:25.385641] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:25.424 [2024-12-13 05:51:25.385666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.424 [2024-12-13 05:51:25.385673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:35:25.424 [2024-12-13 05:51:25.391026] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:25.424 [2024-12-13 05:51:25.391047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.424 [2024-12-13 05:51:25.391054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:25.424 [2024-12-13 05:51:25.396313] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:25.424 [2024-12-13 05:51:25.396333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.424 [2024-12-13 05:51:25.396341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:35:25.424 [2024-12-13 05:51:25.402106] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:25.424 [2024-12-13 05:51:25.402127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.424 [2024-12-13 05:51:25.402134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:35:25.424 [2024-12-13 05:51:25.407967] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:25.424 [2024-12-13 05:51:25.407988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.424 [2024-12-13 05:51:25.407996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:35:25.424 [2024-12-13 05:51:25.413465] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:25.424 [2024-12-13 05:51:25.413486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.424 [2024-12-13 05:51:25.413494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 
dnr:0 00:35:25.424 [2024-12-13 05:51:25.419284] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:25.424 [2024-12-13 05:51:25.419304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.424 [2024-12-13 05:51:25.419312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:35:25.424 [2024-12-13 05:51:25.424743] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:25.424 [2024-12-13 05:51:25.424763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.424 [2024-12-13 05:51:25.424770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:35:25.424 [2024-12-13 05:51:25.430166] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:25.424 [2024-12-13 05:51:25.430186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.424 [2024-12-13 05:51:25.430194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:35:25.424 [2024-12-13 05:51:25.435579] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:25.424 [2024-12-13 05:51:25.435603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.424 [2024-12-13 05:51:25.435611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:25.684 [2024-12-13 05:51:25.441034] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:25.684 [2024-12-13 05:51:25.441055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.684 [2024-12-13 05:51:25.441063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:35:25.684 [2024-12-13 05:51:25.445921] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:25.684 [2024-12-13 05:51:25.445942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.684 [2024-12-13 05:51:25.445950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:35:25.684 [2024-12-13 05:51:25.451107] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:25.684 [2024-12-13 05:51:25.451127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.684 [2024-12-13 05:51:25.451135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:4 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:35:25.684 [2024-12-13 05:51:25.456197] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:25.684 [2024-12-13 05:51:25.456218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.684 [2024-12-13 05:51:25.456225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:25.684 [2024-12-13 05:51:25.461772] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:25.684 [2024-12-13 05:51:25.461793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.684 [2024-12-13 05:51:25.461801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:35:25.684 [2024-12-13 05:51:25.467055] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:25.684 [2024-12-13 05:51:25.467076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.684 [2024-12-13 05:51:25.467084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:35:25.684 [2024-12-13 05:51:25.472472] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:25.684 [2024-12-13 05:51:25.472493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.684 [2024-12-13 05:51:25.472500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:35:25.684 [2024-12-13 05:51:25.477629] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:25.684 [2024-12-13 05:51:25.477650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.684 [2024-12-13 05:51:25.477658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:25.684 [2024-12-13 05:51:25.482845] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:25.684 [2024-12-13 05:51:25.482866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.684 [2024-12-13 05:51:25.482873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:35:25.684 [2024-12-13 05:51:25.488019] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:25.684 [2024-12-13 05:51:25.488039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.684 [2024-12-13 05:51:25.488046] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:35:25.684 [2024-12-13 05:51:25.493280] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:25.684 [2024-12-13 05:51:25.493300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.684 [2024-12-13 05:51:25.493308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:35:25.684 [2024-12-13 05:51:25.498402] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:25.684 [2024-12-13 05:51:25.498423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.684 [2024-12-13 05:51:25.498431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:25.684 [2024-12-13 05:51:25.503477] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:25.684 [2024-12-13 05:51:25.503496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.684 [2024-12-13 05:51:25.503504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:35:25.684 [2024-12-13 05:51:25.508783] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:25.684 [2024-12-13 05:51:25.508803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.684 [2024-12-13 05:51:25.508810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:35:25.684 [2024-12-13 05:51:25.514038] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:25.684 [2024-12-13 05:51:25.514058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.684 [2024-12-13 05:51:25.514066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:35:25.684 [2024-12-13 05:51:25.519183] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:25.685 [2024-12-13 05:51:25.519203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.685 [2024-12-13 05:51:25.519211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:25.685 [2024-12-13 05:51:25.522045] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:25.685 [2024-12-13 05:51:25.522064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.685 
[2024-12-13 05:51:25.522076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:35:25.685 [2024-12-13 05:51:25.527038] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:25.685 [2024-12-13 05:51:25.527059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.685 [2024-12-13 05:51:25.527066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:35:25.685 [2024-12-13 05:51:25.532406] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:25.685 [2024-12-13 05:51:25.532426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.685 [2024-12-13 05:51:25.532434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:35:25.685 [2024-12-13 05:51:25.537786] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:25.685 [2024-12-13 05:51:25.537805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.685 [2024-12-13 05:51:25.537812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:25.685 [2024-12-13 05:51:25.543050] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:25.685 [2024-12-13 05:51:25.543070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.685 [2024-12-13 05:51:25.543078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:35:25.685 [2024-12-13 05:51:25.548315] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:25.685 [2024-12-13 05:51:25.548335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.685 [2024-12-13 05:51:25.548342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:35:25.685 [2024-12-13 05:51:25.554126] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:25.685 [2024-12-13 05:51:25.554147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.685 [2024-12-13 05:51:25.554155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:35:25.685 [2024-12-13 05:51:25.559909] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:25.685 [2024-12-13 05:51:25.559930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21472 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:25.685 [2024-12-13 05:51:25.559937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:35:25.685 [2024-12-13 05:51:25.565318] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50)
00:35:25.685 [2024-12-13 05:51:25.565339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:25.685 [2024-12-13 05:51:25.565346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:001f p:0 m:0 dnr:0
[... the same three-record pattern -- data digest error on tqpair=(0x233dc50), the failed READ (sqid:1, len:32, varying cid/lba), and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion -- repeats for every READ completion from 05:51:25.570595 through 05:51:25.692080 ...]
00:35:25.946 5488.00 IOPS, 686.00 MiB/s [2024-12-13T04:51:25.961Z]
[... the pattern continues unchanged from 05:51:25.698903 through 05:51:26.273936 ...]
00:35:26.472 [2024-12-13 05:51:26.279368] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50)
00:35:26.472 [2024-12-13 05:51:26.279390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1216
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.472 [2024-12-13 05:51:26.279398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:35:26.472 [2024-12-13 05:51:26.283955] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:26.472 [2024-12-13 05:51:26.283977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.472 [2024-12-13 05:51:26.283985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:35:26.472 [2024-12-13 05:51:26.289052] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:26.472 [2024-12-13 05:51:26.289077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.472 [2024-12-13 05:51:26.289085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:26.472 [2024-12-13 05:51:26.294072] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:26.472 [2024-12-13 05:51:26.294094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.472 [2024-12-13 05:51:26.294101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:35:26.472 [2024-12-13 05:51:26.299169] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:26.472 [2024-12-13 05:51:26.299190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.472 [2024-12-13 05:51:26.299198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:35:26.472 [2024-12-13 05:51:26.304345] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:26.472 [2024-12-13 05:51:26.304366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.472 [2024-12-13 05:51:26.304374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:35:26.472 [2024-12-13 05:51:26.309702] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:26.472 [2024-12-13 05:51:26.309722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.472 [2024-12-13 05:51:26.309730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:26.472 [2024-12-13 05:51:26.314868] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:26.472 [2024-12-13 05:51:26.314889] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.472 [2024-12-13 05:51:26.314897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:35:26.472 [2024-12-13 05:51:26.319974] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:26.472 [2024-12-13 05:51:26.319995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.472 [2024-12-13 05:51:26.320002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:35:26.472 [2024-12-13 05:51:26.325130] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:26.472 [2024-12-13 05:51:26.325150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.472 [2024-12-13 05:51:26.325158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:35:26.472 [2024-12-13 05:51:26.330291] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:26.472 [2024-12-13 05:51:26.330312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.472 [2024-12-13 05:51:26.330320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:26.472 [2024-12-13 05:51:26.335506] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:26.472 [2024-12-13 05:51:26.335526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.472 [2024-12-13 05:51:26.335534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:35:26.472 [2024-12-13 05:51:26.340692] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:26.472 [2024-12-13 05:51:26.340713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.472 [2024-12-13 05:51:26.340721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:35:26.472 [2024-12-13 05:51:26.345897] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:26.472 [2024-12-13 05:51:26.345918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.472 [2024-12-13 05:51:26.345926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:35:26.472 [2024-12-13 05:51:26.351064] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:26.472 [2024-12-13 05:51:26.351084] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.472 [2024-12-13 05:51:26.351092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:26.472 [2024-12-13 05:51:26.356233] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:26.472 [2024-12-13 05:51:26.356254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.472 [2024-12-13 05:51:26.356263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:35:26.472 [2024-12-13 05:51:26.361440] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:26.472 [2024-12-13 05:51:26.361468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.472 [2024-12-13 05:51:26.361476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:35:26.472 [2024-12-13 05:51:26.366095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:26.472 [2024-12-13 05:51:26.366116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.472 [2024-12-13 05:51:26.366124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:35:26.472 [2024-12-13 05:51:26.371244] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:26.472 [2024-12-13 05:51:26.371265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.472 [2024-12-13 05:51:26.371273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:26.472 [2024-12-13 05:51:26.376416] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:26.472 [2024-12-13 05:51:26.376437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.472 [2024-12-13 05:51:26.376455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:35:26.472 [2024-12-13 05:51:26.381544] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:26.472 [2024-12-13 05:51:26.381565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.472 [2024-12-13 05:51:26.381573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:35:26.472 [2024-12-13 05:51:26.386709] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x233dc50) 00:35:26.472 [2024-12-13 05:51:26.386729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.472 [2024-12-13 05:51:26.386737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:35:26.472 [2024-12-13 05:51:26.391904] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:26.472 [2024-12-13 05:51:26.391924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.472 [2024-12-13 05:51:26.391932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:26.472 [2024-12-13 05:51:26.397049] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:26.472 [2024-12-13 05:51:26.397070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.473 [2024-12-13 05:51:26.397077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:35:26.473 [2024-12-13 05:51:26.402243] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:26.473 [2024-12-13 05:51:26.402263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.473 [2024-12-13 05:51:26.402271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:35:26.473 [2024-12-13 05:51:26.407428] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:26.473 [2024-12-13 05:51:26.407455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.473 [2024-12-13 05:51:26.407463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:35:26.473 [2024-12-13 05:51:26.412673] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:26.473 [2024-12-13 05:51:26.412693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.473 [2024-12-13 05:51:26.412701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:26.473 [2024-12-13 05:51:26.417903] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:26.473 [2024-12-13 05:51:26.417925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.473 [2024-12-13 05:51:26.417932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:35:26.473 [2024-12-13 05:51:26.423100] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:26.473 [2024-12-13 05:51:26.423121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.473 [2024-12-13 05:51:26.423129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:35:26.473 [2024-12-13 05:51:26.429253] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:26.473 [2024-12-13 05:51:26.429277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.473 [2024-12-13 05:51:26.429285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:35:26.473 [2024-12-13 05:51:26.434566] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:26.473 [2024-12-13 05:51:26.434587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.473 [2024-12-13 05:51:26.434594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:26.473 [2024-12-13 05:51:26.439760] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:26.473 [2024-12-13 05:51:26.439780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.473 [2024-12-13 05:51:26.439788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:35:26.473 [2024-12-13 05:51:26.444932] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:26.473 [2024-12-13 05:51:26.444953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.473 [2024-12-13 05:51:26.444960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:35:26.473 [2024-12-13 05:51:26.450131] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:26.473 [2024-12-13 05:51:26.450151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.473 [2024-12-13 05:51:26.450159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:35:26.473 [2024-12-13 05:51:26.454763] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:26.473 [2024-12-13 05:51:26.454784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.473 [2024-12-13 05:51:26.454792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 
dnr:0 00:35:26.473 [2024-12-13 05:51:26.459963] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:26.473 [2024-12-13 05:51:26.459983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.473 [2024-12-13 05:51:26.459991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:35:26.473 [2024-12-13 05:51:26.465142] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:26.473 [2024-12-13 05:51:26.465163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.473 [2024-12-13 05:51:26.465173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:35:26.473 [2024-12-13 05:51:26.470463] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:26.473 [2024-12-13 05:51:26.470484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.473 [2024-12-13 05:51:26.470491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:35:26.473 [2024-12-13 05:51:26.475728] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:26.473 [2024-12-13 05:51:26.475749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.473 [2024-12-13 05:51:26.475757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:26.473 [2024-12-13 05:51:26.480998] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:26.473 [2024-12-13 05:51:26.481019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.473 [2024-12-13 05:51:26.481027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:35:26.734 [2024-12-13 05:51:26.486258] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:26.734 [2024-12-13 05:51:26.486279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.734 [2024-12-13 05:51:26.486287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:35:26.734 [2024-12-13 05:51:26.491494] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:26.734 [2024-12-13 05:51:26.491514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.734 [2024-12-13 05:51:26.491522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:35:26.734 [2024-12-13 05:51:26.496710] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:26.734 [2024-12-13 05:51:26.496730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.734 [2024-12-13 05:51:26.496738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:26.734 [2024-12-13 05:51:26.501836] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:26.734 [2024-12-13 05:51:26.501857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.734 [2024-12-13 05:51:26.501865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:35:26.734 [2024-12-13 05:51:26.506977] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:26.734 [2024-12-13 05:51:26.506998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.734 [2024-12-13 05:51:26.507006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:35:26.734 [2024-12-13 05:51:26.512163] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:26.734 [2024-12-13 05:51:26.512188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.734 [2024-12-13 05:51:26.512196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:35:26.734 [2024-12-13 05:51:26.517594] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:26.734 [2024-12-13 05:51:26.517615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.734 [2024-12-13 05:51:26.517622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:26.734 [2024-12-13 05:51:26.523082] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:26.734 [2024-12-13 05:51:26.523103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.734 [2024-12-13 05:51:26.523111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:35:26.734 [2024-12-13 05:51:26.528258] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:26.734 [2024-12-13 05:51:26.528279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.734 [2024-12-13 05:51:26.528287] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:35:26.734 [2024-12-13 05:51:26.533489] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:26.734 [2024-12-13 05:51:26.533509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.734 [2024-12-13 05:51:26.533517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:35:26.734 [2024-12-13 05:51:26.538643] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:26.734 [2024-12-13 05:51:26.538663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.734 [2024-12-13 05:51:26.538671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:26.734 [2024-12-13 05:51:26.543852] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:26.734 [2024-12-13 05:51:26.543872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.734 [2024-12-13 05:51:26.543880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:35:26.734 [2024-12-13 05:51:26.549095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:26.734 [2024-12-13 05:51:26.549116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.734 [2024-12-13 05:51:26.549123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:35:26.734 [2024-12-13 05:51:26.554267] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:26.734 [2024-12-13 05:51:26.554287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.734 [2024-12-13 05:51:26.554294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:35:26.734 [2024-12-13 05:51:26.559444] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:26.734 [2024-12-13 05:51:26.559469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.734 [2024-12-13 05:51:26.559477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:26.734 [2024-12-13 05:51:26.564584] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:26.734 [2024-12-13 05:51:26.564604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:26.734 [2024-12-13 05:51:26.564612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:35:26.734 [2024-12-13 05:51:26.569724] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:26.734 [2024-12-13 05:51:26.569744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.734 [2024-12-13 05:51:26.569752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:35:26.734 [2024-12-13 05:51:26.574837] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:26.734 [2024-12-13 05:51:26.574857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.734 [2024-12-13 05:51:26.574864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:35:26.734 [2024-12-13 05:51:26.579921] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:26.734 [2024-12-13 05:51:26.579941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.734 [2024-12-13 05:51:26.579950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:26.734 [2024-12-13 05:51:26.585128] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:26.734 [2024-12-13 05:51:26.585149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.734 [2024-12-13 05:51:26.585156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:35:26.734 [2024-12-13 05:51:26.590373] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:26.734 [2024-12-13 05:51:26.590394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.734 [2024-12-13 05:51:26.590401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:35:26.734 [2024-12-13 05:51:26.595533] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:26.734 [2024-12-13 05:51:26.595553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.734 [2024-12-13 05:51:26.595561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:35:26.734 [2024-12-13 05:51:26.600713] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:26.734 [2024-12-13 05:51:26.600734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12576 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.734 [2024-12-13 05:51:26.600744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:26.734 [2024-12-13 05:51:26.605900] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:26.734 [2024-12-13 05:51:26.605920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.734 [2024-12-13 05:51:26.605928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:35:26.734 [2024-12-13 05:51:26.611103] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:26.734 [2024-12-13 05:51:26.611124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.734 [2024-12-13 05:51:26.611132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:35:26.734 [2024-12-13 05:51:26.616331] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:26.734 [2024-12-13 05:51:26.616352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.734 [2024-12-13 05:51:26.616360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:35:26.735 [2024-12-13 05:51:26.621603] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:26.735 [2024-12-13 05:51:26.621623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.735 [2024-12-13 05:51:26.621631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:26.735 [2024-12-13 05:51:26.626856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:26.735 [2024-12-13 05:51:26.626876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.735 [2024-12-13 05:51:26.626884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:35:26.735 [2024-12-13 05:51:26.632118] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:26.735 [2024-12-13 05:51:26.632137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.735 [2024-12-13 05:51:26.632145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:35:26.735 [2024-12-13 05:51:26.637296] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:26.735 [2024-12-13 05:51:26.637316] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.735 [2024-12-13 05:51:26.637324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:35:26.735 [2024-12-13 05:51:26.642493] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:26.735 [2024-12-13 05:51:26.642514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.735 [2024-12-13 05:51:26.642521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:26.735 [2024-12-13 05:51:26.647703] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:26.735 [2024-12-13 05:51:26.647726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.735 [2024-12-13 05:51:26.647734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:35:26.735 [2024-12-13 05:51:26.652848] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:26.735 [2024-12-13 05:51:26.652868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.735 [2024-12-13 05:51:26.652876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:35:26.735 [2024-12-13 05:51:26.658019] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:26.735 [2024-12-13 05:51:26.658039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.735 [2024-12-13 05:51:26.658047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:35:26.735 [2024-12-13 05:51:26.663207] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:26.735 [2024-12-13 05:51:26.663226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.735 [2024-12-13 05:51:26.663234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:26.735 [2024-12-13 05:51:26.668381] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:26.735 [2024-12-13 05:51:26.668402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.735 [2024-12-13 05:51:26.668409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:35:26.735 [2024-12-13 05:51:26.673545] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 
00:35:26.735 [2024-12-13 05:51:26.673565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.735 [2024-12-13 05:51:26.673573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:35:26.735 [2024-12-13 05:51:26.678734] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:26.735 [2024-12-13 05:51:26.678754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.735 [2024-12-13 05:51:26.678762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:35:26.735 [2024-12-13 05:51:26.683947] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:26.735 [2024-12-13 05:51:26.683967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.735 [2024-12-13 05:51:26.683974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:26.735 [2024-12-13 05:51:26.689123] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:26.735 [2024-12-13 05:51:26.689143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.735 [2024-12-13 05:51:26.689151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:35:26.735 [2024-12-13 05:51:26.694209] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x233dc50) 00:35:26.735 [2024-12-13 05:51:26.694229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.735 [2024-12-13 05:51:26.694237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:35:26.735 5741.50 IOPS, 717.69 MiB/s 00:35:26.735 Latency(us) 00:35:26.735 [2024-12-13T04:51:26.750Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:26.735 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:35:26.735 nvme0n1 : 2.00 5742.90 717.86 0.00 0.00 2783.33 694.37 10423.34 00:35:26.735 [2024-12-13T04:51:26.750Z] =================================================================================================================== 00:35:26.735 [2024-12-13T04:51:26.750Z] Total : 5742.90 717.86 0.00 0.00 2783.33 694.37 10423.34 00:35:26.735 { 00:35:26.735 "results": [ 00:35:26.735 { 00:35:26.735 "job": "nvme0n1", 00:35:26.735 "core_mask": "0x2", 00:35:26.735 "workload": "randread", 00:35:26.735 "status": "finished", 00:35:26.735 "queue_depth": 16, 00:35:26.735 "io_size": 131072, 00:35:26.735 "runtime": 2.002298, 00:35:26.735 "iops": 5742.90140628418, 00:35:26.735 "mibps": 717.8626757855225, 00:35:26.735 "io_failed": 0, 00:35:26.735 "io_timeout": 0, 00:35:26.735 "avg_latency_us": 2783.325792139275, 00:35:26.735 "min_latency_us": 694.3695238095238, 
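A quick consistency check on the summary above (not part of the test output): with the 131072-byte I/O size reported in the JSON, MiB/s is simply IOPS divided by 8, which reproduces the "mibps" field:

  echo '5742.90140628418 * 131072 / 1048576' | bc -l   # -> 717.8626757855..., matching "mibps"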
05:51:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
05:51:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
05:51:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:35:26.735 | .driver_specific
00:35:26.735 | .nvme_error
00:35:26.735 | .status_code
00:35:26.735 | .command_transient_transport_error'
05:51:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
05:51:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 370 > 0 ))
05:51:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 530655
05:51:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 530655 ']'
05:51:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 530655
05:51:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
05:51:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
05:51:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 530655
05:51:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
05:51:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
05:51:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 530655'
killing process with pid 530655
05:51:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 530655
00:35:26.994 Received shutdown signal, test time was about 2.000000 seconds
00:35:26.994
00:35:26.994 Latency(us)
[2024-12-13T04:51:27.009Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
[2024-12-13T04:51:27.009Z] ===================================================================================================================
[2024-12-13T04:51:27.009Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
05:51:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 530655
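For readers following the trace: the transient-error check above (host/digest.sh@27-28, which evaluated 370 > 0 here) amounts to one iostat RPC piped through jq. A minimal sketch, not a verbatim excerpt of digest.sh, assuming a bdevperf instance still listening on /var/tmp/bperf.sock with bdev nvme0n1 attached and --nvme-error-stat enabled as in this run:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # Per-status-code NVMe error counters live under driver_specific when
  # bdev_nvme_set_options --nvme-error-stat was applied beforehand.
  errs=$("$RPC" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0]
          | .driver_specific
          | .nvme_error
          | .status_code
          | .command_transient_transport_error')
  (( errs > 0 )) && echo "observed $errs transient transport errors"   # this run counted 370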
05:51:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
05:51:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
05:51:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
05:51:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
05:51:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
05:51:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=531113
05:51:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 531113 /var/tmp/bperf.sock
05:51:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
05:51:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 531113 ']'
05:51:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
05:51:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
05:51:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
05:51:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
05:51:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:35:27.254 [2024-12-13 05:51:27.158563] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization...
00:35:27.254 [2024-12-13 05:51:27.158610] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid531113 ]
00:35:27.254 [2024-12-13 05:51:27.233022] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:35:27.254 [2024-12-13 05:51:27.255567] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
05:51:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
05:51:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
05:51:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
05:51:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
05:51:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
05:51:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
05:51:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
05:51:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
05:51:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
05:51:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
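Taken together, the trace above is the bring-up for the second error-injection pass. The following is a hedged, minimal sketch of that sequence, not a verbatim excerpt of digest.sh: the paths, socket, and RPC arguments are copied from the trace, while the polling loop is a stand-in for autotest's waitforlisten helper (rpc_get_methods is used here only as a readiness probe). Note that in the trace, rpc_cmd (no -s flag) talks to the nvmf target's default RPC socket, while bperf_rpc targets /var/tmp/bperf.sock.

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  BPERF_SOCK=/var/tmp/bperf.sock

  # Start bdevperf idle: -z defers the 2 s randwrite run until perform_tests
  # arrives over RPC (-q 128 queue depth, -o 4096-byte I/O, core mask 0x2).
  "$SPDK/build/examples/bdevperf" -m 2 -r "$BPERF_SOCK" -w randwrite -o 4096 -t 2 -q 128 -z &
  bperfpid=$!

  # Stand-in for waitforlisten: poll until the RPC socket answers.
  until "$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done

  # Keep per-status-code NVMe error counters; --bdev-retry-count -1 as traced,
  # so digest failures are retried rather than failing the bdev I/O.
  "$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # Clear any leftover crc32c injection on the target's default RPC socket.
  "$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t disable

  # Attach the TCP controller with data digest (--ddgst) enabled.
  "$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # Arm crc32c corruption (-o crc32c -t corrupt -i 256, exactly as traced),
  # then release the deferred bdevperf run.
  "$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t corrupt -i 256
  "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF_SOCK" perform_tests

The deliberate crc32c corruption is what produces the WRITE-side data digest errors that follow below.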
bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:28.031 nvme0n1 00:35:28.032 05:51:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:35:28.032 05:51:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:28.032 05:51:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:28.032 05:51:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:28.032 05:51:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:35:28.032 05:51:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:28.291 Running I/O for 2 seconds... 00:35:28.291 [2024-12-13 05:51:28.075535] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed0e0) with pdu=0x200016ede470 00:35:28.291 [2024-12-13 05:51:28.076509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:9065 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.291 [2024-12-13 05:51:28.076537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:28.291 [2024-12-13 05:51:28.083892] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed0e0) with pdu=0x200016ede8a8 00:35:28.291 [2024-12-13 05:51:28.084751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:13411 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.291 [2024-12-13 05:51:28.084772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:28.291 [2024-12-13 05:51:28.092265] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed0e0) with pdu=0x200016eef270 00:35:28.291 [2024-12-13 05:51:28.093045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:3841 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.291 [2024-12-13 05:51:28.093065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:28.291 [2024-12-13 05:51:28.102896] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed0e0) with pdu=0x200016efac10 00:35:28.291 [2024-12-13 05:51:28.103968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:17897 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.291 [2024-12-13 05:51:28.103987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:35:28.292 [2024-12-13 05:51:28.111356] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed0e0) with pdu=0x200016efa3a0 00:35:28.292 [2024-12-13 05:51:28.112425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:12383 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.292 [2024-12-13 05:51:28.112444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
00:35:28.292 [... 05:51:28.083892 through 05:51:29.050808 ...] the same three-record pattern repeats for every injected error, each with a different pdu offset, cid, and lba; several dozen repetitions elided
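Each injected error produces the same three records, as in the excerpt above: tcp.c flags the CRC32C mismatch on the queue pair, nvme_qpair.c prints the 0x1000-byte (4 KiB) WRITE it belonged to, and the command completes with TRANSIENT TRANSPORT ERROR (00/22), i.e. status code type 0x0 (generic command status) and status code 0x22, with dnr:0 (Do Not Retry clear) so the command remains retryable. A quick tally over a captured log, a sketch in which bperf.log is a hypothetical capture of this output:

# Count digest failures, then confirm every one of them completed as a
# transient transport error with the Do Not Retry bit clear.
grep -c 'Data digest error' bperf.log
grep 'TRANSIENT TRANSPORT ERROR (00/22)' bperf.log | grep -c 'dnr:0'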
00:35:29.075 28281.00 IOPS, 110.47 MiB/s [2024-12-13T04:51:29.090Z]
00:35:29.075 [2024-12-13 05:51:29.068455] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed0e0) with pdu=0x200016ef0ff8
00:35:29.075 [2024-12-13 05:51:29.069487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:19726 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:29.075 [2024-12-13 05:51:29.069506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
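The interim bdevperf counter is self-consistent with the trace: every command is a 4 KiB write (SGL len:0x1000), and 28281 IOPS at 4096 bytes each works out to the printed 110.47 MiB/s:

# 4 KiB per write (len:0x1000): IOPS * 4096 bytes / 2^20 -> MiB/s
echo 'scale=2; 28281 * 4096 / 1048576' | bc   # prints 110.47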
00:35:29.335 [... 05:51:29.114438 through 05:51:29.346957 ...] the digest-error triplets continue in the same pattern through the remainder of the 2-second run; further repetitions elided
00:35:29.596 [2024-12-13 05:51:29.356053] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed0e0) with pdu=0x200016ef81e0
00:35:29.596 [2024-12-13 05:51:29.356811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:8525 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:29.596 [2024-12-13
05:51:29.356830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:35:29.596 [2024-12-13 05:51:29.364558] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed0e0) with pdu=0x200016ee8088 00:35:29.596 [2024-12-13 05:51:29.365936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:10775 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.596 [2024-12-13 05:51:29.365954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:35:29.596 [2024-12-13 05:51:29.374445] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed0e0) with pdu=0x200016edf550 00:35:29.596 [2024-12-13 05:51:29.375545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:6068 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.596 [2024-12-13 05:51:29.375566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:35:29.596 [2024-12-13 05:51:29.380998] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed0e0) with pdu=0x200016ee73e0 00:35:29.596 [2024-12-13 05:51:29.381708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:17649 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.596 [2024-12-13 05:51:29.381726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:35:29.596 [2024-12-13 05:51:29.392047] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed0e0) with pdu=0x200016ede038 00:35:29.596 [2024-12-13 05:51:29.393249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:18553 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.596 [2024-12-13 05:51:29.393267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:35:29.596 [2024-12-13 05:51:29.400395] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed0e0) with pdu=0x200016ee3060 00:35:29.596 [2024-12-13 05:51:29.401343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:9323 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.596 [2024-12-13 05:51:29.401361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:35:29.596 [2024-12-13 05:51:29.409475] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed0e0) with pdu=0x200016ee8088 00:35:29.596 [2024-12-13 05:51:29.410440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19047 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.596 [2024-12-13 05:51:29.410462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:35:29.596 [2024-12-13 05:51:29.418884] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed0e0) with pdu=0x200016ef1430 00:35:29.596 [2024-12-13 05:51:29.419991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:13051 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
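
Each triple of records above traces one injected failure end to end: tcp.c reports a CRC32C data-digest mismatch on a received PDU, nvme_qpair.c prints the WRITE that was in flight, and its completion comes back with status (00/22), which SPDK renders as status code type 0x0 (generic command status) and status code 0x22, COMMAND TRANSIENT TRANSPORT ERROR. A minimal sketch of decoding such a status word, assuming SPDK's 16-bit layout (phase tag in bit 0, SC in bits 1-8, SCT in bits 9-11, M in bit 14, DNR in bit 15); the sample value 0x0044 is illustrative, not taken from this run:

    # Decode an NVMe completion status word into the fields printed above.
    status=0x0044                       # example word: SCT=0x0, SC=0x22, p/m/dnr all 0
    p=$((    status         & 0x1  ))   # phase tag
    sc=$((  (status >> 1)   & 0xff ))   # status code: 0x22 = transient transport error
    sct=$(( (status >> 9)   & 0x7  ))   # status code type: 0x0 = generic command status
    m=$((   (status >> 14)  & 0x1  ))   # more
    dnr=$(( (status >> 15)  & 0x1  ))   # do not retry
    printf 'ERROR (%02x/%02x) p:%d m:%d dnr:%d\n' "$sct" "$sc" "$p" "$m" "$dnr"

With dnr:0 the host is free to retry, which is why the workload keeps completing I/O through this stream of errors.
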
00:35:29.596 [2024-12-13 05:51:29.420010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:29.596 [2024-12-13 05:51:29.427729] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed0e0) with pdu=0x200016ede038 00:35:29.596 [2024-12-13 05:51:29.428515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:20041 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.596 [2024-12-13 05:51:29.428533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:29.596 [2024-12-13 05:51:29.436474] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed0e0) with pdu=0x200016ede038 00:35:29.596 [2024-12-13 05:51:29.437250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:16076 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.596 [2024-12-13 05:51:29.437268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:29.596 [2024-12-13 05:51:29.445736] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed0e0) with pdu=0x200016eee190 00:35:29.596 [2024-12-13 05:51:29.446835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:10776 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.596 [2024-12-13 05:51:29.446854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:35:29.596 [2024-12-13 05:51:29.454735] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed0e0) with pdu=0x200016ee8d30 00:35:29.596 [2024-12-13 05:51:29.455393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:1650 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.596 [2024-12-13 05:51:29.455410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:35:29.596 [2024-12-13 05:51:29.463686] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed0e0) with pdu=0x200016ef4298 00:35:29.596 [2024-12-13 05:51:29.464588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:1837 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.596 [2024-12-13 05:51:29.464606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:35:29.596 [2024-12-13 05:51:29.472040] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed0e0) with pdu=0x200016ee4de8 00:35:29.596 [2024-12-13 05:51:29.473187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:7320 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.597 [2024-12-13 05:51:29.473205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:35:29.597 [2024-12-13 05:51:29.482385] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed0e0) with pdu=0x200016efd640 00:35:29.597 [2024-12-13 05:51:29.483508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:10096 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:35:29.597 [2024-12-13 05:51:29.483527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:35:29.597 [2024-12-13 05:51:29.490603] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed0e0) with pdu=0x200016ee6300 00:35:29.597 [2024-12-13 05:51:29.491572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15022 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.597 [2024-12-13 05:51:29.491590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:35:29.597 [2024-12-13 05:51:29.498926] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed0e0) with pdu=0x200016ede8a8 00:35:29.597 [2024-12-13 05:51:29.499789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:14728 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.597 [2024-12-13 05:51:29.499807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:35:29.597 [2024-12-13 05:51:29.508062] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed0e0) with pdu=0x200016eea680 00:35:29.597 [2024-12-13 05:51:29.508927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:668 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.597 [2024-12-13 05:51:29.508945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:35:29.597 [2024-12-13 05:51:29.516445] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed0e0) with pdu=0x200016ee4de8 00:35:29.597 [2024-12-13 05:51:29.517088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:5130 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.597 [2024-12-13 05:51:29.517106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:35:29.597 [2024-12-13 05:51:29.525215] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed0e0) with pdu=0x200016efe720 00:35:29.597 [2024-12-13 05:51:29.526068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:8050 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.597 [2024-12-13 05:51:29.526085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:29.597 [2024-12-13 05:51:29.535987] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed0e0) with pdu=0x200016ef1430 00:35:29.597 [2024-12-13 05:51:29.537227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:11179 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.597 [2024-12-13 05:51:29.537246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:29.597 [2024-12-13 05:51:29.543248] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed0e0) with pdu=0x200016ee1b48 00:35:29.597 [2024-12-13 05:51:29.543986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 
lba:17686 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.597 [2024-12-13 05:51:29.544005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:35:29.597 [2024-12-13 05:51:29.552425] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed0e0) with pdu=0x200016ef4f40 00:35:29.597 [2024-12-13 05:51:29.552945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:17519 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.597 [2024-12-13 05:51:29.552964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:29.597 [2024-12-13 05:51:29.561681] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed0e0) with pdu=0x200016ee3060 00:35:29.597 [2024-12-13 05:51:29.562435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:7925 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.597 [2024-12-13 05:51:29.562464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:35:29.597 [2024-12-13 05:51:29.572017] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed0e0) with pdu=0x200016ef0788 00:35:29.597 [2024-12-13 05:51:29.573452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:2519 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.597 [2024-12-13 05:51:29.573469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:35:29.597 [2024-12-13 05:51:29.581325] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed0e0) with pdu=0x200016ef4298 00:35:29.597 [2024-12-13 05:51:29.582963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:6579 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.597 [2024-12-13 05:51:29.582980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:29.597 [2024-12-13 05:51:29.587699] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed0e0) with pdu=0x200016ee4578 00:35:29.597 [2024-12-13 05:51:29.588402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:12822 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.597 [2024-12-13 05:51:29.588420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:29.597 [2024-12-13 05:51:29.596122] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed0e0) with pdu=0x200016ee1f80 00:35:29.597 [2024-12-13 05:51:29.596833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:2011 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.597 [2024-12-13 05:51:29.596851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:35:29.597 [2024-12-13 05:51:29.606262] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed0e0) with pdu=0x200016ee2c28 00:35:29.597 [2024-12-13 05:51:29.607060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:63 nsid:1 lba:19518 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.597 [2024-12-13 05:51:29.607082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:29.857 [2024-12-13 05:51:29.615335] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed0e0) with pdu=0x200016edfdc0 00:35:29.857 [2024-12-13 05:51:29.616121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:14737 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.857 [2024-12-13 05:51:29.616140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:29.857 [2024-12-13 05:51:29.624510] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed0e0) with pdu=0x200016efd208 00:35:29.857 [2024-12-13 05:51:29.625281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:17773 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.857 [2024-12-13 05:51:29.625298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:35:29.857 [2024-12-13 05:51:29.633672] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed0e0) with pdu=0x200016ef1430 00:35:29.857 [2024-12-13 05:51:29.634326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:22332 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.857 [2024-12-13 05:51:29.634344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:35:29.857 [2024-12-13 05:51:29.643035] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed0e0) with pdu=0x200016eea248 00:35:29.857 [2024-12-13 05:51:29.643930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:12546 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.857 [2024-12-13 05:51:29.643949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:29.857 [2024-12-13 05:51:29.652156] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed0e0) with pdu=0x200016eec840 00:35:29.857 [2024-12-13 05:51:29.653268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:18237 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.857 [2024-12-13 05:51:29.653286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:29.857 [2024-12-13 05:51:29.661137] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed0e0) with pdu=0x200016ef5be8 00:35:29.857 [2024-12-13 05:51:29.662204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:1994 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.857 [2024-12-13 05:51:29.662222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:29.857 [2024-12-13 05:51:29.669263] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed0e0) with pdu=0x200016efd640 00:35:29.857 [2024-12-13 05:51:29.670724] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:11931 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.857 [2024-12-13 05:51:29.670741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:35:29.857 [2024-12-13 05:51:29.677012] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed0e0) with pdu=0x200016eee190 00:35:29.857 [2024-12-13 05:51:29.677715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:13875 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.857 [2024-12-13 05:51:29.677733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:35:29.857 [2024-12-13 05:51:29.686896] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed0e0) with pdu=0x200016ee38d0 00:35:29.857 [2024-12-13 05:51:29.687774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:6325 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.857 [2024-12-13 05:51:29.687793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:35:29.857 [2024-12-13 05:51:29.696103] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed0e0) with pdu=0x200016ef46d0 00:35:29.857 [2024-12-13 05:51:29.697056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:7544 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.857 [2024-12-13 05:51:29.697074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:35:29.857 [2024-12-13 05:51:29.704563] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed0e0) with pdu=0x200016eec840 00:35:29.857 [2024-12-13 05:51:29.705507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:9711 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.857 [2024-12-13 05:51:29.705525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:35:29.857 [2024-12-13 05:51:29.714421] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed0e0) with pdu=0x200016edf550 00:35:29.857 [2024-12-13 05:51:29.715516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:20436 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.857 [2024-12-13 05:51:29.715533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:35:29.857 [2024-12-13 05:51:29.723592] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed0e0) with pdu=0x200016ee6fa8 00:35:29.857 [2024-12-13 05:51:29.724761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:2406 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.857 [2024-12-13 05:51:29.724779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:35:29.857 [2024-12-13 05:51:29.731126] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed0e0) with pdu=0x200016ef4298 00:35:29.857 [2024-12-13 
05:51:29.731638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:23472 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.857 [2024-12-13 05:51:29.731656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:35:29.857 [2024-12-13 05:51:29.740197] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed0e0) with pdu=0x200016ee6fa8 00:35:29.857 [2024-12-13 05:51:29.741064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:22940 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.857 [2024-12-13 05:51:29.741081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:35:29.857 [2024-12-13 05:51:29.749333] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed0e0) with pdu=0x200016efdeb0 00:35:29.857 [2024-12-13 05:51:29.749973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:14805 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.857 [2024-12-13 05:51:29.749991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:35:29.857 [2024-12-13 05:51:29.758647] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed0e0) with pdu=0x200016ee7818 00:35:29.857 [2024-12-13 05:51:29.759553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:4141 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.857 [2024-12-13 05:51:29.759571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:29.857 [2024-12-13 05:51:29.767705] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed0e0) with pdu=0x200016ef9b30 00:35:29.857 [2024-12-13 05:51:29.768795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:23195 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.857 [2024-12-13 05:51:29.768813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:29.857 [2024-12-13 05:51:29.776609] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed0e0) with pdu=0x200016ee23b8 00:35:29.857 [2024-12-13 05:51:29.777655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:19578 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.858 [2024-12-13 05:51:29.777672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:29.858 [2024-12-13 05:51:29.784855] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed0e0) with pdu=0x200016ee5220 00:35:29.858 [2024-12-13 05:51:29.786133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:21017 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.858 [2024-12-13 05:51:29.786151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:35:29.858 [2024-12-13 05:51:29.793062] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed0e0) with pdu=0x200016ef31b8 
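
Because the controller is attached with --nvme-error-stat (visible in the setup trace for the next run below), each of these (00/22) completions is tallied per status code in the nvme bdev's statistics rather than just logged. The harness reads that tally back over the bperf RPC socket after the run, as the get_transient_errcount trace further down shows; a condensed sketch of that readout, run from the SPDK checkout, with the socket path, bdev name, and jq filter exactly as they appear in this log:

    # Ask bdevperf for nvme0n1's I/O stats and pull out the count of
    # transient transport errors recorded so far.
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'

The test only asserts that the count is positive (the (( 223 > 0 )) check further down), since the exact number depends on timing.
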
00:35:29.858 [2024-12-13 05:51:29.793804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:5466 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.858 [2024-12-13 05:51:29.793822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:35:29.858 [2024-12-13 05:51:29.801355] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed0e0) with pdu=0x200016efc998 00:35:29.858 [2024-12-13 05:51:29.802046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:1875 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.858 [2024-12-13 05:51:29.802064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:35:29.858 [2024-12-13 05:51:29.811260] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed0e0) with pdu=0x200016ee84c0 00:35:29.858 [2024-12-13 05:51:29.812164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:14796 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.858 [2024-12-13 05:51:29.812183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:35:29.858 [2024-12-13 05:51:29.820180] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed0e0) with pdu=0x200016ede8a8 00:35:29.858 [2024-12-13 05:51:29.821106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:5745 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.858 [2024-12-13 05:51:29.821124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:35:29.858 [2024-12-13 05:51:29.829099] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed0e0) with pdu=0x200016ef6cc8 00:35:29.858 [2024-12-13 05:51:29.829928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:2215 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.858 [2024-12-13 05:51:29.829946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:35:29.858 [2024-12-13 05:51:29.838258] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed0e0) with pdu=0x200016eeea00 00:35:29.858 [2024-12-13 05:51:29.839207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:10723 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.858 [2024-12-13 05:51:29.839228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:35:29.858 [2024-12-13 05:51:29.848392] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed0e0) with pdu=0x200016efbcf0 00:35:29.858 [2024-12-13 05:51:29.849860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:25417 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.858 [2024-12-13 05:51:29.849877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:35:29.858 [2024-12-13 05:51:29.856877] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed0e0) with 
pdu=0x200016ee4578 00:35:29.858 [2024-12-13 05:51:29.857990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:10910 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.858 [2024-12-13 05:51:29.858008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:35:29.858 [2024-12-13 05:51:29.864894] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed0e0) with pdu=0x200016ef31b8 00:35:29.858 [2024-12-13 05:51:29.866384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:17220 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.858 [2024-12-13 05:51:29.866403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:35:30.118 [2024-12-13 05:51:29.873528] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed0e0) with pdu=0x200016efd640 00:35:30.118 [2024-12-13 05:51:29.874237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:13122 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.118 [2024-12-13 05:51:29.874256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:35:30.118 [2024-12-13 05:51:29.882569] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed0e0) with pdu=0x200016ef1430 00:35:30.118 [2024-12-13 05:51:29.883277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:7958 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.118 [2024-12-13 05:51:29.883295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:35:30.118 [2024-12-13 05:51:29.891478] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed0e0) with pdu=0x200016eef270 00:35:30.118 [2024-12-13 05:51:29.892204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22097 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.118 [2024-12-13 05:51:29.892222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:35:30.118 [2024-12-13 05:51:29.900570] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed0e0) with pdu=0x200016ef0ff8 00:35:30.118 [2024-12-13 05:51:29.901282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:13775 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.118 [2024-12-13 05:51:29.901299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:35:30.118 [2024-12-13 05:51:29.909436] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed0e0) with pdu=0x200016ee6738 00:35:30.118 [2024-12-13 05:51:29.910149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:12123 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.118 [2024-12-13 05:51:29.910167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:35:30.118 [2024-12-13 05:51:29.918309] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x22ed0e0) with pdu=0x200016ef1ca0 00:35:30.118 [2024-12-13 05:51:29.919033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:7624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.118 [2024-12-13 05:51:29.919051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:35:30.118 [2024-12-13 05:51:29.927227] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed0e0) with pdu=0x200016ee1b48 00:35:30.118 [2024-12-13 05:51:29.927949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:23341 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.118 [2024-12-13 05:51:29.927966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:35:30.118 [2024-12-13 05:51:29.936109] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed0e0) with pdu=0x200016ef6020 00:35:30.118 [2024-12-13 05:51:29.936806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:24146 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.118 [2024-12-13 05:51:29.936824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:35:30.118 [2024-12-13 05:51:29.945460] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed0e0) with pdu=0x200016ee95a0 00:35:30.118 [2024-12-13 05:51:29.946273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:19207 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.118 [2024-12-13 05:51:29.946291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:35:30.118 [2024-12-13 05:51:29.954778] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed0e0) with pdu=0x200016ee4140 00:35:30.118 [2024-12-13 05:51:29.955735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:9316 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.118 [2024-12-13 05:51:29.955753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:35:30.118 [2024-12-13 05:51:29.963780] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed0e0) with pdu=0x200016ef31b8 00:35:30.118 [2024-12-13 05:51:29.964739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:24259 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.118 [2024-12-13 05:51:29.964758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:35:30.118 [2024-12-13 05:51:29.972690] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed0e0) with pdu=0x200016ef7da8 00:35:30.118 [2024-12-13 05:51:29.973668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:4686 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.118 [2024-12-13 05:51:29.973686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:35:30.118 [2024-12-13 05:51:29.981847] tcp.c:2241:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x22ed0e0) with pdu=0x200016edf988 00:35:30.118 [2024-12-13 05:51:29.982895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.118 [2024-12-13 05:51:29.982913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:35:30.118 [2024-12-13 05:51:29.990977] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed0e0) with pdu=0x200016ef0bc0 00:35:30.118 [2024-12-13 05:51:29.992088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:15970 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.118 [2024-12-13 05:51:29.992106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:35:30.118 [2024-12-13 05:51:29.999865] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed0e0) with pdu=0x200016edfdc0 00:35:30.118 [2024-12-13 05:51:30.000986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:2197 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.118 [2024-12-13 05:51:30.001013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:35:30.118 [2024-12-13 05:51:30.008988] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed0e0) with pdu=0x200016eea248 00:35:30.118 [2024-12-13 05:51:30.010089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:6318 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.118 [2024-12-13 05:51:30.010110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:35:30.118 [2024-12-13 05:51:30.018493] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed0e0) with pdu=0x200016ee73e0 00:35:30.118 [2024-12-13 05:51:30.019599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:1326 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.118 [2024-12-13 05:51:30.019621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:35:30.118 [2024-12-13 05:51:30.027637] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed0e0) with pdu=0x200016ee5a90 00:35:30.118 [2024-12-13 05:51:30.028743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:6054 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.118 [2024-12-13 05:51:30.028762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:35:30.118 [2024-12-13 05:51:30.037368] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed0e0) with pdu=0x200016efac10 00:35:30.118 [2024-12-13 05:51:30.038453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:16403 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.118 [2024-12-13 05:51:30.038475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:35:30.118 [2024-12-13 05:51:30.045925] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed0e0) with pdu=0x200016eeaef0
00:35:30.118 [2024-12-13 05:51:30.046998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:4050 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:30.118 [2024-12-13 05:51:30.047017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:35:30.118 [2024-12-13 05:51:30.055581] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed0e0) with pdu=0x200016ee6738
00:35:30.118 [2024-12-13 05:51:30.056711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:12301 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:30.118 [2024-12-13 05:51:30.056731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:35:30.118 [2024-12-13 05:51:30.064322] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed0e0) with pdu=0x200016ef0bc0
00:35:30.118 28399.50 IOPS, 110.94 MiB/s
[2024-12-13T04:51:30.133Z] [2024-12-13 05:51:30.065257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9861 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:30.118 [2024-12-13 05:51:30.065274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:35:30.118
00:35:30.118 Latency(us)
00:35:30.118 [2024-12-13T04:51:30.133Z] Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:35:30.118 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:35:30.118 nvme0n1                     :       2.00   28404.25     110.95       0.00     0.00    4500.54    1817.84   14417.92
00:35:30.118 [2024-12-13T04:51:30.133Z] ===================================================================================================================
00:35:30.118 [2024-12-13T04:51:30.133Z] Total                       :            28404.25     110.95       0.00     0.00    4500.54    1817.84   14417.92
00:35:30.118 {
00:35:30.118   "results": [
00:35:30.118     {
00:35:30.118       "job": "nvme0n1",
00:35:30.118       "core_mask": "0x2",
00:35:30.118       "workload": "randwrite",
00:35:30.118       "status": "finished",
00:35:30.118       "queue_depth": 128,
00:35:30.118       "io_size": 4096,
00:35:30.118       "runtime": 2.004172,
00:35:30.118       "iops": 28404.248737134338,
00:35:30.118       "mibps": 110.954096629431,
00:35:30.118       "io_failed": 0,
00:35:30.118       "io_timeout": 0,
00:35:30.118       "avg_latency_us": 4500.542336810635,
00:35:30.118       "min_latency_us": 1817.8438095238096,
00:35:30.118       "max_latency_us": 14417.92
00:35:30.118     }
00:35:30.118   ],
00:35:30.118   "core_count": 1
00:35:30.118 }
00:35:30.118 05:51:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
05:51:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
05:51:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:35:30.119 | .driver_specific
00:35:30.119 | .nvme_error
00:35:30.119 | .status_code
00:35:30.119 | .command_transient_transport_error'
00:35:30.119 05:51:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:35:30.378 05:51:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 223 > 0 ))
00:35:30.378 05:51:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 531113
00:35:30.378 05:51:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 531113 ']'
00:35:30.378 05:51:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 531113
00:35:30.378 05:51:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:35:30.378 05:51:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:35:30.378 05:51:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 531113
00:35:30.378 05:51:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:35:30.378 05:51:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:35:30.378 05:51:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 531113'
00:35:30.378 killing process with pid 531113
00:35:30.378 05:51:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 531113
00:35:30.378 Received shutdown signal, test time was about 2.000000 seconds
00:35:30.378
00:35:30.378 Latency(us)
00:35:30.378 [2024-12-13T04:51:30.393Z] Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:35:30.378 [2024-12-13T04:51:30.393Z] ===================================================================================================================
00:35:30.378 [2024-12-13T04:51:30.393Z] Total                       :       0.00       0.00       0.00       0.00       0.00       0.00       0.00
00:35:30.378 05:51:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 531113
00:35:30.638 05:51:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:35:30.638 05:51:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:35:30.638 05:51:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:35:30.638 05:51:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:35:30.638 05:51:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:35:30.638 05:51:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=531685
00:35:30.638 05:51:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 531685 /var/tmp/bperf.sock
00:35:30.638 05:51:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:35:30.638 05:51:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 531685 ']'
00:35:30.638 05:51:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:35:30.638 05:51:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:35:30.638 05:51:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:35:30.638 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:35:30.638 05:51:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:35:30.638 05:51:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:35:30.638 [2024-12-13 05:51:30.551149] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization...
00:35:30.638 [2024-12-13 05:51:30.551197] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid531685 ]
00:35:30.638 I/O size of 131072 is greater than zero copy threshold (65536).
00:35:30.638 Zero copy mechanism will not be used.
00:35:30.638 [2024-12-13 05:51:30.610697] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:35:30.638 [2024-12-13 05:51:30.632780] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:35:30.898 05:51:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:35:30.898 05:51:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:35:30.898 05:51:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:35:30.898 05:51:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:35:31.157 05:51:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:35:31.157 05:51:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:31.157 05:51:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:35:31.157 05:51:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:31.157 05:51:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:35:31.157 05:51:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:35:31.418 nvme0n1
00:35:31.418 05:51:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:35:31.418 05:51:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:31.418 05:51:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:35:31.418 05:51:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:31.418 05:51:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py
perform_tests 00:35:31.418 05:51:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:31.418 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:31.418 Zero copy mechanism will not be used. 00:35:31.418 Running I/O for 2 seconds... 00:35:31.418 [2024-12-13 05:51:31.320845] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:31.418 [2024-12-13 05:51:31.320949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.418 [2024-12-13 05:51:31.320979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:31.418 [2024-12-13 05:51:31.325724] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:31.418 [2024-12-13 05:51:31.325798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.418 [2024-12-13 05:51:31.325820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:31.418 [2024-12-13 05:51:31.330369] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:31.418 [2024-12-13 05:51:31.330445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.418 [2024-12-13 05:51:31.330471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:31.418 [2024-12-13 05:51:31.335272] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:31.418 [2024-12-13 05:51:31.335351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.418 [2024-12-13 05:51:31.335372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:31.418 [2024-12-13 05:51:31.339985] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:31.418 [2024-12-13 05:51:31.340072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.418 [2024-12-13 05:51:31.340091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:31.418 [2024-12-13 05:51:31.345263] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:31.418 [2024-12-13 05:51:31.345334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.418 [2024-12-13 05:51:31.345353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:31.418 [2024-12-13 05:51:31.351049] tcp.c:2241:data_crc32_calc_done: 
00:35:31.418 [2024-12-13 05:51:31.320845] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8
00:35:31.418 [2024-12-13 05:51:31.320949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:31.418 [2024-12-13 05:51:31.320979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:31.418 [2024-12-13 05:51:31.325724] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8
00:35:31.418 [2024-12-13 05:51:31.325798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:31.418 [2024-12-13 05:51:31.325820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:31.418 [2024-12-13 05:51:31.330369] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8
00:35:31.418 [2024-12-13 05:51:31.330445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:31.418 [2024-12-13 05:51:31.330471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:31.418 [2024-12-13 05:51:31.335272] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8
00:35:31.418 [2024-12-13 05:51:31.335351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:31.418 [2024-12-13 05:51:31.335372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
[... 05:51:31.339985 through 05:51:31.936420: the same three-line triple repeats for every injected error — Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8, a retried WRITE (len:32, lba varying, cid alternating 0/1 later in the run), and a TRANSIENT TRANSPORT ERROR (00/22) completion with sqhd stepping by 0x20 ...]
00:35:31.945 [2024-12-13 05:51:31.940115] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8
00:35:31.945 [2024-12-13 05:51:31.940278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:31.945 [2024-12-13 05:51:31.940296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:31.945
[2024-12-13 05:51:31.944270] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:31.945 [2024-12-13 05:51:31.944440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.945 [2024-12-13 05:51:31.944463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:31.945 [2024-12-13 05:51:31.948366] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:31.945 [2024-12-13 05:51:31.948526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.945 [2024-12-13 05:51:31.948543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:31.945 [2024-12-13 05:51:31.952467] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:31.945 [2024-12-13 05:51:31.952628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.945 [2024-12-13 05:51:31.952645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:31.945 [2024-12-13 05:51:31.956414] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:31.945 [2024-12-13 05:51:31.956584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.945 [2024-12-13 05:51:31.956603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:32.206 [2024-12-13 05:51:31.960346] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:32.206 [2024-12-13 05:51:31.960490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.206 [2024-12-13 05:51:31.960508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:32.206 [2024-12-13 05:51:31.964930] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:32.206 [2024-12-13 05:51:31.965150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.206 [2024-12-13 05:51:31.965169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:32.206 [2024-12-13 05:51:31.970299] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:32.206 [2024-12-13 05:51:31.970434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.206 [2024-12-13 05:51:31.970459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 
p:0 m:0 dnr:0 00:35:32.206 [2024-12-13 05:51:31.974569] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:32.206 [2024-12-13 05:51:31.974809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.206 [2024-12-13 05:51:31.974827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:32.206 [2024-12-13 05:51:31.979609] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:32.206 [2024-12-13 05:51:31.979769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.206 [2024-12-13 05:51:31.979786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:32.206 [2024-12-13 05:51:31.984809] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:32.206 [2024-12-13 05:51:31.984975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.206 [2024-12-13 05:51:31.984993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:32.206 [2024-12-13 05:51:31.989881] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:32.206 [2024-12-13 05:51:31.990035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.206 [2024-12-13 05:51:31.990053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:32.206 [2024-12-13 05:51:31.995457] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:32.206 [2024-12-13 05:51:31.995641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.206 [2024-12-13 05:51:31.995659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:32.206 [2024-12-13 05:51:32.000850] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:32.206 [2024-12-13 05:51:32.001027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.206 [2024-12-13 05:51:32.001045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:32.206 [2024-12-13 05:51:32.005008] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:32.206 [2024-12-13 05:51:32.005106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.206 [2024-12-13 05:51:32.005124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:32.206 [2024-12-13 05:51:32.009223] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:32.206 [2024-12-13 05:51:32.009358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.206 [2024-12-13 05:51:32.009376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:32.206 [2024-12-13 05:51:32.013416] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:32.206 [2024-12-13 05:51:32.013619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.206 [2024-12-13 05:51:32.013638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:32.206 [2024-12-13 05:51:32.017616] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:32.206 [2024-12-13 05:51:32.017813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.206 [2024-12-13 05:51:32.017839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:32.206 [2024-12-13 05:51:32.021756] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:32.206 [2024-12-13 05:51:32.021925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.207 [2024-12-13 05:51:32.021943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:32.207 [2024-12-13 05:51:32.025666] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:32.207 [2024-12-13 05:51:32.025825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.207 [2024-12-13 05:51:32.025842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:32.207 [2024-12-13 05:51:32.030347] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:32.207 [2024-12-13 05:51:32.030582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.207 [2024-12-13 05:51:32.030601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:32.207 [2024-12-13 05:51:32.035761] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:32.207 [2024-12-13 05:51:32.036017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.207 [2024-12-13 05:51:32.036036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:32.207 [2024-12-13 05:51:32.039749] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:32.207 [2024-12-13 05:51:32.039872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.207 [2024-12-13 05:51:32.039890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:32.207 [2024-12-13 05:51:32.043667] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:32.207 [2024-12-13 05:51:32.043837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.207 [2024-12-13 05:51:32.043854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:32.207 [2024-12-13 05:51:32.047829] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:32.207 [2024-12-13 05:51:32.048024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.207 [2024-12-13 05:51:32.048041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:32.207 [2024-12-13 05:51:32.051888] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:32.207 [2024-12-13 05:51:32.052099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.207 [2024-12-13 05:51:32.052121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:32.207 [2024-12-13 05:51:32.056156] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:32.207 [2024-12-13 05:51:32.056317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.207 [2024-12-13 05:51:32.056334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:32.207 [2024-12-13 05:51:32.060199] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:32.207 [2024-12-13 05:51:32.060356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.207 [2024-12-13 05:51:32.060373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:32.207 [2024-12-13 05:51:32.064335] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:32.207 [2024-12-13 05:51:32.064495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.207 [2024-12-13 05:51:32.064513] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:32.207 [2024-12-13 05:51:32.068337] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:32.207 [2024-12-13 05:51:32.068534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.207 [2024-12-13 05:51:32.068551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:32.207 [2024-12-13 05:51:32.072418] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:32.207 [2024-12-13 05:51:32.072615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.207 [2024-12-13 05:51:32.072634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:32.207 [2024-12-13 05:51:32.076737] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:32.207 [2024-12-13 05:51:32.076916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.207 [2024-12-13 05:51:32.076934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:32.207 [2024-12-13 05:51:32.080924] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:32.207 [2024-12-13 05:51:32.081066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.207 [2024-12-13 05:51:32.081083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:32.207 [2024-12-13 05:51:32.085189] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:32.207 [2024-12-13 05:51:32.085351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.207 [2024-12-13 05:51:32.085369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:32.207 [2024-12-13 05:51:32.089305] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:32.207 [2024-12-13 05:51:32.089436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.207 [2024-12-13 05:51:32.089462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:32.207 [2024-12-13 05:51:32.093265] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:32.207 [2024-12-13 05:51:32.093477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.207 [2024-12-13 05:51:32.093495] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:32.207 [2024-12-13 05:51:32.097281] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:32.207 [2024-12-13 05:51:32.097504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.207 [2024-12-13 05:51:32.097524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:32.207 [2024-12-13 05:51:32.101339] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:32.207 [2024-12-13 05:51:32.101569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.207 [2024-12-13 05:51:32.101588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:32.207 [2024-12-13 05:51:32.105326] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:32.207 [2024-12-13 05:51:32.105498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.207 [2024-12-13 05:51:32.105516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:32.207 [2024-12-13 05:51:32.109397] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:32.207 [2024-12-13 05:51:32.109574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.207 [2024-12-13 05:51:32.109591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:32.207 [2024-12-13 05:51:32.113437] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:32.207 [2024-12-13 05:51:32.113628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.207 [2024-12-13 05:51:32.113646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:32.207 [2024-12-13 05:51:32.117568] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:32.207 [2024-12-13 05:51:32.117742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.207 [2024-12-13 05:51:32.117759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:32.207 [2024-12-13 05:51:32.121774] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:32.207 [2024-12-13 05:51:32.121937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.207 [2024-12-13 
05:51:32.121955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:32.207 [2024-12-13 05:51:32.125839] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:32.208 [2024-12-13 05:51:32.126005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.208 [2024-12-13 05:51:32.126022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:32.208 [2024-12-13 05:51:32.129899] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:32.208 [2024-12-13 05:51:32.130099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.208 [2024-12-13 05:51:32.130116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:32.208 [2024-12-13 05:51:32.134040] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:32.208 [2024-12-13 05:51:32.134208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.208 [2024-12-13 05:51:32.134225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:32.208 [2024-12-13 05:51:32.138077] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:32.208 [2024-12-13 05:51:32.138232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.208 [2024-12-13 05:51:32.138250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:32.208 [2024-12-13 05:51:32.142997] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:32.208 [2024-12-13 05:51:32.143153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.208 [2024-12-13 05:51:32.143170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:32.208 [2024-12-13 05:51:32.147310] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:32.208 [2024-12-13 05:51:32.147453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.208 [2024-12-13 05:51:32.147471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:32.208 [2024-12-13 05:51:32.151333] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:32.208 [2024-12-13 05:51:32.151563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:35:32.208 [2024-12-13 05:51:32.151582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:32.208 [2024-12-13 05:51:32.156409] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:32.208 [2024-12-13 05:51:32.156632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.208 [2024-12-13 05:51:32.156651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:32.208 [2024-12-13 05:51:32.161376] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:32.208 [2024-12-13 05:51:32.161520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.208 [2024-12-13 05:51:32.161541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:32.208 [2024-12-13 05:51:32.165289] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:32.208 [2024-12-13 05:51:32.165464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.208 [2024-12-13 05:51:32.165481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:32.208 [2024-12-13 05:51:32.169275] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:32.208 [2024-12-13 05:51:32.169471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.208 [2024-12-13 05:51:32.169489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:32.208 [2024-12-13 05:51:32.173316] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:32.208 [2024-12-13 05:51:32.173493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.208 [2024-12-13 05:51:32.173510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:32.208 [2024-12-13 05:51:32.176999] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:32.208 [2024-12-13 05:51:32.177165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.208 [2024-12-13 05:51:32.177182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:32.208 [2024-12-13 05:51:32.180816] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:32.208 [2024-12-13 05:51:32.180988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12000 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.208 [2024-12-13 05:51:32.181005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:32.208 [2024-12-13 05:51:32.185732] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:32.208 [2024-12-13 05:51:32.185973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.208 [2024-12-13 05:51:32.185991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:32.208 [2024-12-13 05:51:32.190743] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:32.208 [2024-12-13 05:51:32.190912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.208 [2024-12-13 05:51:32.190929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:32.208 [2024-12-13 05:51:32.194944] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:32.208 [2024-12-13 05:51:32.195114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.208 [2024-12-13 05:51:32.195131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:32.208 [2024-12-13 05:51:32.199222] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:32.208 [2024-12-13 05:51:32.199351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.208 [2024-12-13 05:51:32.199368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:32.208 [2024-12-13 05:51:32.203315] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:32.208 [2024-12-13 05:51:32.203515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.208 [2024-12-13 05:51:32.203533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:32.208 [2024-12-13 05:51:32.207504] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:32.208 [2024-12-13 05:51:32.207671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.208 [2024-12-13 05:51:32.207688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:32.208 [2024-12-13 05:51:32.211511] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:32.208 [2024-12-13 05:51:32.211668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.208 [2024-12-13 05:51:32.211685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:32.208 [2024-12-13 05:51:32.215766] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:32.208 [2024-12-13 05:51:32.215911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.208 [2024-12-13 05:51:32.215929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:32.208 [2024-12-13 05:51:32.220053] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:32.208 [2024-12-13 05:51:32.220164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.208 [2024-12-13 05:51:32.220182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:32.468 [2024-12-13 05:51:32.225722] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:32.468 [2024-12-13 05:51:32.225849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.468 [2024-12-13 05:51:32.225867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:32.468 [2024-12-13 05:51:32.231035] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:32.468 [2024-12-13 05:51:32.231188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.468 [2024-12-13 05:51:32.231205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:32.468 [2024-12-13 05:51:32.237244] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:32.468 [2024-12-13 05:51:32.237363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.468 [2024-12-13 05:51:32.237380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:32.468 [2024-12-13 05:51:32.243595] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:32.468 [2024-12-13 05:51:32.243723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.468 [2024-12-13 05:51:32.243741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:32.468 [2024-12-13 05:51:32.249698] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:32.468 [2024-12-13 05:51:32.249948] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.468 [2024-12-13 05:51:32.249967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:32.468 [2024-12-13 05:51:32.255340] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:32.469 [2024-12-13 05:51:32.255576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.469 [2024-12-13 05:51:32.255595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:32.469 [2024-12-13 05:51:32.260828] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:32.469 [2024-12-13 05:51:32.260983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.469 [2024-12-13 05:51:32.261000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:32.469 [2024-12-13 05:51:32.265169] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:32.469 [2024-12-13 05:51:32.265316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.469 [2024-12-13 05:51:32.265334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:32.469 [2024-12-13 05:51:32.269173] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:32.469 [2024-12-13 05:51:32.269359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.469 [2024-12-13 05:51:32.269377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:32.469 [2024-12-13 05:51:32.273542] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:32.469 [2024-12-13 05:51:32.273702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.469 [2024-12-13 05:51:32.273719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:32.469 [2024-12-13 05:51:32.278044] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:32.469 [2024-12-13 05:51:32.278202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.469 [2024-12-13 05:51:32.278220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:32.469 [2024-12-13 05:51:32.282547] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:32.469 [2024-12-13 05:51:32.282705] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.469 [2024-12-13 05:51:32.282725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:32.469 [2024-12-13 05:51:32.286852] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:32.469 [2024-12-13 05:51:32.287067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.469 [2024-12-13 05:51:32.287085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:32.469 [2024-12-13 05:51:32.290867] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:32.469 [2024-12-13 05:51:32.291005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.469 [2024-12-13 05:51:32.291022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:32.469 [2024-12-13 05:51:32.294980] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:32.469 [2024-12-13 05:51:32.295129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.469 [2024-12-13 05:51:32.295146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:32.469 [2024-12-13 05:51:32.298959] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:32.469 [2024-12-13 05:51:32.299145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.469 [2024-12-13 05:51:32.299162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:32.469 [2024-12-13 05:51:32.303109] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:32.469 [2024-12-13 05:51:32.303294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.469 [2024-12-13 05:51:32.303311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:32.469 [2024-12-13 05:51:32.307049] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:32.469 [2024-12-13 05:51:32.307204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.469 [2024-12-13 05:51:32.307221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:32.469 [2024-12-13 05:51:32.310715] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:32.469 [2024-12-13 
05:51:32.310921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.469 [2024-12-13 05:51:32.310939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:32.469 [2024-12-13 05:51:32.315418] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:32.469 [2024-12-13 05:51:32.316768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.469 [2024-12-13 05:51:32.316788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:32.469 6664.00 IOPS, 833.00 MiB/s [2024-12-13T04:51:32.484Z] [2024-12-13 05:51:32.321822] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:32.469 [2024-12-13 05:51:32.321995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.469 [2024-12-13 05:51:32.322012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:32.469 [2024-12-13 05:51:32.326589] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:32.469 [2024-12-13 05:51:32.326683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.469 [2024-12-13 05:51:32.326703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:32.469 [2024-12-13 05:51:32.331646] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:32.469 [2024-12-13 05:51:32.331769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.469 [2024-12-13 05:51:32.331790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:32.469 [2024-12-13 05:51:32.336547] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:32.469 [2024-12-13 05:51:32.336600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.469 [2024-12-13 05:51:32.336618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:32.469 [2024-12-13 05:51:32.340690] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:32.469 [2024-12-13 05:51:32.340744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.469 [2024-12-13 05:51:32.340762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:32.469 [2024-12-13 05:51:32.344823] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:32.469 [2024-12-13 05:51:32.344905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.469 [2024-12-13 05:51:32.344923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:32.469 [2024-12-13 05:51:32.349006] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:32.469 [2024-12-13 05:51:32.349076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.469 [2024-12-13 05:51:32.349094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:32.469 [2024-12-13 05:51:32.353328] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:32.469 [2024-12-13 05:51:32.353399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.469 [2024-12-13 05:51:32.353417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:32.469 [2024-12-13 05:51:32.357430] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:32.469 [2024-12-13 05:51:32.357508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.469 [2024-12-13 05:51:32.357526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:32.469 [2024-12-13 05:51:32.361503] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:32.469 [2024-12-13 05:51:32.361573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.469 [2024-12-13 05:51:32.361591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:32.469 [2024-12-13 05:51:32.365584] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:32.469 [2024-12-13 05:51:32.365636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.469 [2024-12-13 05:51:32.365654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:32.469 [2024-12-13 05:51:32.369571] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:32.469 [2024-12-13 05:51:32.369641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.470 [2024-12-13 05:51:32.369659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:32.470 [2024-12-13 05:51:32.373748] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8
00:35:32.470 [2024-12-13 05:51:32.373805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:32.470 [2024-12-13 05:51:32.373822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:32.470 [2024-12-13 05:51:32.377725] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8
00:35:32.470 [2024-12-13 05:51:32.377788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:32.470 [2024-12-13 05:51:32.377805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
[... the same three-line pattern (Data digest error on tqpair=(0x22ed5c0), WRITE command print, COMMAND TRANSIENT TRANSPORT ERROR (00/22) with dnr:0 and sqhd cycling 0002/0022/0042/0062) repeats for many more LBAs, from 05:51:32.381 through 05:51:33.031 ...]
00:35:33.258 [2024-12-13 05:51:33.037380] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8
00:35:33.258 [2024-12-13 05:51:33.037433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:33.258 [2024-12-13 05:51:33.037456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*:
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:33.258 [2024-12-13 05:51:33.042694] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:33.258 [2024-12-13 05:51:33.042747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.258 [2024-12-13 05:51:33.042764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:33.258 [2024-12-13 05:51:33.046848] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:33.258 [2024-12-13 05:51:33.046914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.258 [2024-12-13 05:51:33.046932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:33.258 [2024-12-13 05:51:33.050817] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:33.258 [2024-12-13 05:51:33.050867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.258 [2024-12-13 05:51:33.050884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:33.258 [2024-12-13 05:51:33.054643] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:33.258 [2024-12-13 05:51:33.054692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.258 [2024-12-13 05:51:33.054709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:33.258 [2024-12-13 05:51:33.058538] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:33.258 [2024-12-13 05:51:33.058592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.258 [2024-12-13 05:51:33.058609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:33.258 [2024-12-13 05:51:33.062518] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:33.258 [2024-12-13 05:51:33.062574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.258 [2024-12-13 05:51:33.062591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:33.258 [2024-12-13 05:51:33.066430] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:33.258 [2024-12-13 05:51:33.066488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.258 [2024-12-13 05:51:33.066505] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:33.258 [2024-12-13 05:51:33.070465] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:33.258 [2024-12-13 05:51:33.070544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.258 [2024-12-13 05:51:33.070562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:33.258 [2024-12-13 05:51:33.074331] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:33.258 [2024-12-13 05:51:33.074388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.258 [2024-12-13 05:51:33.074405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:33.258 [2024-12-13 05:51:33.078185] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:33.258 [2024-12-13 05:51:33.078237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.258 [2024-12-13 05:51:33.078254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:33.258 [2024-12-13 05:51:33.082235] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:33.258 [2024-12-13 05:51:33.082323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.258 [2024-12-13 05:51:33.082341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:33.258 [2024-12-13 05:51:33.087055] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:33.258 [2024-12-13 05:51:33.087192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.258 [2024-12-13 05:51:33.087225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:33.258 [2024-12-13 05:51:33.091365] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:33.258 [2024-12-13 05:51:33.091422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.258 [2024-12-13 05:51:33.091439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:33.258 [2024-12-13 05:51:33.095168] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:33.258 [2024-12-13 05:51:33.095268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.258 [2024-12-13 
05:51:33.095285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:33.258 [2024-12-13 05:51:33.099047] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:33.258 [2024-12-13 05:51:33.099118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.258 [2024-12-13 05:51:33.099136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:33.258 [2024-12-13 05:51:33.102880] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:33.258 [2024-12-13 05:51:33.102970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.258 [2024-12-13 05:51:33.102987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:33.259 [2024-12-13 05:51:33.106812] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:33.259 [2024-12-13 05:51:33.106881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.259 [2024-12-13 05:51:33.106898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:33.259 [2024-12-13 05:51:33.110631] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:33.259 [2024-12-13 05:51:33.110708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.259 [2024-12-13 05:51:33.110726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:33.259 [2024-12-13 05:51:33.114494] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:33.259 [2024-12-13 05:51:33.114582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.259 [2024-12-13 05:51:33.114600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:33.259 [2024-12-13 05:51:33.118488] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:33.259 [2024-12-13 05:51:33.118556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.259 [2024-12-13 05:51:33.118573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:33.259 [2024-12-13 05:51:33.123188] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:33.259 [2024-12-13 05:51:33.123288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:33.259 [2024-12-13 05:51:33.123305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:33.259 [2024-12-13 05:51:33.127260] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:33.259 [2024-12-13 05:51:33.127310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.259 [2024-12-13 05:51:33.127326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:33.259 [2024-12-13 05:51:33.131384] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:33.259 [2024-12-13 05:51:33.131442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.259 [2024-12-13 05:51:33.131468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:33.259 [2024-12-13 05:51:33.135301] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:33.259 [2024-12-13 05:51:33.135351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.259 [2024-12-13 05:51:33.135368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:33.259 [2024-12-13 05:51:33.139037] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:33.259 [2024-12-13 05:51:33.139127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.259 [2024-12-13 05:51:33.139145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:33.259 [2024-12-13 05:51:33.142838] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:33.259 [2024-12-13 05:51:33.142904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.259 [2024-12-13 05:51:33.142922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:33.259 [2024-12-13 05:51:33.146576] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:33.259 [2024-12-13 05:51:33.146647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.259 [2024-12-13 05:51:33.146665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:33.259 [2024-12-13 05:51:33.150325] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:33.259 [2024-12-13 05:51:33.150396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6720 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:35:33.259 [2024-12-13 05:51:33.150413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:33.259 [2024-12-13 05:51:33.154112] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:33.259 [2024-12-13 05:51:33.154172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.259 [2024-12-13 05:51:33.154189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:33.259 [2024-12-13 05:51:33.157859] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:33.259 [2024-12-13 05:51:33.157912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.259 [2024-12-13 05:51:33.157930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:33.259 [2024-12-13 05:51:33.161609] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:33.259 [2024-12-13 05:51:33.161659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.259 [2024-12-13 05:51:33.161677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:33.259 [2024-12-13 05:51:33.165304] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:33.259 [2024-12-13 05:51:33.165385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.259 [2024-12-13 05:51:33.165403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:33.259 [2024-12-13 05:51:33.169226] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:33.259 [2024-12-13 05:51:33.169319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.259 [2024-12-13 05:51:33.169337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:33.259 [2024-12-13 05:51:33.172927] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:33.259 [2024-12-13 05:51:33.172995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.259 [2024-12-13 05:51:33.173013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:33.259 [2024-12-13 05:51:33.176585] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:33.259 [2024-12-13 05:51:33.176664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.259 [2024-12-13 05:51:33.176682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:33.259 [2024-12-13 05:51:33.180311] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:33.259 [2024-12-13 05:51:33.180375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.259 [2024-12-13 05:51:33.180393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:33.259 [2024-12-13 05:51:33.184014] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:33.259 [2024-12-13 05:51:33.184082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.259 [2024-12-13 05:51:33.184099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:33.259 [2024-12-13 05:51:33.187712] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:33.259 [2024-12-13 05:51:33.187769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.259 [2024-12-13 05:51:33.187786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:33.259 [2024-12-13 05:51:33.191394] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:33.259 [2024-12-13 05:51:33.191463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.259 [2024-12-13 05:51:33.191480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:33.259 [2024-12-13 05:51:33.195150] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:33.259 [2024-12-13 05:51:33.195211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.259 [2024-12-13 05:51:33.195229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:33.259 [2024-12-13 05:51:33.198841] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:33.259 [2024-12-13 05:51:33.198894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.259 [2024-12-13 05:51:33.198911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:33.260 [2024-12-13 05:51:33.202547] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:33.260 [2024-12-13 05:51:33.202615] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.260 [2024-12-13 05:51:33.202633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:33.260 [2024-12-13 05:51:33.206221] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:33.260 [2024-12-13 05:51:33.206278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.260 [2024-12-13 05:51:33.206295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:33.260 [2024-12-13 05:51:33.209922] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:33.260 [2024-12-13 05:51:33.209981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.260 [2024-12-13 05:51:33.209998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:33.260 [2024-12-13 05:51:33.213866] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:33.260 [2024-12-13 05:51:33.213958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.260 [2024-12-13 05:51:33.213976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:33.260 [2024-12-13 05:51:33.218732] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:33.260 [2024-12-13 05:51:33.218812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.260 [2024-12-13 05:51:33.218830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:33.260 [2024-12-13 05:51:33.223604] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:33.260 [2024-12-13 05:51:33.223718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.260 [2024-12-13 05:51:33.223735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:33.260 [2024-12-13 05:51:33.228461] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:33.260 [2024-12-13 05:51:33.228636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.260 [2024-12-13 05:51:33.228654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:33.260 [2024-12-13 05:51:33.233771] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:33.260 [2024-12-13 05:51:33.233955] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.260 [2024-12-13 05:51:33.233976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:33.260 [2024-12-13 05:51:33.239283] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:33.260 [2024-12-13 05:51:33.239402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.260 [2024-12-13 05:51:33.239420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:33.260 [2024-12-13 05:51:33.245630] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:33.260 [2024-12-13 05:51:33.245800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.260 [2024-12-13 05:51:33.245818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:33.260 [2024-12-13 05:51:33.251636] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:33.260 [2024-12-13 05:51:33.251797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.260 [2024-12-13 05:51:33.251814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:33.260 [2024-12-13 05:51:33.257807] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:33.260 [2024-12-13 05:51:33.257933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.260 [2024-12-13 05:51:33.257951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:33.260 [2024-12-13 05:51:33.264068] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:33.260 [2024-12-13 05:51:33.264236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.260 [2024-12-13 05:51:33.264254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:33.260 [2024-12-13 05:51:33.270120] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:33.260 [2024-12-13 05:51:33.270220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.260 [2024-12-13 05:51:33.270238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:33.520 [2024-12-13 05:51:33.276606] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:33.520 [2024-12-13 
05:51:33.276695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.520 [2024-12-13 05:51:33.276712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:33.520 [2024-12-13 05:51:33.282974] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:33.520 [2024-12-13 05:51:33.283171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.520 [2024-12-13 05:51:33.283197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:33.520 [2024-12-13 05:51:33.289193] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:33.520 [2024-12-13 05:51:33.289293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.520 [2024-12-13 05:51:33.289311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:33.520 [2024-12-13 05:51:33.295619] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:33.520 [2024-12-13 05:51:33.295753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.520 [2024-12-13 05:51:33.295771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:33.520 [2024-12-13 05:51:33.301711] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:33.520 [2024-12-13 05:51:33.301919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.520 [2024-12-13 05:51:33.301938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:33.520 [2024-12-13 05:51:33.307647] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:33.520 [2024-12-13 05:51:33.307770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.520 [2024-12-13 05:51:33.307787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:33.520 [2024-12-13 05:51:33.313601] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8 00:35:33.520 [2024-12-13 05:51:33.313770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.520 [2024-12-13 05:51:33.313788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:33.520 6645.00 IOPS, 830.62 MiB/s [2024-12-13T04:51:33.535Z] [2024-12-13 05:51:33.320200] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data 
00:35:33.520 [2024-12-13 05:51:33.320200] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ed5c0) with pdu=0x200016eff3c8
00:35:33.520 [2024-12-13 05:51:33.320376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:33.520 [2024-12-13 05:51:33.320394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:33.520
00:35:33.520 Latency(us)
00:35:33.520 [2024-12-13T04:51:33.535Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:33.520 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:35:33.520 nvme0n1 : 2.00 6639.81 829.98 0.00 0.00 2404.94 1739.82 10485.76
00:35:33.520 [2024-12-13T04:51:33.535Z] ===================================================================================================================
00:35:33.520 [2024-12-13T04:51:33.535Z] Total : 6639.81 829.98 0.00 0.00 2404.94 1739.82 10485.76
00:35:33.520 {
00:35:33.520   "results": [
00:35:33.520     {
00:35:33.520       "job": "nvme0n1",
00:35:33.520       "core_mask": "0x2",
00:35:33.520       "workload": "randwrite",
00:35:33.520       "status": "finished",
00:35:33.521       "queue_depth": 16,
00:35:33.521       "io_size": 131072,
00:35:33.521       "runtime": 2.004576,
00:35:33.521       "iops": 6639.8081190236735,
00:35:33.521       "mibps": 829.9760148779592,
00:35:33.521       "io_failed": 0,
00:35:33.521       "io_timeout": 0,
00:35:33.521       "avg_latency_us": 2404.9365566884903,
00:35:33.521       "min_latency_us": 1739.824761904762,
00:35:33.521       "max_latency_us": 10485.76
00:35:33.521     }
00:35:33.521   ],
00:35:33.521   "core_count": 1
00:35:33.521 }
00:35:33.521 05:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:35:33.521 05:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:35:33.521 05:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
00:35:33.780 05:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:35:33.780 05:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 430 > 0 ))
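The check above passes because 430 WRITEs completed with the transient transport error status. The count comes from bdevperf's per-bdev iostat, filtered with the jq expression traced above. A minimal standalone sketch of the same query, assuming SPDK's scripts/rpc.py and a bdevperf instance serving RPCs on /var/tmp/bperf.sock:

    # Ask bdevperf for nvme0n1's iostat and pull out the number of completions
    # that failed with COMMAND TRANSIENT TRANSPORT ERROR (the injected digest errors).
    errcount=$(scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
    if (( errcount > 0 )); then
        echo "nvme0n1 saw $errcount transient transport errors"
    fi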
00:35:33.780 05:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 531685
00:35:33.780 05:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 531685 ']'
00:35:33.780 05:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 531685
00:35:33.780 05:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:35:33.780 05:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:35:33.780 05:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 531685
00:35:33.780 05:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:35:33.780 05:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:35:33.780 05:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 531685'
00:35:33.780 killing process with pid 531685
00:35:33.780 05:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 531685
00:35:33.780 Received shutdown signal, test time was about 2.000000 seconds
00:35:33.780
00:35:33.780 Latency(us)
00:35:33.780 [2024-12-13T04:51:33.795Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:33.780 [2024-12-13T04:51:33.795Z] ===================================================================================================================
00:35:33.780 [2024-12-13T04:51:33.795Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:35:33.780 05:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 531685
00:35:33.780 05:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 529963
00:35:34.038 05:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 529963 ']'
00:35:34.038 05:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 529963
00:35:34.038 05:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:35:34.038 05:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:35:34.038 05:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 529963
00:35:34.038 05:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:35:34.038 05:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:35:34.038 05:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 529963'
00:35:34.038 killing process with pid 529963
00:35:34.038 05:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 529963
00:35:34.038 05:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 529963
00:35:34.038
00:35:34.038 real    0m13.690s
00:35:34.038 user    0m26.175s
00:35:34.038 sys     0m4.529s
00:35:34.038 05:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable
00:35:34.038 05:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:35:34.038 ************************************
00:35:34.038 END TEST nvmf_digest_error
00:35:34.038 ************************************
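The killprocess helper traced above guards each kill with liveness and identity checks before signalling. A rough reconstruction of that flow, based only on the trace and not the verbatim autotest_common.sh source:

    # Sketch: kill a test process only if it exists and is not the sudo wrapper.
    killprocess() {
        local pid=$1 process_name
        [ -z "$pid" ] && return 1                      # no pid supplied
        kill -0 "$pid" 2>/dev/null || {                # signal 0 just probes existence
            echo "Process with pid $pid is not found"
            return 0
        }
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        [ "$process_name" = sudo ] && return 1         # never kill the sudo wrapper itself
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                    # reap it; works because it is our child
    }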
00:35:34.038 05:51:33 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT
00:35:34.038 05:51:33 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini
00:35:34.038 05:51:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup
00:35:34.038 05:51:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync
00:35:34.038 05:51:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:35:34.038 05:51:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e
00:35:34.038 05:51:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20}
00:35:34.038 05:51:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:35:34.038 rmmod nvme_tcp
00:35:34.038 rmmod nvme_fabrics
00:35:34.038 rmmod nvme_keyring
00:35:34.038 05:51:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:35:34.038 05:51:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e
00:35:34.038 05:51:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0
00:35:34.038 05:51:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 529963 ']'
00:35:34.038 05:51:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 529963
00:35:34.038 05:51:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 529963 ']'
00:35:34.038 05:51:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 529963
00:35:34.039 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (529963) - No such process
00:35:34.039 05:51:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 529963 is not found'
00:35:34.039 Process with pid 529963 is not found
00:35:34.039 05:51:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:35:34.039 05:51:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:35:34.039 05:51:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:35:34.039 05:51:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr
00:35:34.039 05:51:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save
00:35:34.039 05:51:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:35:34.039 05:51:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore
00:35:34.039 05:51:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:35:34.039 05:51:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns
00:35:34.297 05:51:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:35:34.297 05:51:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:35:34.297 05:51:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns
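The iptr step just traced restores the host firewall while discarding any rules the tests tagged with SPDK_NVMF. It is the same three-stage pipeline shown above, usable standalone (root required):

    # Re-apply the saved ruleset minus every rule containing the SPDK_NVMF tag.
    iptables-save | grep -v SPDK_NVMF | iptables-restore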
00:35:36.203 05:51:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:35:36.203
00:35:36.203 real    0m35.930s
00:35:36.203 user    0m54.380s
00:35:36.203 sys     0m13.662s
00:35:36.203 05:51:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable
00:35:36.203 05:51:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x
00:35:36.203 ************************************
00:35:36.203 END TEST nvmf_digest
00:35:36.203 ************************************
00:35:36.203 05:51:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]]
00:35:36.203 05:51:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]]
00:35:36.203 05:51:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]]
00:35:36.203 05:51:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp
00:35:36.203 05:51:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:35:36.203 05:51:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:35:36.203 05:51:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:35:36.203 ************************************
00:35:36.203 START TEST nvmf_bdevperf
00:35:36.203 ************************************
00:35:36.203 05:51:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp
00:35:36.462 * Looking for test storage...
00:35:36.462 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:35:36.462 05:51:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:35:36.462 05:51:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lcov --version
00:35:36.462 05:51:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:35:36.462 05:51:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:35:36.462 05:51:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:35:36.462 [... scripts/common.sh@333-368 xtrace omitted: cmp_versions splits both version strings on '.' (IFS=.-:), validates each field with decimal, compares field by field (1 < 2), and returns 0, so the installed lcov 1.15 is treated as older than 2.x ...]
00:35:36.463 05:51:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:35:36.463 05:51:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1'
00:35:36.463 [... matching LCOV_OPTS= and LCOV='lcov ...' assignments with the same flag set omitted ...]
00:35:36.463 05:51:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:35:36.463 05:51:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s
00:35:36.463 05:51:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
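The lt/cmp_versions trace above implements a dotted-version comparison: split both versions into numeric fields, walk the longer of the two field lists, and decide at the first unequal field. A condensed sketch of that logic (names mirror the trace; this is not the verbatim scripts/common.sh source):

    # Sketch: succeed if dotted version $1 is strictly less than $2.
    lt() {
        local -a ver1 ver2
        local v len
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$2"
        len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # first smaller field wins
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # first larger field loses
        done
        return 1   # equal versions are not strictly less-than
    }

    lt 1.15 2 && echo "lcov 1.15 predates 2.x"   # matches the trace's return 0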
00:35:36.463 05:51:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:35:36.463 05:51:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:35:36.463 05:51:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:35:36.463 05:51:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:35:36.463 05:51:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:35:36.463 05:51:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:35:36.463 05:51:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:35:36.463 05:51:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:35:36.463 05:51:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:35:36.463 05:51:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:35:36.463 05:51:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562
00:35:36.463 05:51:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:35:36.463 05:51:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:35:36.463 05:51:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:35:36.463 05:51:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:35:36.463 05:51:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:35:36.463 05:51:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob
00:35:36.463 05:51:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:35:36.463 05:51:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:35:36.463 05:51:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:35:36.463 [... paths/export.sh@2-6 xtrace omitted: the golangci, protoc, and go tool directories are repeatedly prepended to PATH, the result is exported, and the same very long PATH string is echoed; no other state changes ...]
00:35:36.463 05:51:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0
00:35:36.463 05:51:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:35:36.463 05:51:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:35:36.463 05:51:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:35:36.463 05:51:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:35:36.463 05:51:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:35:36.463 05:51:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:35:36.463 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:35:36.463 05:51:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:35:36.463 05:51:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:35:36.463 05:51:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0
00:35:36.463 05:51:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64
00:35:36.463 05:51:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:35:36.463 05:51:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit
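Among the defaults sourced above, the host identity comes from nvme-cli: the NQN is freshly generated and the host ID is just the UUID suffix of that NQN, as the two assignments in the trace show. A small sketch of that derivation, assuming nvme-cli is installed:

    # Generate a host NQN and derive the matching host ID from its uuid suffix.
    NVME_HOSTNQN=$(nvme gen-hostnqn)     # e.g. nqn.2014-08.org.nvmexpress:uuid:80b56b8f-...
    NVME_HOSTID=${NVME_HOSTNQN##*:}      # strip everything up to the last ':', leaving the UUID
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")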
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:36.463 05:51:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:36.463 05:51:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:36.463 05:51:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:36.463 05:51:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:36.463 05:51:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:36.463 05:51:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:36.463 05:51:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:36.463 05:51:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:36.463 05:51:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:36.463 05:51:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:35:36.463 05:51:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:43.031 05:51:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:43.031 05:51:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:35:43.031 05:51:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:43.031 05:51:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:43.031 05:51:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:43.031 05:51:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:43.031 05:51:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:43.031 05:51:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:35:43.031 05:51:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:43.031 05:51:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:35:43.031 05:51:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:35:43.031 05:51:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:35:43.031 05:51:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:35:43.031 05:51:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:35:43.031 05:51:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:35:43.031 05:51:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:43.031 05:51:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:43.031 05:51:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:43.031 05:51:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:43.031 05:51:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:43.031 05:51:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:43.031 05:51:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:43.031 05:51:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:43.031 05:51:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:43.031 05:51:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:43.031 05:51:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:43.031 05:51:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:43.031 05:51:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:43.031 05:51:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:43.031 05:51:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:43.031 05:51:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:43.031 05:51:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:43.031 05:51:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:43.031 05:51:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:43.031 05:51:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:35:43.031 Found 0000:af:00.0 (0x8086 - 0x159b) 00:35:43.031 05:51:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:43.031 05:51:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:43.031 05:51:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:43.031 05:51:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:43.031 05:51:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:43.031 05:51:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:43.031 05:51:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:35:43.031 Found 0000:af:00.1 (0x8086 - 0x159b) 00:35:43.031 05:51:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:43.031 05:51:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:43.031 05:51:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:43.031 05:51:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:43.031 05:51:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:43.031 05:51:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:43.031 05:51:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:43.031 05:51:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:43.031 05:51:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:43.031 05:51:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:43.031 05:51:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:35:43.031 05:51:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:43.031 05:51:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:43.031 05:51:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:43.031 05:51:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:43.031 05:51:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:35:43.031 Found net devices under 0000:af:00.0: cvl_0_0 00:35:43.031 05:51:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:43.031 05:51:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:43.031 05:51:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:43.031 05:51:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:43.031 05:51:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:43.031 05:51:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:43.031 05:51:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:43.031 05:51:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:43.032 05:51:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:35:43.032 Found net devices under 0000:af:00.1: cvl_0_1 00:35:43.032 05:51:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:43.032 05:51:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:43.032 05:51:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:35:43.032 05:51:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:43.032 05:51:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:43.032 05:51:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:43.032 05:51:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:43.032 05:51:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:43.032 05:51:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:43.032 05:51:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:43.032 05:51:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:43.032 05:51:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:43.032 05:51:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:43.032 05:51:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:43.032 05:51:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:43.032 05:51:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:43.032 05:51:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:35:43.032 05:51:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:43.032 05:51:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:43.032 05:51:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:43.032 05:51:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:43.032 05:51:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:43.032 05:51:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:43.032 05:51:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:43.032 05:51:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:43.032 05:51:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:43.032 05:51:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:43.032 05:51:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:43.032 05:51:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:43.032 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:43.032 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.342 ms 00:35:43.032 00:35:43.032 --- 10.0.0.2 ping statistics --- 00:35:43.032 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:43.032 rtt min/avg/max/mdev = 0.342/0.342/0.342/0.000 ms 00:35:43.032 05:51:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:43.032 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:43.032 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:35:43.032 00:35:43.032 --- 10.0.0.1 ping statistics --- 00:35:43.032 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:43.032 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:35:43.032 05:51:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:43.032 05:51:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:35:43.032 05:51:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:43.032 05:51:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:43.032 05:51:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:43.032 05:51:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:43.032 05:51:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:43.032 05:51:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:43.032 05:51:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:43.032 05:51:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:35:43.032 05:51:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:35:43.032 05:51:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:43.032 05:51:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:43.032 05:51:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:43.032 05:51:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=535717 00:35:43.032 05:51:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:35:43.032 05:51:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 535717 00:35:43.032 05:51:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 535717 ']' 00:35:43.032 05:51:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:43.032 05:51:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:43.032 05:51:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:43.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:43.032 05:51:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:43.032 05:51:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:43.032 [2024-12-13 05:51:42.322108] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
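At this point nvmftestinit has assembled the whole test network by hand, and the two pings above prove it out: port cvl_0_0 was moved into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2), cvl_0_1 stayed in the root namespace as the initiator (10.0.0.1), and the firewall was opened for the NVMe/TCP port. A standalone sketch of the same setup, assuming the cvl_0_* interface names discovered above and root privileges:

# target lives in its own namespace; initiator stays in the root namespace
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# let NVMe/TCP traffic in, then verify the path in both directions
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1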
00:35:43.032 [2024-12-13 05:51:42.322156] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:43.032 [2024-12-13 05:51:42.401784] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:35:43.032 [2024-12-13 05:51:42.424640] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:43.032 [2024-12-13 05:51:42.424676] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:43.032 [2024-12-13 05:51:42.424683] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:43.032 [2024-12-13 05:51:42.424688] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:43.032 [2024-12-13 05:51:42.424694] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:43.032 [2024-12-13 05:51:42.425911] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:35:43.032 [2024-12-13 05:51:42.426021] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:35:43.032 [2024-12-13 05:51:42.426022] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:35:43.032 05:51:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:43.032 05:51:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:35:43.032 05:51:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:43.032 05:51:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:43.032 05:51:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:43.032 05:51:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:43.032 05:51:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:43.032 05:51:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:43.032 05:51:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:43.032 [2024-12-13 05:51:42.556571] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:43.032 05:51:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:43.032 05:51:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:43.032 05:51:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:43.032 05:51:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:43.032 Malloc0 00:35:43.032 05:51:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:43.032 05:51:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:43.032 05:51:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:43.032 05:51:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:43.032 05:51:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:35:43.032 05:51:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:43.032 05:51:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:43.032 05:51:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:43.032 05:51:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:43.032 05:51:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:43.032 05:51:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:43.032 05:51:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:43.032 [2024-12-13 05:51:42.621664] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:43.032 05:51:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:43.032 05:51:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:35:43.032 05:51:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:35:43.032 05:51:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:35:43.032 05:51:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:35:43.032 05:51:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:43.032 05:51:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:43.032 { 00:35:43.032 "params": { 00:35:43.032 "name": "Nvme$subsystem", 00:35:43.032 "trtype": "$TEST_TRANSPORT", 00:35:43.032 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:43.032 "adrfam": "ipv4", 00:35:43.032 "trsvcid": "$NVMF_PORT", 00:35:43.032 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:43.032 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:43.033 "hdgst": ${hdgst:-false}, 00:35:43.033 "ddgst": ${ddgst:-false} 00:35:43.033 }, 00:35:43.033 "method": "bdev_nvme_attach_controller" 00:35:43.033 } 00:35:43.033 EOF 00:35:43.033 )") 00:35:43.033 05:51:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:35:43.033 05:51:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:35:43.033 05:51:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:35:43.033 05:51:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:43.033 "params": { 00:35:43.033 "name": "Nvme1", 00:35:43.033 "trtype": "tcp", 00:35:43.033 "traddr": "10.0.0.2", 00:35:43.033 "adrfam": "ipv4", 00:35:43.033 "trsvcid": "4420", 00:35:43.033 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:43.033 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:43.033 "hdgst": false, 00:35:43.033 "ddgst": false 00:35:43.033 }, 00:35:43.033 "method": "bdev_nvme_attach_controller" 00:35:43.033 }' 00:35:43.033 [2024-12-13 05:51:42.672387] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
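With the listener up, the target side is complete: tgt_init started nvmf_tgt inside the namespace, created the TCP transport with the traced options (-o, and -u 8192 for an 8 KiB I/O unit), backed it with a 64 MiB, 512-byte-block malloc bdev, attached that bdev as a namespace of cnode1, and opened 10.0.0.2:4420. Since rpc_cmd in this harness forwards its arguments to SPDK's scripts/rpc.py, the same provisioning can be sketched as five plain RPC calls against the default /var/tmp/spdk.sock:

# provision the NVMe-oF/TCP target as the traced rpc_cmd sequence does
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420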
00:35:43.033 [2024-12-13 05:51:42.672427] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid535753 ] 00:35:43.033 [2024-12-13 05:51:42.746954] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:43.033 [2024-12-13 05:51:42.769192] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:35:43.033 Running I/O for 1 seconds... 00:35:44.406 11257.00 IOPS, 43.97 MiB/s 00:35:44.406 Latency(us) 00:35:44.406 [2024-12-13T04:51:44.421Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:44.406 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:35:44.406 Verification LBA range: start 0x0 length 0x4000 00:35:44.406 Nvme1n1 : 1.01 11294.17 44.12 0.00 0.00 11291.14 2465.40 13793.77 00:35:44.406 [2024-12-13T04:51:44.421Z] =================================================================================================================== 00:35:44.406 [2024-12-13T04:51:44.421Z] Total : 11294.17 44.12 0.00 0.00 11291.14 2465.40 13793.77 00:35:44.406 05:51:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=535974 00:35:44.406 05:51:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:35:44.406 05:51:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:35:44.406 05:51:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:35:44.406 05:51:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:35:44.406 05:51:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:35:44.406 05:51:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:44.406 05:51:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:44.406 { 00:35:44.406 "params": { 00:35:44.406 "name": "Nvme$subsystem", 00:35:44.406 "trtype": "$TEST_TRANSPORT", 00:35:44.406 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:44.406 "adrfam": "ipv4", 00:35:44.406 "trsvcid": "$NVMF_PORT", 00:35:44.406 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:44.406 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:44.406 "hdgst": ${hdgst:-false}, 00:35:44.406 "ddgst": ${ddgst:-false} 00:35:44.406 }, 00:35:44.406 "method": "bdev_nvme_attach_controller" 00:35:44.406 } 00:35:44.406 EOF 00:35:44.406 )") 00:35:44.406 05:51:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:35:44.406 05:51:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 
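The gen_nvmf_target_json expansion traced just above (and once before, for the first run) boils the whole initiator configuration down to a single bdev_nvme_attach_controller entry, which bdevperf reads via --json from /dev/fd/62 or /dev/fd/63. For reference, the resolved entry exactly as printed in the trace, written here to a hypothetical file instead of a file descriptor; the function's cat/jq plumbing assembles entries like this into the final document bdevperf consumes:

# resolved controller entry from the trace (file path is illustrative only)
cat > /tmp/nvme1_attach.json <<'EOF'
{
  "params": {
    "name": "Nvme1",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode1",
    "hostnqn": "nqn.2016-06.io.spdk:host1",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF

Note the second run's flags, -q 128 -o 4096 -w verify -t 15 -f: the same 128-deep, 4 KiB verify workload, but for 15 seconds and with -f, which as used by this test keeps bdevperf running through the kill -9 of the target that follows, producing the abort flood below.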
00:35:44.406 05:51:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:35:44.406 05:51:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:44.406 "params": { 00:35:44.406 "name": "Nvme1", 00:35:44.406 "trtype": "tcp", 00:35:44.406 "traddr": "10.0.0.2", 00:35:44.406 "adrfam": "ipv4", 00:35:44.406 "trsvcid": "4420", 00:35:44.406 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:44.406 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:44.406 "hdgst": false, 00:35:44.406 "ddgst": false 00:35:44.406 }, 00:35:44.406 "method": "bdev_nvme_attach_controller" 00:35:44.406 }' 00:35:44.406 [2024-12-13 05:51:44.209705] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:35:44.406 [2024-12-13 05:51:44.209751] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid535974 ] 00:35:44.406 [2024-12-13 05:51:44.284941] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:44.406 [2024-12-13 05:51:44.305249] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:35:44.664 Running I/O for 15 seconds... 00:35:46.654 11349.00 IOPS, 44.33 MiB/s [2024-12-13T04:51:47.267Z] 11327.50 IOPS, 44.25 MiB/s [2024-12-13T04:51:47.267Z] 05:51:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 535717 00:35:47.252 05:51:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:35:47.252 [2024-12-13 05:51:47.179649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:100104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:47.252 [2024-12-13 05:51:47.179686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:47.252 [2024-12-13 05:51:47.179704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:100112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:47.252 [2024-12-13 05:51:47.179713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:47.252 [2024-12-13 05:51:47.179724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:100120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:47.252 [2024-12-13 05:51:47.179731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:47.252 [2024-12-13 05:51:47.179741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:100128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:47.252 [2024-12-13 05:51:47.179748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:47.252 [2024-12-13 05:51:47.179756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:100136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:47.252 [2024-12-13 05:51:47.179764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:47.252 [2024-12-13 05:51:47.179772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:100144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:47.252 [2024-12-13 
05:51:47.179779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:47.252
[repeated entries condensed: after the kill -9, the host printed one nvme_io_qpair_print_command / spdk_nvme_print_completion pair per request still outstanding on qpair 1, READs and WRITEs of len:8 across lbas roughly 99152 through 100168, every one completing ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0; the tail of the flood continues below]
00:35:47.255 [2024-12-13 05:51:47.181272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:99928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:47.255 [2024-12-13 05:51:47.181278] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:47.255 [2024-12-13 05:51:47.181286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:99936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:47.255 [2024-12-13 05:51:47.181294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:47.255 [2024-12-13 05:51:47.181302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:99944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:47.255 [2024-12-13 05:51:47.181308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:47.255 [2024-12-13 05:51:47.181315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:99952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:47.255 [2024-12-13 05:51:47.181322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:47.255 [2024-12-13 05:51:47.181329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:99960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:47.255 [2024-12-13 05:51:47.181335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:47.255 [2024-12-13 05:51:47.181343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:99968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:47.255 [2024-12-13 05:51:47.181350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:47.255 [2024-12-13 05:51:47.181357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:99976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:47.255 [2024-12-13 05:51:47.181364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:47.255 [2024-12-13 05:51:47.181371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:99984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:47.255 [2024-12-13 05:51:47.181377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:47.255 [2024-12-13 05:51:47.181385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:99992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:47.255 [2024-12-13 05:51:47.181391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:47.255 [2024-12-13 05:51:47.181399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:100000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:47.255 [2024-12-13 05:51:47.181406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:47.255 [2024-12-13 05:51:47.181415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:100008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:47.255 [2024-12-13 05:51:47.181421] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:47.255 [2024-12-13 05:51:47.181429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:100016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:47.255 [2024-12-13 05:51:47.181435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:47.255 [2024-12-13 05:51:47.181442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:100024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:47.255 [2024-12-13 05:51:47.181563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:47.255 [2024-12-13 05:51:47.181573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:100032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:47.255 [2024-12-13 05:51:47.181579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:47.255 [2024-12-13 05:51:47.181588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:100040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:47.255 [2024-12-13 05:51:47.181596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:47.255 [2024-12-13 05:51:47.181604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:100048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:47.255 [2024-12-13 05:51:47.181610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:47.255 [2024-12-13 05:51:47.181618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:100056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:47.255 [2024-12-13 05:51:47.181624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:47.255 [2024-12-13 05:51:47.181632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:100064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:47.255 [2024-12-13 05:51:47.181638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:47.255 [2024-12-13 05:51:47.181646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:100072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:47.255 [2024-12-13 05:51:47.181652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:47.255 [2024-12-13 05:51:47.181660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:100080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:47.255 [2024-12-13 05:51:47.181666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:47.255 [2024-12-13 05:51:47.181675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:100088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:47.255 [2024-12-13 05:51:47.181681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:47.255 [2024-12-13 05:51:47.181689] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1827cb0 is same with the state(6) to be set 00:35:47.255 [2024-12-13 05:51:47.181698] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:47.255 [2024-12-13 05:51:47.181703] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:47.255 [2024-12-13 05:51:47.181708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:100096 len:8 PRP1 0x0 PRP2 0x0 00:35:47.255 [2024-12-13 05:51:47.181715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:47.255 [2024-12-13 05:51:47.184560] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:47.255 [2024-12-13 05:51:47.184615] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:47.255 [2024-12-13 05:51:47.185214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.255 [2024-12-13 05:51:47.185229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:47.255 [2024-12-13 05:51:47.185236] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:47.255 [2024-12-13 05:51:47.185411] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:47.255 [2024-12-13 05:51:47.185591] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:47.255 [2024-12-13 05:51:47.185599] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:47.255 [2024-12-13 05:51:47.185610] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:47.255 [2024-12-13 05:51:47.185618] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
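Every command in the abort dump above completes with status (00/08), printed as ABORTED - SQ DELETION: in NVMe terms that is status code type 0x0 (Generic Command Status) with status code 0x08 (Command Aborted due to SQ Deletion), i.e. the I/O was still sitting on the submission queue when the queue was torn down for the disconnect. A minimal decode sketch in C (illustrative only, not SPDK's own spdk_nvme_print_completion):

    /* Hypothetical helper: map the (SCT/SC) pair printed in the log to
     * its spec name. Only the codes relevant to this log are decoded;
     * the full tables live in the NVMe base specification. */
    #include <stdint.h>
    #include <stdio.h>

    static const char *nvme_status_name(uint8_t sct, uint8_t sc)
    {
            if (sct == 0x0 && sc == 0x00)
                    return "SUCCESS";
            if (sct == 0x0 && sc == 0x08)
                    return "ABORTED - SQ DELETION";
            return "OTHER";
    }

    int main(void)
    {
            /* The (00/08) pair from the completions above. */
            printf("(00/08) -> %s\n", nvme_status_name(0x00, 0x08));
            return 0;
    }

Note dnr:0 in each completion: the do-not-retry bit is clear, so these aborted commands remain eligible to be retried once the controller is reachable again.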
00:35:47.255 [2024-12-13 05:51:47.197831] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:47.255 [2024-12-13 05:51:47.198258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.256 [2024-12-13 05:51:47.198276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:47.256 [2024-12-13 05:51:47.198284] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:47.256 [2024-12-13 05:51:47.198462] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:47.256 [2024-12-13 05:51:47.198636] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:47.256 [2024-12-13 05:51:47.198644] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:47.256 [2024-12-13 05:51:47.198651] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:47.256 [2024-12-13 05:51:47.198657] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:47.256 [2024-12-13 05:51:47.210837] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:47.256 [2024-12-13 05:51:47.211259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.256 [2024-12-13 05:51:47.211305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:47.256 [2024-12-13 05:51:47.211328] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:47.256 [2024-12-13 05:51:47.211927] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:47.256 [2024-12-13 05:51:47.212400] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:47.256 [2024-12-13 05:51:47.212408] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:47.256 [2024-12-13 05:51:47.212414] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:47.256 [2024-12-13 05:51:47.212420] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:47.256 [2024-12-13 05:51:47.223656] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:47.256 [2024-12-13 05:51:47.224080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.256 [2024-12-13 05:51:47.224096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:47.256 [2024-12-13 05:51:47.224103] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:47.256 [2024-12-13 05:51:47.224270] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:47.256 [2024-12-13 05:51:47.224438] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:47.256 [2024-12-13 05:51:47.224446] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:47.256 [2024-12-13 05:51:47.224460] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:47.256 [2024-12-13 05:51:47.224466] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:47.256 [2024-12-13 05:51:47.236520] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:47.256 [2024-12-13 05:51:47.236942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.256 [2024-12-13 05:51:47.236958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:47.256 [2024-12-13 05:51:47.236965] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:47.256 [2024-12-13 05:51:47.237125] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:47.256 [2024-12-13 05:51:47.237284] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:47.256 [2024-12-13 05:51:47.237291] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:47.256 [2024-12-13 05:51:47.237297] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:47.256 [2024-12-13 05:51:47.237303] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:47.256 [2024-12-13 05:51:47.249497] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:47.256 [2024-12-13 05:51:47.249848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.256 [2024-12-13 05:51:47.249864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:47.256 [2024-12-13 05:51:47.249871] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:47.256 [2024-12-13 05:51:47.250044] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:47.256 [2024-12-13 05:51:47.250216] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:47.256 [2024-12-13 05:51:47.250224] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:47.256 [2024-12-13 05:51:47.250230] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:47.256 [2024-12-13 05:51:47.250236] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:47.537 [2024-12-13 05:51:47.262617] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:47.537 [2024-12-13 05:51:47.263053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.537 [2024-12-13 05:51:47.263070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:47.537 [2024-12-13 05:51:47.263077] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:47.537 [2024-12-13 05:51:47.263250] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:47.537 [2024-12-13 05:51:47.263423] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:47.537 [2024-12-13 05:51:47.263431] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:47.537 [2024-12-13 05:51:47.263437] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:47.537 [2024-12-13 05:51:47.263443] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:47.537 [2024-12-13 05:51:47.275585] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:47.537 [2024-12-13 05:51:47.276021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.537 [2024-12-13 05:51:47.276037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:47.537 [2024-12-13 05:51:47.276048] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:47.538 [2024-12-13 05:51:47.276221] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:47.538 [2024-12-13 05:51:47.276394] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:47.538 [2024-12-13 05:51:47.276402] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:47.538 [2024-12-13 05:51:47.276408] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:47.538 [2024-12-13 05:51:47.276414] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:47.538 [2024-12-13 05:51:47.288652] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:47.538 [2024-12-13 05:51:47.289005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.538 [2024-12-13 05:51:47.289021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:47.538 [2024-12-13 05:51:47.289029] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:47.538 [2024-12-13 05:51:47.289201] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:47.538 [2024-12-13 05:51:47.289373] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:47.538 [2024-12-13 05:51:47.289381] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:47.538 [2024-12-13 05:51:47.289387] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:47.538 [2024-12-13 05:51:47.289393] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:47.538 [2024-12-13 05:51:47.301668] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:47.538 [2024-12-13 05:51:47.302040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.538 [2024-12-13 05:51:47.302056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:47.538 [2024-12-13 05:51:47.302064] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:47.538 [2024-12-13 05:51:47.302236] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:47.538 [2024-12-13 05:51:47.302408] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:47.538 [2024-12-13 05:51:47.302416] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:47.538 [2024-12-13 05:51:47.302422] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:47.538 [2024-12-13 05:51:47.302428] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:47.538 [2024-12-13 05:51:47.314532] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:47.538 [2024-12-13 05:51:47.314944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.538 [2024-12-13 05:51:47.314959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:47.538 [2024-12-13 05:51:47.314965] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:47.538 [2024-12-13 05:51:47.315124] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:47.538 [2024-12-13 05:51:47.315285] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:47.538 [2024-12-13 05:51:47.315293] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:47.538 [2024-12-13 05:51:47.315298] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:47.538 [2024-12-13 05:51:47.315304] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:47.538 [2024-12-13 05:51:47.327355] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:47.538 [2024-12-13 05:51:47.327799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.538 [2024-12-13 05:51:47.327815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:47.538 [2024-12-13 05:51:47.327822] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:47.538 [2024-12-13 05:51:47.327990] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:47.538 [2024-12-13 05:51:47.328157] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:47.538 [2024-12-13 05:51:47.328165] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:47.538 [2024-12-13 05:51:47.328172] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:47.538 [2024-12-13 05:51:47.328177] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:47.538 [2024-12-13 05:51:47.340221] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:47.538 [2024-12-13 05:51:47.340656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.538 [2024-12-13 05:51:47.340672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:47.538 [2024-12-13 05:51:47.340678] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:47.538 [2024-12-13 05:51:47.340837] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:47.538 [2024-12-13 05:51:47.340996] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:47.538 [2024-12-13 05:51:47.341003] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:47.538 [2024-12-13 05:51:47.341009] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:47.538 [2024-12-13 05:51:47.341015] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:47.538 [2024-12-13 05:51:47.352961] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:47.538 [2024-12-13 05:51:47.353406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.538 [2024-12-13 05:51:47.353425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:47.538 [2024-12-13 05:51:47.353432] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:47.538 [2024-12-13 05:51:47.353628] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:47.538 [2024-12-13 05:51:47.353802] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:47.538 [2024-12-13 05:51:47.353810] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:47.538 [2024-12-13 05:51:47.353820] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:47.538 [2024-12-13 05:51:47.353827] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:47.538 [2024-12-13 05:51:47.365710] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:47.538 [2024-12-13 05:51:47.366133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.538 [2024-12-13 05:51:47.366148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:47.538 [2024-12-13 05:51:47.366155] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:47.538 [2024-12-13 05:51:47.366314] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:47.538 [2024-12-13 05:51:47.366494] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:47.538 [2024-12-13 05:51:47.366503] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:47.538 [2024-12-13 05:51:47.366509] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:47.538 [2024-12-13 05:51:47.366515] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:47.538 [2024-12-13 05:51:47.378559] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:47.538 [2024-12-13 05:51:47.378978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.538 [2024-12-13 05:51:47.378994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:47.538 [2024-12-13 05:51:47.379001] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:47.538 [2024-12-13 05:51:47.379159] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:47.538 [2024-12-13 05:51:47.379318] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:47.538 [2024-12-13 05:51:47.379326] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:47.538 [2024-12-13 05:51:47.379332] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:47.538 [2024-12-13 05:51:47.379337] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:47.538 [2024-12-13 05:51:47.391395] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:47.538 [2024-12-13 05:51:47.391708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.539 [2024-12-13 05:51:47.391724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:47.539 [2024-12-13 05:51:47.391731] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:47.539 [2024-12-13 05:51:47.391912] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:47.539 [2024-12-13 05:51:47.392080] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:47.539 [2024-12-13 05:51:47.392088] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:47.539 [2024-12-13 05:51:47.392094] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:47.539 [2024-12-13 05:51:47.392100] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:47.539 [2024-12-13 05:51:47.404199] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:47.539 [2024-12-13 05:51:47.404614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.539 [2024-12-13 05:51:47.404630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:47.539 [2024-12-13 05:51:47.404636] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:47.539 [2024-12-13 05:51:47.404803] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:47.539 [2024-12-13 05:51:47.404971] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:47.539 [2024-12-13 05:51:47.404979] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:47.539 [2024-12-13 05:51:47.404985] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:47.539 [2024-12-13 05:51:47.404991] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:47.539 [2024-12-13 05:51:47.416942] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:47.539 [2024-12-13 05:51:47.417360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.539 [2024-12-13 05:51:47.417376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:47.539 [2024-12-13 05:51:47.417382] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:47.539 [2024-12-13 05:51:47.417567] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:47.539 [2024-12-13 05:51:47.417736] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:47.539 [2024-12-13 05:51:47.417743] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:47.539 [2024-12-13 05:51:47.417749] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:47.539 [2024-12-13 05:51:47.417756] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:47.539 [2024-12-13 05:51:47.429777] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:47.539 [2024-12-13 05:51:47.430131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.539 [2024-12-13 05:51:47.430147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:47.539 [2024-12-13 05:51:47.430154] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:47.539 [2024-12-13 05:51:47.430322] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:47.539 [2024-12-13 05:51:47.430494] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:47.539 [2024-12-13 05:51:47.430502] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:47.539 [2024-12-13 05:51:47.430508] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:47.539 [2024-12-13 05:51:47.430514] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:47.539 [2024-12-13 05:51:47.442890] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:47.539 [2024-12-13 05:51:47.443328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.539 [2024-12-13 05:51:47.443346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:47.539 [2024-12-13 05:51:47.443357] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:47.539 [2024-12-13 05:51:47.443538] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:47.539 [2024-12-13 05:51:47.443711] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:47.539 [2024-12-13 05:51:47.443719] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:47.539 [2024-12-13 05:51:47.443726] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:47.539 [2024-12-13 05:51:47.443732] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:47.539 [2024-12-13 05:51:47.455882] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:47.539 [2024-12-13 05:51:47.456313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.539 [2024-12-13 05:51:47.456329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:47.539 [2024-12-13 05:51:47.456336] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:47.539 [2024-12-13 05:51:47.456509] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:47.539 [2024-12-13 05:51:47.456677] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:47.539 [2024-12-13 05:51:47.456685] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:47.539 [2024-12-13 05:51:47.456691] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:47.539 [2024-12-13 05:51:47.456697] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:47.539 [2024-12-13 05:51:47.468801] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:47.539 [2024-12-13 05:51:47.469251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.539 [2024-12-13 05:51:47.469267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:47.539 [2024-12-13 05:51:47.469274] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:47.539 [2024-12-13 05:51:47.469442] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:47.539 [2024-12-13 05:51:47.469616] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:47.539 [2024-12-13 05:51:47.469625] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:47.539 [2024-12-13 05:51:47.469631] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:47.539 [2024-12-13 05:51:47.469637] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:47.539 [2024-12-13 05:51:47.481660] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:47.539 [2024-12-13 05:51:47.482080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.539 [2024-12-13 05:51:47.482095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:47.539 [2024-12-13 05:51:47.482102] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:47.539 [2024-12-13 05:51:47.482261] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:47.539 [2024-12-13 05:51:47.482423] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:47.539 [2024-12-13 05:51:47.482431] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:47.539 [2024-12-13 05:51:47.482436] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:47.539 [2024-12-13 05:51:47.482442] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:47.539 [2024-12-13 05:51:47.494385] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:47.539 [2024-12-13 05:51:47.494776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.539 [2024-12-13 05:51:47.494792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:47.539 [2024-12-13 05:51:47.494798] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:47.539 [2024-12-13 05:51:47.494957] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:47.539 [2024-12-13 05:51:47.495116] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:47.539 [2024-12-13 05:51:47.495123] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:47.539 [2024-12-13 05:51:47.495129] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:47.539 [2024-12-13 05:51:47.495135] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:47.539 [2024-12-13 05:51:47.507184] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:47.539 [2024-12-13 05:51:47.507575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.539 [2024-12-13 05:51:47.507591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:47.539 [2024-12-13 05:51:47.507598] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:47.539 [2024-12-13 05:51:47.507757] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:47.539 [2024-12-13 05:51:47.507916] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:47.539 [2024-12-13 05:51:47.507923] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:47.539 [2024-12-13 05:51:47.507929] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:47.539 [2024-12-13 05:51:47.507935] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:47.539 [2024-12-13 05:51:47.519971] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:47.540 [2024-12-13 05:51:47.520391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.540 [2024-12-13 05:51:47.520406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:47.540 [2024-12-13 05:51:47.520412] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:47.540 [2024-12-13 05:51:47.520597] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:47.540 [2024-12-13 05:51:47.520765] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:47.540 [2024-12-13 05:51:47.520773] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:47.540 [2024-12-13 05:51:47.520782] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:47.540 [2024-12-13 05:51:47.520789] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:47.540 [2024-12-13 05:51:47.532810] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:47.540 [2024-12-13 05:51:47.533158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.540 [2024-12-13 05:51:47.533174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:47.540 [2024-12-13 05:51:47.533181] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:47.540 [2024-12-13 05:51:47.533349] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:47.540 [2024-12-13 05:51:47.533522] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:47.540 [2024-12-13 05:51:47.533530] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:47.540 [2024-12-13 05:51:47.533537] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:47.540 [2024-12-13 05:51:47.533543] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:47.836 [2024-12-13 05:51:47.545776] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:47.836 [2024-12-13 05:51:47.546184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.836 [2024-12-13 05:51:47.546201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:47.836 [2024-12-13 05:51:47.546208] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:47.836 [2024-12-13 05:51:47.546381] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:47.836 [2024-12-13 05:51:47.546559] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:47.836 [2024-12-13 05:51:47.546568] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:47.836 [2024-12-13 05:51:47.546574] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:47.836 [2024-12-13 05:51:47.546580] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:47.836 [2024-12-13 05:51:47.558797] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:47.836 [2024-12-13 05:51:47.559226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.836 [2024-12-13 05:51:47.559243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:47.836 [2024-12-13 05:51:47.559251] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:47.836 [2024-12-13 05:51:47.559424] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:47.836 [2024-12-13 05:51:47.559601] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:47.836 [2024-12-13 05:51:47.559610] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:47.836 [2024-12-13 05:51:47.559617] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:47.836 [2024-12-13 05:51:47.559623] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:47.836 [2024-12-13 05:51:47.571839] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:47.836 [2024-12-13 05:51:47.572269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.836 [2024-12-13 05:51:47.572285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:47.836 [2024-12-13 05:51:47.572293] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:47.836 [2024-12-13 05:51:47.572471] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:47.836 [2024-12-13 05:51:47.572644] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:47.836 [2024-12-13 05:51:47.572653] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:47.836 [2024-12-13 05:51:47.572659] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:47.836 [2024-12-13 05:51:47.572665] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:47.836 [2024-12-13 05:51:47.584939] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:47.836 [2024-12-13 05:51:47.585235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.836 [2024-12-13 05:51:47.585251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420
00:35:47.836 [2024-12-13 05:51:47.585258] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set
00:35:47.836 [2024-12-13 05:51:47.585432] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor
00:35:47.836 [2024-12-13 05:51:47.585610] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:47.836 [2024-12-13 05:51:47.585618] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:47.836 [2024-12-13 05:51:47.585625] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:47.836 [2024-12-13 05:51:47.585631] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:47.836 [2024-12-13 05:51:47.597869] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:47.836 [2024-12-13 05:51:47.598231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.836 [2024-12-13 05:51:47.598275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420
00:35:47.836 [2024-12-13 05:51:47.598298] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set
00:35:47.836 [2024-12-13 05:51:47.598894] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor
00:35:47.836 [2024-12-13 05:51:47.599338] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:47.836 [2024-12-13 05:51:47.599348] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:47.836 [2024-12-13 05:51:47.599354] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:47.836 [2024-12-13 05:51:47.599360] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:47.836 [2024-12-13 05:51:47.610802] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:47.836 [2024-12-13 05:51:47.611160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.836 [2024-12-13 05:51:47.611177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420
00:35:47.836 [2024-12-13 05:51:47.611188] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set
00:35:47.836 [2024-12-13 05:51:47.611360] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor
00:35:47.836 [2024-12-13 05:51:47.611541] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:47.836 [2024-12-13 05:51:47.611550] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:47.836 [2024-12-13 05:51:47.611556] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:47.836 [2024-12-13 05:51:47.611563] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:47.836 [2024-12-13 05:51:47.623797] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:47.836 [2024-12-13 05:51:47.624148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.836 [2024-12-13 05:51:47.624163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420
00:35:47.836 [2024-12-13 05:51:47.624171] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set
00:35:47.836 [2024-12-13 05:51:47.624343] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor
00:35:47.836 [2024-12-13 05:51:47.624521] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:47.836 [2024-12-13 05:51:47.624530] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:47.836 [2024-12-13 05:51:47.624536] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:47.836 [2024-12-13 05:51:47.624542] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:47.836 9592.67 IOPS, 37.47 MiB/s [2024-12-13T04:51:47.851Z] [2024-12-13 05:51:47.637879] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:47.836 [2024-12-13 05:51:47.638247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.836 [2024-12-13 05:51:47.638264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420
00:35:47.836 [2024-12-13 05:51:47.638271] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set
00:35:47.836 [2024-12-13 05:51:47.638444] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor
00:35:47.836 [2024-12-13 05:51:47.638622] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:47.836 [2024-12-13 05:51:47.638631] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:47.836 [2024-12-13 05:51:47.638637] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:47.836 [2024-12-13 05:51:47.638643] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:47.836 [2024-12-13 05:51:47.650855] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:47.836 [2024-12-13 05:51:47.651193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.836 [2024-12-13 05:51:47.651209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420
00:35:47.837 [2024-12-13 05:51:47.651216] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set
00:35:47.837 [2024-12-13 05:51:47.651388] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor
00:35:47.837 [2024-12-13 05:51:47.651568] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:47.837 [2024-12-13 05:51:47.651576] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:47.837 [2024-12-13 05:51:47.651583] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:47.837 [2024-12-13 05:51:47.651589] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:47.837 [2024-12-13 05:51:47.663821] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:47.837 [2024-12-13 05:51:47.664198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.837 [2024-12-13 05:51:47.664214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420
00:35:47.837 [2024-12-13 05:51:47.664222] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set
00:35:47.837 [2024-12-13 05:51:47.664394] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor
00:35:47.837 [2024-12-13 05:51:47.664573] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:47.837 [2024-12-13 05:51:47.664582] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:47.837 [2024-12-13 05:51:47.664588] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:47.837 [2024-12-13 05:51:47.664594] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:47.837 [2024-12-13 05:51:47.676813] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:47.837 [2024-12-13 05:51:47.677170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.837 [2024-12-13 05:51:47.677186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420
00:35:47.837 [2024-12-13 05:51:47.677193] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set
00:35:47.837 [2024-12-13 05:51:47.677361] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor
00:35:47.837 [2024-12-13 05:51:47.677532] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:47.837 [2024-12-13 05:51:47.677541] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:47.837 [2024-12-13 05:51:47.677547] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:47.837 [2024-12-13 05:51:47.677553] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:47.837 [2024-12-13 05:51:47.689650] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:47.837 [2024-12-13 05:51:47.690016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.837 [2024-12-13 05:51:47.690032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420
00:35:47.837 [2024-12-13 05:51:47.690039] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set
00:35:47.837 [2024-12-13 05:51:47.690211] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor
00:35:47.837 [2024-12-13 05:51:47.690383] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:47.837 [2024-12-13 05:51:47.690391] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:47.837 [2024-12-13 05:51:47.690401] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:47.837 [2024-12-13 05:51:47.690407] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:47.837 [2024-12-13 05:51:47.702579] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:47.837 [2024-12-13 05:51:47.702869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.837 [2024-12-13 05:51:47.702885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420
00:35:47.837 [2024-12-13 05:51:47.702892] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set
00:35:47.837 [2024-12-13 05:51:47.703060] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor
00:35:47.837 [2024-12-13 05:51:47.703227] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:47.837 [2024-12-13 05:51:47.703235] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:47.837 [2024-12-13 05:51:47.703241] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:47.837 [2024-12-13 05:51:47.703247] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:47.837 [2024-12-13 05:51:47.715596] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:47.837 [2024-12-13 05:51:47.715906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.837 [2024-12-13 05:51:47.715922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420
00:35:47.837 [2024-12-13 05:51:47.715930] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set
00:35:47.837 [2024-12-13 05:51:47.716097] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor
00:35:47.837 [2024-12-13 05:51:47.716265] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:47.837 [2024-12-13 05:51:47.716273] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:47.837 [2024-12-13 05:51:47.716279] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:47.837 [2024-12-13 05:51:47.716285] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:47.837 [2024-12-13 05:51:47.728418] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:47.837 [2024-12-13 05:51:47.728717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.837 [2024-12-13 05:51:47.728733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420
00:35:47.837 [2024-12-13 05:51:47.728740] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set
00:35:47.837 [2024-12-13 05:51:47.728908] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor
00:35:47.837 [2024-12-13 05:51:47.729076] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:47.837 [2024-12-13 05:51:47.729084] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:47.837 [2024-12-13 05:51:47.729090] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:47.837 [2024-12-13 05:51:47.729097] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:47.837 [2024-12-13 05:51:47.741499] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:47.837 [2024-12-13 05:51:47.741787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.837 [2024-12-13 05:51:47.741803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420
00:35:47.837 [2024-12-13 05:51:47.741810] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set
00:35:47.837 [2024-12-13 05:51:47.741983] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor
00:35:47.837 [2024-12-13 05:51:47.742158] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:47.837 [2024-12-13 05:51:47.742167] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:47.837 [2024-12-13 05:51:47.742173] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:47.837 [2024-12-13 05:51:47.742179] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:47.837 [2024-12-13 05:51:47.754558] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:47.837 [2024-12-13 05:51:47.754911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.837 [2024-12-13 05:51:47.754927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420
00:35:47.837 [2024-12-13 05:51:47.754934] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set
00:35:47.837 [2024-12-13 05:51:47.755107] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor
00:35:47.837 [2024-12-13 05:51:47.755280] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:47.837 [2024-12-13 05:51:47.755288] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:47.837 [2024-12-13 05:51:47.755294] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:47.837 [2024-12-13 05:51:47.755300] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:47.837 [2024-12-13 05:51:47.767674] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:47.837 [2024-12-13 05:51:47.768080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.837 [2024-12-13 05:51:47.768096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420
00:35:47.837 [2024-12-13 05:51:47.768103] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set
00:35:47.837 [2024-12-13 05:51:47.768275] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor
00:35:47.837 [2024-12-13 05:51:47.768454] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:47.837 [2024-12-13 05:51:47.768462] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:47.837 [2024-12-13 05:51:47.768469] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:47.837 [2024-12-13 05:51:47.768475] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:47.838 [2024-12-13 05:51:47.780743] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:47.838 [2024-12-13 05:51:47.781174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.838 [2024-12-13 05:51:47.781191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420
00:35:47.838 [2024-12-13 05:51:47.781203] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set
00:35:47.838 [2024-12-13 05:51:47.781377] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor
00:35:47.838 [2024-12-13 05:51:47.781558] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:47.838 [2024-12-13 05:51:47.781566] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:47.838 [2024-12-13 05:51:47.781573] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:47.838 [2024-12-13 05:51:47.781579] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:47.838 [2024-12-13 05:51:47.793950] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:47.838 [2024-12-13 05:51:47.794399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.838 [2024-12-13 05:51:47.794416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420
00:35:47.838 [2024-12-13 05:51:47.794424] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set
00:35:47.838 [2024-12-13 05:51:47.794612] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor
00:35:47.838 [2024-12-13 05:51:47.794796] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:47.838 [2024-12-13 05:51:47.794804] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:47.838 [2024-12-13 05:51:47.794811] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:47.838 [2024-12-13 05:51:47.794818] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:47.838 [2024-12-13 05:51:47.807021] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:47.838 [2024-12-13 05:51:47.807473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.838 [2024-12-13 05:51:47.807518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420
00:35:47.838 [2024-12-13 05:51:47.807541] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set
00:35:47.838 [2024-12-13 05:51:47.808124] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor
00:35:47.838 [2024-12-13 05:51:47.808721] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:47.838 [2024-12-13 05:51:47.808729] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:47.838 [2024-12-13 05:51:47.808736] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:47.838 [2024-12-13 05:51:47.808742] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:47.838 [2024-12-13 05:51:47.820137] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:47.838 [2024-12-13 05:51:47.820564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.838 [2024-12-13 05:51:47.820581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420
00:35:47.838 [2024-12-13 05:51:47.820588] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set
00:35:47.838 [2024-12-13 05:51:47.820761] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor
00:35:47.838 [2024-12-13 05:51:47.820937] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:47.838 [2024-12-13 05:51:47.820945] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:47.838 [2024-12-13 05:51:47.820951] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:47.838 [2024-12-13 05:51:47.820958] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:47.838 [2024-12-13 05:51:47.833169] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:47.838 [2024-12-13 05:51:47.833547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.838 [2024-12-13 05:51:47.833564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420
00:35:47.838 [2024-12-13 05:51:47.833571] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set
00:35:47.838 [2024-12-13 05:51:47.833744] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor
00:35:47.838 [2024-12-13 05:51:47.833917] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:47.838 [2024-12-13 05:51:47.833925] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:47.838 [2024-12-13 05:51:47.833931] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:47.838 [2024-12-13 05:51:47.833938] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:48.121 [2024-12-13 05:51:47.846153] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:48.121 [2024-12-13 05:51:47.846427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.121 [2024-12-13 05:51:47.846444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420
00:35:48.121 [2024-12-13 05:51:47.846456] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set
00:35:48.121 [2024-12-13 05:51:47.846629] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor
00:35:48.121 [2024-12-13 05:51:47.846801] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:48.121 [2024-12-13 05:51:47.846809] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:48.121 [2024-12-13 05:51:47.846816] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:48.121 [2024-12-13 05:51:47.846823] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:48.121 [2024-12-13 05:51:47.859195] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:48.121 [2024-12-13 05:51:47.859490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.121 [2024-12-13 05:51:47.859507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420
00:35:48.121 [2024-12-13 05:51:47.859515] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set
00:35:48.121 [2024-12-13 05:51:47.859687] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor
00:35:48.121 [2024-12-13 05:51:47.859859] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:48.121 [2024-12-13 05:51:47.859868] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:48.121 [2024-12-13 05:51:47.859877] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:48.121 [2024-12-13 05:51:47.859884] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:48.121 [2024-12-13 05:51:47.872256] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:48.121 [2024-12-13 05:51:47.872689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.121 [2024-12-13 05:51:47.872706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420
00:35:48.121 [2024-12-13 05:51:47.872713] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set
00:35:48.121 [2024-12-13 05:51:47.872886] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor
00:35:48.121 [2024-12-13 05:51:47.873059] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:48.121 [2024-12-13 05:51:47.873067] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:48.121 [2024-12-13 05:51:47.873073] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:48.121 [2024-12-13 05:51:47.873079] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:48.121 [2024-12-13 05:51:47.885210] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:48.121 [2024-12-13 05:51:47.885493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.121 [2024-12-13 05:51:47.885510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420
00:35:48.121 [2024-12-13 05:51:47.885517] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set
00:35:48.121 [2024-12-13 05:51:47.885685] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor
00:35:48.121 [2024-12-13 05:51:47.885853] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:48.121 [2024-12-13 05:51:47.885861] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:48.122 [2024-12-13 05:51:47.885867] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:48.122 [2024-12-13 05:51:47.885873] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:48.122 [2024-12-13 05:51:47.898205] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:48.122 [2024-12-13 05:51:47.898564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.122 [2024-12-13 05:51:47.898581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420
00:35:48.122 [2024-12-13 05:51:47.898588] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set
00:35:48.122 [2024-12-13 05:51:47.898755] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor
00:35:48.122 [2024-12-13 05:51:47.898923] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:48.122 [2024-12-13 05:51:47.898931] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:48.122 [2024-12-13 05:51:47.898937] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:48.122 [2024-12-13 05:51:47.898943] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:48.122 [2024-12-13 05:51:47.911159] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:48.122 [2024-12-13 05:51:47.911513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.122 [2024-12-13 05:51:47.911557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420
00:35:48.122 [2024-12-13 05:51:47.911580] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set
00:35:48.122 [2024-12-13 05:51:47.911833] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor
00:35:48.122 [2024-12-13 05:51:47.911993] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:48.122 [2024-12-13 05:51:47.912001] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:48.122 [2024-12-13 05:51:47.912006] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:48.122 [2024-12-13 05:51:47.912012] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:48.122 [2024-12-13 05:51:47.924018] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:48.122 [2024-12-13 05:51:47.924401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.122 [2024-12-13 05:51:47.924417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420
00:35:48.122 [2024-12-13 05:51:47.924424] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set
00:35:48.122 [2024-12-13 05:51:47.924596] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor
00:35:48.122 [2024-12-13 05:51:47.924764] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:48.122 [2024-12-13 05:51:47.924772] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:48.122 [2024-12-13 05:51:47.924778] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:48.122 [2024-12-13 05:51:47.924784] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:48.122 [2024-12-13 05:51:47.936780] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:48.122 [2024-12-13 05:51:47.937210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.122 [2024-12-13 05:51:47.937253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420
00:35:48.122 [2024-12-13 05:51:47.937276] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set
00:35:48.122 [2024-12-13 05:51:47.937744] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor
00:35:48.122 [2024-12-13 05:51:47.937913] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:48.122 [2024-12-13 05:51:47.937921] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:48.122 [2024-12-13 05:51:47.937927] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:48.122 [2024-12-13 05:51:47.937933] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:48.122 [2024-12-13 05:51:47.949552] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:48.122 [2024-12-13 05:51:47.949925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.122 [2024-12-13 05:51:47.949941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420
00:35:48.122 [2024-12-13 05:51:47.949952] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set
00:35:48.122 [2024-12-13 05:51:47.950124] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor
00:35:48.122 [2024-12-13 05:51:47.950297] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:48.122 [2024-12-13 05:51:47.950305] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:48.122 [2024-12-13 05:51:47.950311] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:48.122 [2024-12-13 05:51:47.950317] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:48.122 [2024-12-13 05:51:47.962523] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:48.122 [2024-12-13 05:51:47.962886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.122 [2024-12-13 05:51:47.962931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420
00:35:48.122 [2024-12-13 05:51:47.962954] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set
00:35:48.122 [2024-12-13 05:51:47.963549] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor
00:35:48.122 [2024-12-13 05:51:47.964079] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:48.122 [2024-12-13 05:51:47.964088] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:48.122 [2024-12-13 05:51:47.964094] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:48.122 [2024-12-13 05:51:47.964100] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:48.122 [2024-12-13 05:51:47.975489] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:48.122 [2024-12-13 05:51:47.975752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.122 [2024-12-13 05:51:47.975768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420
00:35:48.122 [2024-12-13 05:51:47.975775] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set
00:35:48.122 [2024-12-13 05:51:47.975943] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor
00:35:48.122 [2024-12-13 05:51:47.976111] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:48.122 [2024-12-13 05:51:47.976119] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:48.122 [2024-12-13 05:51:47.976124] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:48.122 [2024-12-13 05:51:47.976130] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:48.122 [2024-12-13 05:51:47.988348] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:48.122 [2024-12-13 05:51:47.988763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.122 [2024-12-13 05:51:47.988808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420
00:35:48.122 [2024-12-13 05:51:47.988831] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set
00:35:48.122 [2024-12-13 05:51:47.989342] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor
00:35:48.122 [2024-12-13 05:51:47.989520] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:48.122 [2024-12-13 05:51:47.989528] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:48.122 [2024-12-13 05:51:47.989534] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:48.122 [2024-12-13 05:51:47.989540] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:48.122 [2024-12-13 05:51:48.001131] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:48.122 [2024-12-13 05:51:48.001480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.122 [2024-12-13 05:51:48.001497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420
00:35:48.122 [2024-12-13 05:51:48.001504] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set
00:35:48.122 [2024-12-13 05:51:48.001671] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor
00:35:48.122 [2024-12-13 05:51:48.001840] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:48.122 [2024-12-13 05:51:48.001848] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:48.122 [2024-12-13 05:51:48.001853] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:48.122 [2024-12-13 05:51:48.001860] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:48.122 [2024-12-13 05:51:48.013917] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:48.122 [2024-12-13 05:51:48.014197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.122 [2024-12-13 05:51:48.014212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420
00:35:48.122 [2024-12-13 05:51:48.014219] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set
00:35:48.123 [2024-12-13 05:51:48.014387] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor
00:35:48.123 [2024-12-13 05:51:48.014569] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:48.123 [2024-12-13 05:51:48.014577] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:48.123 [2024-12-13 05:51:48.014584] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:48.123 [2024-12-13 05:51:48.014590] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:48.123 [2024-12-13 05:51:48.026784] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:48.123 [2024-12-13 05:51:48.027108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.123 [2024-12-13 05:51:48.027123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420
00:35:48.123 [2024-12-13 05:51:48.027130] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set
00:35:48.123 [2024-12-13 05:51:48.027288] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor
00:35:48.123 [2024-12-13 05:51:48.027453] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:48.123 [2024-12-13 05:51:48.027461] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:48.123 [2024-12-13 05:51:48.027469] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:48.123 [2024-12-13 05:51:48.027476] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:48.123 [2024-12-13 05:51:48.039589] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:48.123 [2024-12-13 05:51:48.039998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.123 [2024-12-13 05:51:48.040042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420
00:35:48.123 [2024-12-13 05:51:48.040064] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set
00:35:48.123 [2024-12-13 05:51:48.040662] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor
00:35:48.123 [2024-12-13 05:51:48.041244] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:48.123 [2024-12-13 05:51:48.041251] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:48.123 [2024-12-13 05:51:48.041257] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:48.123 [2024-12-13 05:51:48.041263] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:48.123 [2024-12-13 05:51:48.052357] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:48.123 [2024-12-13 05:51:48.052808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.123 [2024-12-13 05:51:48.052824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420
00:35:48.123 [2024-12-13 05:51:48.052831] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set
00:35:48.123 [2024-12-13 05:51:48.052999] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor
00:35:48.123 [2024-12-13 05:51:48.053169] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:48.123 [2024-12-13 05:51:48.053178] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:48.123 [2024-12-13 05:51:48.053184] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:48.123 [2024-12-13 05:51:48.053190] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:48.123 [2024-12-13 05:51:48.065230] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:48.123 [2024-12-13 05:51:48.065618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.123 [2024-12-13 05:51:48.065634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420
00:35:48.123 [2024-12-13 05:51:48.065641] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set
00:35:48.123 [2024-12-13 05:51:48.065800] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor
00:35:48.123 [2024-12-13 05:51:48.065958] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:48.123 [2024-12-13 05:51:48.065966] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:48.123 [2024-12-13 05:51:48.065972] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:48.123 [2024-12-13 05:51:48.065977] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:48.123 [2024-12-13 05:51:48.078010] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:48.123 [2024-12-13 05:51:48.078434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.123 [2024-12-13 05:51:48.078489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420
00:35:48.123 [2024-12-13 05:51:48.078512] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set
00:35:48.123 [2024-12-13 05:51:48.079095] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor
00:35:48.123 [2024-12-13 05:51:48.079483] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:48.123 [2024-12-13 05:51:48.079491] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:48.123 [2024-12-13 05:51:48.079498] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:48.123 [2024-12-13 05:51:48.079504] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:48.123 [2024-12-13 05:51:48.090802] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:48.123 [2024-12-13 05:51:48.091246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.123 [2024-12-13 05:51:48.091262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420
00:35:48.123 [2024-12-13 05:51:48.091270] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set
00:35:48.123 [2024-12-13 05:51:48.091437] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor
00:35:48.123 [2024-12-13 05:51:48.091610] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:48.123 [2024-12-13 05:51:48.091619] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:48.123 [2024-12-13 05:51:48.091625] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:48.123 [2024-12-13 05:51:48.091631] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:48.123 [2024-12-13 05:51:48.103796] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:48.123 [2024-12-13 05:51:48.104136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.123 [2024-12-13 05:51:48.104181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420
00:35:48.123 [2024-12-13 05:51:48.104203] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set
00:35:48.123 [2024-12-13 05:51:48.104802] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor
00:35:48.123 [2024-12-13 05:51:48.105364] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:48.123 [2024-12-13 05:51:48.105372] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:48.123 [2024-12-13 05:51:48.105378] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:48.123 [2024-12-13 05:51:48.105384] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:48.123 [2024-12-13 05:51:48.116843] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:48.123 [2024-12-13 05:51:48.117264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.123 [2024-12-13 05:51:48.117281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420
00:35:48.123 [2024-12-13 05:51:48.117291] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set
00:35:48.123 [2024-12-13 05:51:48.117468] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor
00:35:48.123 [2024-12-13 05:51:48.117642] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:48.123 [2024-12-13 05:51:48.117650] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:48.123 [2024-12-13 05:51:48.117656] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:48.123 [2024-12-13 05:51:48.117662] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:48.404 [2024-12-13 05:51:48.129874] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:48.404 [2024-12-13 05:51:48.130276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.404 [2024-12-13 05:51:48.130292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:48.404 [2024-12-13 05:51:48.130299] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:48.404 [2024-12-13 05:51:48.130477] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:48.404 [2024-12-13 05:51:48.130649] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:48.404 [2024-12-13 05:51:48.130657] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:48.404 [2024-12-13 05:51:48.130663] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:48.404 [2024-12-13 05:51:48.130669] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:48.404 [2024-12-13 05:51:48.142903] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:48.404 [2024-12-13 05:51:48.143344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.404 [2024-12-13 05:51:48.143383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:48.405 [2024-12-13 05:51:48.143407] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:48.405 [2024-12-13 05:51:48.144007] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:48.405 [2024-12-13 05:51:48.144593] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:48.405 [2024-12-13 05:51:48.144601] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:48.405 [2024-12-13 05:51:48.144607] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:48.405 [2024-12-13 05:51:48.144614] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:48.405 [2024-12-13 05:51:48.156001] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:48.405 [2024-12-13 05:51:48.156431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.405 [2024-12-13 05:51:48.156452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:48.405 [2024-12-13 05:51:48.156460] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:48.405 [2024-12-13 05:51:48.156633] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:48.405 [2024-12-13 05:51:48.156808] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:48.405 [2024-12-13 05:51:48.156816] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:48.405 [2024-12-13 05:51:48.156822] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:48.405 [2024-12-13 05:51:48.156828] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:48.405 [2024-12-13 05:51:48.169026] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:48.405 [2024-12-13 05:51:48.169459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.405 [2024-12-13 05:51:48.169475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:48.405 [2024-12-13 05:51:48.169482] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:48.405 [2024-12-13 05:51:48.169655] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:48.405 [2024-12-13 05:51:48.169827] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:48.405 [2024-12-13 05:51:48.169835] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:48.405 [2024-12-13 05:51:48.169841] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:48.405 [2024-12-13 05:51:48.169848] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:48.405 [2024-12-13 05:51:48.182030] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:48.405 [2024-12-13 05:51:48.182460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.405 [2024-12-13 05:51:48.182476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:48.405 [2024-12-13 05:51:48.182483] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:48.405 [2024-12-13 05:51:48.182650] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:48.405 [2024-12-13 05:51:48.182818] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:48.405 [2024-12-13 05:51:48.182826] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:48.405 [2024-12-13 05:51:48.182832] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:48.405 [2024-12-13 05:51:48.182838] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:48.405 [2024-12-13 05:51:48.195017] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:48.405 [2024-12-13 05:51:48.195421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.405 [2024-12-13 05:51:48.195437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:48.405 [2024-12-13 05:51:48.195444] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:48.405 [2024-12-13 05:51:48.195616] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:48.405 [2024-12-13 05:51:48.195785] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:48.405 [2024-12-13 05:51:48.195793] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:48.405 [2024-12-13 05:51:48.195802] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:48.405 [2024-12-13 05:51:48.195809] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:48.405 [2024-12-13 05:51:48.207830] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:48.405 [2024-12-13 05:51:48.208285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.405 [2024-12-13 05:51:48.208330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:48.405 [2024-12-13 05:51:48.208354] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:48.405 [2024-12-13 05:51:48.208784] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:48.405 [2024-12-13 05:51:48.208958] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:48.405 [2024-12-13 05:51:48.208967] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:48.405 [2024-12-13 05:51:48.208973] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:48.405 [2024-12-13 05:51:48.208979] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:48.405 [2024-12-13 05:51:48.220720] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:48.405 [2024-12-13 05:51:48.221146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.405 [2024-12-13 05:51:48.221162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:48.405 [2024-12-13 05:51:48.221169] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:48.405 [2024-12-13 05:51:48.221336] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:48.405 [2024-12-13 05:51:48.221525] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:48.405 [2024-12-13 05:51:48.221534] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:48.405 [2024-12-13 05:51:48.221540] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:48.405 [2024-12-13 05:51:48.221547] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:48.405 [2024-12-13 05:51:48.233723] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:48.405 [2024-12-13 05:51:48.234029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.405 [2024-12-13 05:51:48.234045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:48.405 [2024-12-13 05:51:48.234052] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:48.405 [2024-12-13 05:51:48.234219] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:48.405 [2024-12-13 05:51:48.234386] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:48.405 [2024-12-13 05:51:48.234394] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:48.405 [2024-12-13 05:51:48.234400] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:48.405 [2024-12-13 05:51:48.234406] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:48.405 [2024-12-13 05:51:48.246461] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:48.405 [2024-12-13 05:51:48.246875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.405 [2024-12-13 05:51:48.246890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:48.405 [2024-12-13 05:51:48.246896] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:48.405 [2024-12-13 05:51:48.247055] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:48.405 [2024-12-13 05:51:48.247214] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:48.405 [2024-12-13 05:51:48.247221] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:48.405 [2024-12-13 05:51:48.247227] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:48.405 [2024-12-13 05:51:48.247233] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:48.405 [2024-12-13 05:51:48.259276] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:48.405 [2024-12-13 05:51:48.259612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.405 [2024-12-13 05:51:48.259628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:48.405 [2024-12-13 05:51:48.259635] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:48.405 [2024-12-13 05:51:48.259802] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:48.405 [2024-12-13 05:51:48.259970] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:48.405 [2024-12-13 05:51:48.259978] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:48.405 [2024-12-13 05:51:48.259984] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:48.405 [2024-12-13 05:51:48.259990] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:48.405 [2024-12-13 05:51:48.272040] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:48.405 [2024-12-13 05:51:48.272458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.405 [2024-12-13 05:51:48.272473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:48.406 [2024-12-13 05:51:48.272496] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:48.406 [2024-12-13 05:51:48.272664] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:48.406 [2024-12-13 05:51:48.272831] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:48.406 [2024-12-13 05:51:48.272839] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:48.406 [2024-12-13 05:51:48.272845] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:48.406 [2024-12-13 05:51:48.272851] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:48.406 [2024-12-13 05:51:48.284892] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:48.406 [2024-12-13 05:51:48.285306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.406 [2024-12-13 05:51:48.285321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:48.406 [2024-12-13 05:51:48.285331] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:48.406 [2024-12-13 05:51:48.285512] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:48.406 [2024-12-13 05:51:48.285681] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:48.406 [2024-12-13 05:51:48.285689] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:48.406 [2024-12-13 05:51:48.285695] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:48.406 [2024-12-13 05:51:48.285701] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:48.406 [2024-12-13 05:51:48.297759] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:48.406 [2024-12-13 05:51:48.298170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.406 [2024-12-13 05:51:48.298186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:48.406 [2024-12-13 05:51:48.298193] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:48.406 [2024-12-13 05:51:48.298351] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:48.406 [2024-12-13 05:51:48.298532] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:48.406 [2024-12-13 05:51:48.298541] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:48.406 [2024-12-13 05:51:48.298547] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:48.406 [2024-12-13 05:51:48.298553] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:48.406 [2024-12-13 05:51:48.310597] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:48.406 [2024-12-13 05:51:48.310897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.406 [2024-12-13 05:51:48.310916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:48.406 [2024-12-13 05:51:48.310923] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:48.406 [2024-12-13 05:51:48.311081] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:48.406 [2024-12-13 05:51:48.311240] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:48.406 [2024-12-13 05:51:48.311247] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:48.406 [2024-12-13 05:51:48.311253] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:48.406 [2024-12-13 05:51:48.311258] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:48.406 [2024-12-13 05:51:48.323463] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:48.406 [2024-12-13 05:51:48.323902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.406 [2024-12-13 05:51:48.323918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:48.406 [2024-12-13 05:51:48.323925] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:48.406 [2024-12-13 05:51:48.324092] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:48.406 [2024-12-13 05:51:48.324263] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:48.406 [2024-12-13 05:51:48.324271] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:48.406 [2024-12-13 05:51:48.324278] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:48.406 [2024-12-13 05:51:48.324284] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:48.406 [2024-12-13 05:51:48.336289] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:48.406 [2024-12-13 05:51:48.336720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.406 [2024-12-13 05:51:48.336736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:48.406 [2024-12-13 05:51:48.336743] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:48.406 [2024-12-13 05:51:48.336910] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:48.406 [2024-12-13 05:51:48.337077] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:48.406 [2024-12-13 05:51:48.337085] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:48.406 [2024-12-13 05:51:48.337091] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:48.406 [2024-12-13 05:51:48.337097] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:48.406 [2024-12-13 05:51:48.349149] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:48.406 [2024-12-13 05:51:48.349595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.406 [2024-12-13 05:51:48.349611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:48.406 [2024-12-13 05:51:48.349618] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:48.406 [2024-12-13 05:51:48.349794] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:48.406 [2024-12-13 05:51:48.349953] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:48.406 [2024-12-13 05:51:48.349960] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:48.406 [2024-12-13 05:51:48.349966] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:48.406 [2024-12-13 05:51:48.349972] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:48.406 [2024-12-13 05:51:48.361983] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:48.406 [2024-12-13 05:51:48.362406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.406 [2024-12-13 05:51:48.362423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:48.406 [2024-12-13 05:51:48.362430] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:48.406 [2024-12-13 05:51:48.362618] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:48.406 [2024-12-13 05:51:48.362787] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:48.406 [2024-12-13 05:51:48.362795] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:48.406 [2024-12-13 05:51:48.362804] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:48.406 [2024-12-13 05:51:48.362811] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:48.406 [2024-12-13 05:51:48.374862] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:48.406 [2024-12-13 05:51:48.375298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.406 [2024-12-13 05:51:48.375344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:48.406 [2024-12-13 05:51:48.375368] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:48.406 [2024-12-13 05:51:48.375927] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:48.406 [2024-12-13 05:51:48.376096] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:48.406 [2024-12-13 05:51:48.376104] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:48.406 [2024-12-13 05:51:48.376110] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:48.406 [2024-12-13 05:51:48.376116] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:48.406 [2024-12-13 05:51:48.387716] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:48.406 [2024-12-13 05:51:48.388025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.406 [2024-12-13 05:51:48.388040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:48.406 [2024-12-13 05:51:48.388047] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:48.406 [2024-12-13 05:51:48.388205] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:48.406 [2024-12-13 05:51:48.388364] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:48.406 [2024-12-13 05:51:48.388372] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:48.406 [2024-12-13 05:51:48.388378] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:48.406 [2024-12-13 05:51:48.388383] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:48.406 [2024-12-13 05:51:48.400666] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:48.406 [2024-12-13 05:51:48.401092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.406 [2024-12-13 05:51:48.401108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:48.407 [2024-12-13 05:51:48.401115] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:48.407 [2024-12-13 05:51:48.401287] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:48.407 [2024-12-13 05:51:48.401469] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:48.407 [2024-12-13 05:51:48.401478] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:48.407 [2024-12-13 05:51:48.401484] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:48.407 [2024-12-13 05:51:48.401491] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:48.691 [2024-12-13 05:51:48.413701] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:48.691 [2024-12-13 05:51:48.414114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.691 [2024-12-13 05:51:48.414130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:48.691 [2024-12-13 05:51:48.414137] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:48.691 [2024-12-13 05:51:48.414310] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:48.691 [2024-12-13 05:51:48.414496] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:48.691 [2024-12-13 05:51:48.414505] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:48.691 [2024-12-13 05:51:48.414512] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:48.691 [2024-12-13 05:51:48.414518] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:48.691 [2024-12-13 05:51:48.426747] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:48.691 [2024-12-13 05:51:48.427159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.691 [2024-12-13 05:51:48.427175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:48.691 [2024-12-13 05:51:48.427182] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:48.691 [2024-12-13 05:51:48.427355] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:48.691 [2024-12-13 05:51:48.427534] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:48.691 [2024-12-13 05:51:48.427543] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:48.691 [2024-12-13 05:51:48.427549] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:48.691 [2024-12-13 05:51:48.427555] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:48.691 [2024-12-13 05:51:48.439771] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:48.691 [2024-12-13 05:51:48.440173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.691 [2024-12-13 05:51:48.440189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:48.691 [2024-12-13 05:51:48.440196] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:48.691 [2024-12-13 05:51:48.440368] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:48.691 [2024-12-13 05:51:48.440552] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:48.691 [2024-12-13 05:51:48.440562] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:48.691 [2024-12-13 05:51:48.440567] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:48.691 [2024-12-13 05:51:48.440574] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:48.691 [2024-12-13 05:51:48.452692] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:48.691 [2024-12-13 05:51:48.453087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.691 [2024-12-13 05:51:48.453102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:48.691 [2024-12-13 05:51:48.453113] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:48.691 [2024-12-13 05:51:48.453280] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:48.691 [2024-12-13 05:51:48.453456] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:48.691 [2024-12-13 05:51:48.453466] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:48.691 [2024-12-13 05:51:48.453472] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:48.691 [2024-12-13 05:51:48.453478] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:48.691 [2024-12-13 05:51:48.465557] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:48.691 [2024-12-13 05:51:48.465955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.691 [2024-12-13 05:51:48.465971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:48.691 [2024-12-13 05:51:48.465979] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:48.691 [2024-12-13 05:51:48.466147] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:48.691 [2024-12-13 05:51:48.466314] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:48.691 [2024-12-13 05:51:48.466322] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:48.691 [2024-12-13 05:51:48.466328] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:48.691 [2024-12-13 05:51:48.466334] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:48.691 [2024-12-13 05:51:48.478501] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:48.691 [2024-12-13 05:51:48.478853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.691 [2024-12-13 05:51:48.478869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:48.691 [2024-12-13 05:51:48.478876] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:48.691 [2024-12-13 05:51:48.479044] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:48.691 [2024-12-13 05:51:48.479211] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:48.691 [2024-12-13 05:51:48.479219] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:48.691 [2024-12-13 05:51:48.479225] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:48.691 [2024-12-13 05:51:48.479230] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:48.691 [2024-12-13 05:51:48.491392] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:48.691 [2024-12-13 05:51:48.491796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.692 [2024-12-13 05:51:48.491812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:48.692 [2024-12-13 05:51:48.491819] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:48.692 [2024-12-13 05:51:48.491986] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:48.692 [2024-12-13 05:51:48.492157] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:48.692 [2024-12-13 05:51:48.492165] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:48.692 [2024-12-13 05:51:48.492171] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:48.692 [2024-12-13 05:51:48.492177] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:48.692 [2024-12-13 05:51:48.504147] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:48.692 [2024-12-13 05:51:48.504515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.692 [2024-12-13 05:51:48.504531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:48.692 [2024-12-13 05:51:48.504538] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:48.692 [2024-12-13 05:51:48.504697] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:48.692 [2024-12-13 05:51:48.504855] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:48.692 [2024-12-13 05:51:48.504863] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:48.692 [2024-12-13 05:51:48.504869] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:48.692 [2024-12-13 05:51:48.504875] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:48.692 [2024-12-13 05:51:48.516898] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:48.692 [2024-12-13 05:51:48.517333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.692 [2024-12-13 05:51:48.517349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:48.692 [2024-12-13 05:51:48.517356] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:48.692 [2024-12-13 05:51:48.517531] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:48.692 [2024-12-13 05:51:48.517699] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:48.692 [2024-12-13 05:51:48.517707] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:48.692 [2024-12-13 05:51:48.517713] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:48.692 [2024-12-13 05:51:48.517720] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:48.692 [2024-12-13 05:51:48.529743] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:48.692 [2024-12-13 05:51:48.530140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.692 [2024-12-13 05:51:48.530184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:48.692 [2024-12-13 05:51:48.530207] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:48.692 [2024-12-13 05:51:48.530699] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:48.692 [2024-12-13 05:51:48.530868] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:48.692 [2024-12-13 05:51:48.530875] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:48.692 [2024-12-13 05:51:48.530884] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:48.692 [2024-12-13 05:51:48.530891] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:48.692 [2024-12-13 05:51:48.542480] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:48.692 [2024-12-13 05:51:48.542889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.692 [2024-12-13 05:51:48.542905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:48.692 [2024-12-13 05:51:48.542912] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:48.692 [2024-12-13 05:51:48.543079] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:48.692 [2024-12-13 05:51:48.543247] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:48.692 [2024-12-13 05:51:48.543255] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:48.692 [2024-12-13 05:51:48.543260] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:48.692 [2024-12-13 05:51:48.543266] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:48.692 [2024-12-13 05:51:48.555308] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:48.692 [2024-12-13 05:51:48.555717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.692 [2024-12-13 05:51:48.555733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:48.692 [2024-12-13 05:51:48.555741] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:48.692 [2024-12-13 05:51:48.555908] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:48.692 [2024-12-13 05:51:48.556076] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:48.692 [2024-12-13 05:51:48.556084] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:48.692 [2024-12-13 05:51:48.556090] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:48.692 [2024-12-13 05:51:48.556096] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:48.692 [2024-12-13 05:51:48.568156] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:48.692 [2024-12-13 05:51:48.568476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.692 [2024-12-13 05:51:48.568491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:48.692 [2024-12-13 05:51:48.568498] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:48.692 [2024-12-13 05:51:48.568656] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:48.692 [2024-12-13 05:51:48.568813] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:48.692 [2024-12-13 05:51:48.568821] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:48.692 [2024-12-13 05:51:48.568826] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:48.692 [2024-12-13 05:51:48.568832] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:48.692 [2024-12-13 05:51:48.581005] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:48.692 [2024-12-13 05:51:48.581424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.692 [2024-12-13 05:51:48.581439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:48.692 [2024-12-13 05:51:48.581446] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:48.692 [2024-12-13 05:51:48.581622] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:48.692 [2024-12-13 05:51:48.581789] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:48.692 [2024-12-13 05:51:48.581797] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:48.692 [2024-12-13 05:51:48.581803] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:48.692 [2024-12-13 05:51:48.581809] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:48.692 [2024-12-13 05:51:48.593847] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:48.692 [2024-12-13 05:51:48.594273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.692 [2024-12-13 05:51:48.594316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:48.692 [2024-12-13 05:51:48.594338] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:48.692 [2024-12-13 05:51:48.594745] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:48.692 [2024-12-13 05:51:48.594915] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:48.692 [2024-12-13 05:51:48.594922] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:48.692 [2024-12-13 05:51:48.594929] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:48.692 [2024-12-13 05:51:48.594935] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:48.692 [2024-12-13 05:51:48.606618] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:48.692 [2024-12-13 05:51:48.607032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.692 [2024-12-13 05:51:48.607049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:48.692 [2024-12-13 05:51:48.607056] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:48.692 [2024-12-13 05:51:48.607223] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:48.693 [2024-12-13 05:51:48.607390] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:48.693 [2024-12-13 05:51:48.607398] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:48.693 [2024-12-13 05:51:48.607404] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:48.693 [2024-12-13 05:51:48.607410] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:48.693 [2024-12-13 05:51:48.619540] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:48.693 [2024-12-13 05:51:48.619954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.693 [2024-12-13 05:51:48.619971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:48.693 [2024-12-13 05:51:48.619981] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:48.693 [2024-12-13 05:51:48.620154] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:48.693 [2024-12-13 05:51:48.620330] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:48.693 [2024-12-13 05:51:48.620338] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:48.693 [2024-12-13 05:51:48.620345] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:48.693 [2024-12-13 05:51:48.620351] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:48.693 [2024-12-13 05:51:48.632447] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:48.693 [2024-12-13 05:51:48.632866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.693 [2024-12-13 05:51:48.632909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:48.693 [2024-12-13 05:51:48.632933] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:48.693 [2024-12-13 05:51:48.633478] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:48.693 [2024-12-13 05:51:48.633647] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:48.693 [2024-12-13 05:51:48.633655] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:48.693 [2024-12-13 05:51:48.633661] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:48.693 [2024-12-13 05:51:48.633667] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:48.693 7194.50 IOPS, 28.10 MiB/s [2024-12-13T04:51:48.708Z] [2024-12-13 05:51:48.645248] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:48.693 [2024-12-13 05:51:48.645661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.693 [2024-12-13 05:51:48.645678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420
00:35:48.693 [2024-12-13 05:51:48.645686] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set
00:35:48.693 [2024-12-13 05:51:48.645854] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor
00:35:48.693 [2024-12-13 05:51:48.646022] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:48.693 [2024-12-13 05:51:48.646031] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:48.693 [2024-12-13 05:51:48.646037] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:48.693 [2024-12-13 05:51:48.646044] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:48.693 [2024-12-13 05:51:48.658113] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:48.693 [2024-12-13 05:51:48.658569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.693 [2024-12-13 05:51:48.658586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420
00:35:48.693 [2024-12-13 05:51:48.658593] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set
00:35:48.693 [2024-12-13 05:51:48.658760] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor
00:35:48.693 [2024-12-13 05:51:48.658931] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:48.693 [2024-12-13 05:51:48.658939] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:48.693 [2024-12-13 05:51:48.658945] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:48.693 [2024-12-13 05:51:48.658951] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:48.693 [2024-12-13 05:51:48.670879] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:48.693 [2024-12-13 05:51:48.671283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.693 [2024-12-13 05:51:48.671327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420
00:35:48.693 [2024-12-13 05:51:48.671350] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set
00:35:48.693 [2024-12-13 05:51:48.671816] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor
00:35:48.693 [2024-12-13 05:51:48.671985] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:48.693 [2024-12-13 05:51:48.671993] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:48.693 [2024-12-13 05:51:48.671999] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:48.693 [2024-12-13 05:51:48.672005] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:48.693 [2024-12-13 05:51:48.683706] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:48.693 [2024-12-13 05:51:48.684119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.693 [2024-12-13 05:51:48.684135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420
00:35:48.693 [2024-12-13 05:51:48.684143] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set
00:35:48.693 [2024-12-13 05:51:48.684310] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor
00:35:48.693 [2024-12-13 05:51:48.684484] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:48.693 [2024-12-13 05:51:48.684492] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:48.693 [2024-12-13 05:51:48.684499] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:48.693 [2024-12-13 05:51:48.684505] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:48.693 [2024-12-13 05:51:48.696720] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:48.981 [2024-12-13 05:51:48.697125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.981 [2024-12-13 05:51:48.697141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420
00:35:48.982 [2024-12-13 05:51:48.697148] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set
00:35:48.982 [2024-12-13 05:51:48.697321] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor
00:35:48.982 [2024-12-13 05:51:48.697501] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:48.982 [2024-12-13 05:51:48.697510] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:48.982 [2024-12-13 05:51:48.697519] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:48.982 [2024-12-13 05:51:48.697526] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:48.982 [2024-12-13 05:51:48.709666] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:48.982 [2024-12-13 05:51:48.710023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.982 [2024-12-13 05:51:48.710040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420
00:35:48.982 [2024-12-13 05:51:48.710047] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set
00:35:48.982 [2024-12-13 05:51:48.710220] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor
00:35:48.982 [2024-12-13 05:51:48.710392] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:48.982 [2024-12-13 05:51:48.710400] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:48.982 [2024-12-13 05:51:48.710406] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:48.982 [2024-12-13 05:51:48.710412] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:48.982 [2024-12-13 05:51:48.722783] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:48.982 [2024-12-13 05:51:48.723198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.982 [2024-12-13 05:51:48.723215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:48.982 [2024-12-13 05:51:48.723222] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:48.982 [2024-12-13 05:51:48.723394] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:48.982 [2024-12-13 05:51:48.723576] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:48.982 [2024-12-13 05:51:48.723585] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:48.982 [2024-12-13 05:51:48.723591] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:48.982 [2024-12-13 05:51:48.723597] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:48.982 [2024-12-13 05:51:48.735854] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:48.982 [2024-12-13 05:51:48.736180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.982 [2024-12-13 05:51:48.736196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:48.982 [2024-12-13 05:51:48.736204] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:48.982 [2024-12-13 05:51:48.736376] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:48.982 [2024-12-13 05:51:48.736555] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:48.982 [2024-12-13 05:51:48.736563] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:48.982 [2024-12-13 05:51:48.736569] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:48.982 [2024-12-13 05:51:48.736576] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:48.982 [2024-12-13 05:51:48.748838] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:48.982 [2024-12-13 05:51:48.749302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.982 [2024-12-13 05:51:48.749317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:48.982 [2024-12-13 05:51:48.749325] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:48.982 [2024-12-13 05:51:48.749503] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:48.982 [2024-12-13 05:51:48.749685] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:48.982 [2024-12-13 05:51:48.749693] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:48.982 [2024-12-13 05:51:48.749699] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:48.982 [2024-12-13 05:51:48.749705] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:48.982 [2024-12-13 05:51:48.761606] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:48.982 [2024-12-13 05:51:48.762046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.982 [2024-12-13 05:51:48.762084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:48.982 [2024-12-13 05:51:48.762111] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:48.982 [2024-12-13 05:51:48.762710] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:48.982 [2024-12-13 05:51:48.763237] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:48.982 [2024-12-13 05:51:48.763245] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:48.982 [2024-12-13 05:51:48.763251] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:48.982 [2024-12-13 05:51:48.763258] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:48.982 [2024-12-13 05:51:48.776703] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:48.982 [2024-12-13 05:51:48.777274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.982 [2024-12-13 05:51:48.777318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:48.982 [2024-12-13 05:51:48.777342] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:48.982 [2024-12-13 05:51:48.777939] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:48.982 [2024-12-13 05:51:48.778397] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:48.982 [2024-12-13 05:51:48.778408] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:48.982 [2024-12-13 05:51:48.778418] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:48.982 [2024-12-13 05:51:48.778427] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:48.982 [2024-12-13 05:51:48.789665] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:48.982 [2024-12-13 05:51:48.790043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.982 [2024-12-13 05:51:48.790059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:48.982 [2024-12-13 05:51:48.790070] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:48.982 [2024-12-13 05:51:48.790238] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:48.982 [2024-12-13 05:51:48.790405] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:48.982 [2024-12-13 05:51:48.790412] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:48.982 [2024-12-13 05:51:48.790418] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:48.982 [2024-12-13 05:51:48.790424] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:48.982 [2024-12-13 05:51:48.802503] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:48.982 [2024-12-13 05:51:48.802918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.982 [2024-12-13 05:51:48.802934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:48.982 [2024-12-13 05:51:48.802941] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:48.982 [2024-12-13 05:51:48.803108] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:48.982 [2024-12-13 05:51:48.803275] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:48.982 [2024-12-13 05:51:48.803283] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:48.982 [2024-12-13 05:51:48.803289] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:48.982 [2024-12-13 05:51:48.803295] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:48.982 [2024-12-13 05:51:48.815340] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:48.983 [2024-12-13 05:51:48.815760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.983 [2024-12-13 05:51:48.815776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:48.983 [2024-12-13 05:51:48.815783] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:48.983 [2024-12-13 05:51:48.815951] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:48.983 [2024-12-13 05:51:48.816118] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:48.983 [2024-12-13 05:51:48.816126] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:48.983 [2024-12-13 05:51:48.816132] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:48.983 [2024-12-13 05:51:48.816138] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:48.983 [2024-12-13 05:51:48.828188] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:48.983 [2024-12-13 05:51:48.828601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.983 [2024-12-13 05:51:48.828618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:48.983 [2024-12-13 05:51:48.828625] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:48.983 [2024-12-13 05:51:48.828793] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:48.983 [2024-12-13 05:51:48.828964] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:48.983 [2024-12-13 05:51:48.828972] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:48.983 [2024-12-13 05:51:48.828978] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:48.983 [2024-12-13 05:51:48.828984] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:48.983 [2024-12-13 05:51:48.841018] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:48.983 [2024-12-13 05:51:48.841409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.983 [2024-12-13 05:51:48.841424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:48.983 [2024-12-13 05:51:48.841431] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:48.983 [2024-12-13 05:51:48.841618] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:48.983 [2024-12-13 05:51:48.841787] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:48.983 [2024-12-13 05:51:48.841794] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:48.983 [2024-12-13 05:51:48.841800] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:48.983 [2024-12-13 05:51:48.841806] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:48.983 [2024-12-13 05:51:48.853840] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:48.983 [2024-12-13 05:51:48.854222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.983 [2024-12-13 05:51:48.854237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:48.983 [2024-12-13 05:51:48.854244] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:48.983 [2024-12-13 05:51:48.854402] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:48.983 [2024-12-13 05:51:48.854589] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:48.983 [2024-12-13 05:51:48.854597] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:48.983 [2024-12-13 05:51:48.854603] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:48.983 [2024-12-13 05:51:48.854609] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:48.983 [2024-12-13 05:51:48.866634] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:48.983 [2024-12-13 05:51:48.867038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.983 [2024-12-13 05:51:48.867082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:48.983 [2024-12-13 05:51:48.867105] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:48.983 [2024-12-13 05:51:48.867702] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:48.983 [2024-12-13 05:51:48.868291] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:48.983 [2024-12-13 05:51:48.868317] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:48.983 [2024-12-13 05:51:48.868326] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:48.983 [2024-12-13 05:51:48.868333] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:48.983 [2024-12-13 05:51:48.879405] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:48.983 [2024-12-13 05:51:48.879804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.983 [2024-12-13 05:51:48.879820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:48.983 [2024-12-13 05:51:48.879827] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:48.983 [2024-12-13 05:51:48.879985] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:48.983 [2024-12-13 05:51:48.880144] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:48.983 [2024-12-13 05:51:48.880151] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:48.983 [2024-12-13 05:51:48.880157] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:48.983 [2024-12-13 05:51:48.880163] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:48.983 [2024-12-13 05:51:48.892197] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:48.983 [2024-12-13 05:51:48.892533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.983 [2024-12-13 05:51:48.892583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:48.983 [2024-12-13 05:51:48.892607] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:48.983 [2024-12-13 05:51:48.893147] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:48.983 [2024-12-13 05:51:48.893315] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:48.983 [2024-12-13 05:51:48.893323] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:48.983 [2024-12-13 05:51:48.893329] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:48.983 [2024-12-13 05:51:48.893335] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:48.983 [2024-12-13 05:51:48.904946] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:48.983 [2024-12-13 05:51:48.905353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.983 [2024-12-13 05:51:48.905398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:48.983 [2024-12-13 05:51:48.905421] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:48.983 [2024-12-13 05:51:48.906015] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:48.983 [2024-12-13 05:51:48.906479] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:48.983 [2024-12-13 05:51:48.906488] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:48.983 [2024-12-13 05:51:48.906494] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:48.983 [2024-12-13 05:51:48.906500] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:48.983 [2024-12-13 05:51:48.917881] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:48.983 [2024-12-13 05:51:48.918222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.983 [2024-12-13 05:51:48.918238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:48.983 [2024-12-13 05:51:48.918245] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:48.983 [2024-12-13 05:51:48.918412] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:48.983 [2024-12-13 05:51:48.918586] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:48.983 [2024-12-13 05:51:48.918594] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:48.983 [2024-12-13 05:51:48.918600] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:48.983 [2024-12-13 05:51:48.918606] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:48.983 [2024-12-13 05:51:48.930672] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:48.983 [2024-12-13 05:51:48.931004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.983 [2024-12-13 05:51:48.931020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:48.983 [2024-12-13 05:51:48.931027] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:48.983 [2024-12-13 05:51:48.931195] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:48.983 [2024-12-13 05:51:48.931363] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:48.983 [2024-12-13 05:51:48.931371] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:48.983 [2024-12-13 05:51:48.931377] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:48.984 [2024-12-13 05:51:48.931383] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:48.984 [2024-12-13 05:51:48.943444] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:48.984 [2024-12-13 05:51:48.943859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.984 [2024-12-13 05:51:48.943874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:48.984 [2024-12-13 05:51:48.943881] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:48.984 [2024-12-13 05:51:48.944049] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:48.984 [2024-12-13 05:51:48.944216] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:48.984 [2024-12-13 05:51:48.944224] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:48.984 [2024-12-13 05:51:48.944230] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:48.984 [2024-12-13 05:51:48.944236] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:48.984 [2024-12-13 05:51:48.956266] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:48.984 [2024-12-13 05:51:48.956676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.984 [2024-12-13 05:51:48.956692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:48.984 [2024-12-13 05:51:48.956702] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:48.984 [2024-12-13 05:51:48.956869] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:48.984 [2024-12-13 05:51:48.957037] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:48.984 [2024-12-13 05:51:48.957045] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:48.984 [2024-12-13 05:51:48.957051] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:48.984 [2024-12-13 05:51:48.957057] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:48.984 [2024-12-13 05:51:48.969096] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:48.984 [2024-12-13 05:51:48.969491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.984 [2024-12-13 05:51:48.969508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:48.984 [2024-12-13 05:51:48.969516] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:48.984 [2024-12-13 05:51:48.969683] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:48.984 [2024-12-13 05:51:48.969851] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:48.984 [2024-12-13 05:51:48.969859] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:48.984 [2024-12-13 05:51:48.969865] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:48.984 [2024-12-13 05:51:48.969871] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:48.984 [2024-12-13 05:51:48.982174] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:48.984 [2024-12-13 05:51:48.982586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.984 [2024-12-13 05:51:48.982603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:48.984 [2024-12-13 05:51:48.982610] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:48.984 [2024-12-13 05:51:48.982783] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:48.984 [2024-12-13 05:51:48.982956] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:48.984 [2024-12-13 05:51:48.982964] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:48.984 [2024-12-13 05:51:48.982970] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:48.984 [2024-12-13 05:51:48.982977] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:49.278 [2024-12-13 05:51:48.995179] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.278 [2024-12-13 05:51:48.995604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.278 [2024-12-13 05:51:48.995621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:49.278 [2024-12-13 05:51:48.995628] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:49.278 [2024-12-13 05:51:48.995801] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:49.278 [2024-12-13 05:51:48.995977] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:49.278 [2024-12-13 05:51:48.995985] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:49.278 [2024-12-13 05:51:48.995991] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:49.278 [2024-12-13 05:51:48.995997] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:49.279 [2024-12-13 05:51:49.008206] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.279 [2024-12-13 05:51:49.008634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.279 [2024-12-13 05:51:49.008651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:49.279 [2024-12-13 05:51:49.008659] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:49.279 [2024-12-13 05:51:49.008831] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:49.279 [2024-12-13 05:51:49.009003] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:49.279 [2024-12-13 05:51:49.009011] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:49.279 [2024-12-13 05:51:49.009018] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:49.279 [2024-12-13 05:51:49.009024] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:49.279 [2024-12-13 05:51:49.021229] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.279 [2024-12-13 05:51:49.021647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.279 [2024-12-13 05:51:49.021664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:49.279 [2024-12-13 05:51:49.021671] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:49.279 [2024-12-13 05:51:49.021843] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:49.279 [2024-12-13 05:51:49.022017] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:49.279 [2024-12-13 05:51:49.022025] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:49.279 [2024-12-13 05:51:49.022031] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:49.279 [2024-12-13 05:51:49.022037] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:49.279 [2024-12-13 05:51:49.034249] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.279 [2024-12-13 05:51:49.034598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.279 [2024-12-13 05:51:49.034615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:49.279 [2024-12-13 05:51:49.034623] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:49.279 [2024-12-13 05:51:49.034795] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:49.279 [2024-12-13 05:51:49.034967] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:49.279 [2024-12-13 05:51:49.034975] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:49.279 [2024-12-13 05:51:49.034986] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:49.279 [2024-12-13 05:51:49.034993] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:49.279 [2024-12-13 05:51:49.047359] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.279 [2024-12-13 05:51:49.047772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.279 [2024-12-13 05:51:49.047789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:49.279 [2024-12-13 05:51:49.047796] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:49.279 [2024-12-13 05:51:49.047963] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:49.279 [2024-12-13 05:51:49.048130] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:49.279 [2024-12-13 05:51:49.048138] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:49.279 [2024-12-13 05:51:49.048144] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:49.279 [2024-12-13 05:51:49.048150] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:49.279 [2024-12-13 05:51:49.060195] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.279 [2024-12-13 05:51:49.060579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.279 [2024-12-13 05:51:49.060596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:49.279 [2024-12-13 05:51:49.060603] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:49.279 [2024-12-13 05:51:49.060772] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:49.279 [2024-12-13 05:51:49.060940] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:49.279 [2024-12-13 05:51:49.060948] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:49.279 [2024-12-13 05:51:49.060954] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:49.279 [2024-12-13 05:51:49.060960] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:49.279 [2024-12-13 05:51:49.073068] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.279 [2024-12-13 05:51:49.073428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.279 [2024-12-13 05:51:49.073444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:49.279 [2024-12-13 05:51:49.073456] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:49.279 [2024-12-13 05:51:49.073624] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:49.279 [2024-12-13 05:51:49.073792] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:49.279 [2024-12-13 05:51:49.073800] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:49.279 [2024-12-13 05:51:49.073805] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:49.279 [2024-12-13 05:51:49.073812] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:49.279 [2024-12-13 05:51:49.085882] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.279 [2024-12-13 05:51:49.086304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.279 [2024-12-13 05:51:49.086348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:49.279 [2024-12-13 05:51:49.086371] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:49.279 [2024-12-13 05:51:49.086835] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:49.279 [2024-12-13 05:51:49.087003] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:49.279 [2024-12-13 05:51:49.087011] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:49.279 [2024-12-13 05:51:49.087017] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:49.279 [2024-12-13 05:51:49.087023] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:49.279 [2024-12-13 05:51:49.098724] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.279 [2024-12-13 05:51:49.099169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.279 [2024-12-13 05:51:49.099185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:49.279 [2024-12-13 05:51:49.099192] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:49.279 [2024-12-13 05:51:49.099359] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:49.279 [2024-12-13 05:51:49.099533] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:49.279 [2024-12-13 05:51:49.099542] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:49.279 [2024-12-13 05:51:49.099548] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:49.279 [2024-12-13 05:51:49.099554] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:49.279 [2024-12-13 05:51:49.111472] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.279 [2024-12-13 05:51:49.111906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.279 [2024-12-13 05:51:49.111923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:49.279 [2024-12-13 05:51:49.111930] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:49.279 [2024-12-13 05:51:49.112097] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:49.279 [2024-12-13 05:51:49.112264] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:49.279 [2024-12-13 05:51:49.112272] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:49.279 [2024-12-13 05:51:49.112278] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:49.279 [2024-12-13 05:51:49.112284] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:49.279 [2024-12-13 05:51:49.124264] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.279 [2024-12-13 05:51:49.124646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.279 [2024-12-13 05:51:49.124662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:49.279 [2024-12-13 05:51:49.124672] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:49.279 [2024-12-13 05:51:49.124845] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:49.279 [2024-12-13 05:51:49.125018] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:49.279 [2024-12-13 05:51:49.125026] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:49.279 [2024-12-13 05:51:49.125032] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:49.279 [2024-12-13 05:51:49.125039] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:49.280 [2024-12-13 05:51:49.137215] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.280 [2024-12-13 05:51:49.137677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.280 [2024-12-13 05:51:49.137692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:49.280 [2024-12-13 05:51:49.137700] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:49.280 [2024-12-13 05:51:49.137859] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:49.280 [2024-12-13 05:51:49.138017] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:49.280 [2024-12-13 05:51:49.138025] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:49.280 [2024-12-13 05:51:49.138031] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:49.280 [2024-12-13 05:51:49.138036] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:49.280 [2024-12-13 05:51:49.150085] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.280 [2024-12-13 05:51:49.150522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.280 [2024-12-13 05:51:49.150539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:49.280 [2024-12-13 05:51:49.150546] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:49.280 [2024-12-13 05:51:49.150714] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:49.280 [2024-12-13 05:51:49.150881] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:49.280 [2024-12-13 05:51:49.150889] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:49.280 [2024-12-13 05:51:49.150895] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:49.280 [2024-12-13 05:51:49.150901] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:49.280 [2024-12-13 05:51:49.162818] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.280 [2024-12-13 05:51:49.163216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.280 [2024-12-13 05:51:49.163232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:49.280 [2024-12-13 05:51:49.163239] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:49.280 [2024-12-13 05:51:49.163407] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:49.280 [2024-12-13 05:51:49.163585] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:49.280 [2024-12-13 05:51:49.163593] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:49.280 [2024-12-13 05:51:49.163599] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:49.280 [2024-12-13 05:51:49.163605] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:49.280 [2024-12-13 05:51:49.175687] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.280 [2024-12-13 05:51:49.176048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.280 [2024-12-13 05:51:49.176090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:49.280 [2024-12-13 05:51:49.176113] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:49.280 [2024-12-13 05:51:49.176713] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:49.280 [2024-12-13 05:51:49.177190] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:49.280 [2024-12-13 05:51:49.177199] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:49.280 [2024-12-13 05:51:49.177205] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:49.280 [2024-12-13 05:51:49.177211] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:49.280 [2024-12-13 05:51:49.188472] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.280 [2024-12-13 05:51:49.188841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.280 [2024-12-13 05:51:49.188857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:49.280 [2024-12-13 05:51:49.188864] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:49.280 [2024-12-13 05:51:49.189032] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:49.280 [2024-12-13 05:51:49.189199] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:49.280 [2024-12-13 05:51:49.189206] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:49.280 [2024-12-13 05:51:49.189212] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:49.280 [2024-12-13 05:51:49.189218] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:49.280 [2024-12-13 05:51:49.201520] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.280 [2024-12-13 05:51:49.201853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.280 [2024-12-13 05:51:49.201896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:49.280 [2024-12-13 05:51:49.201918] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:49.280 [2024-12-13 05:51:49.202370] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:49.280 [2024-12-13 05:51:49.202546] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:49.280 [2024-12-13 05:51:49.202554] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:49.280 [2024-12-13 05:51:49.202563] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:49.280 [2024-12-13 05:51:49.202570] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:49.280 [2024-12-13 05:51:49.214342] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.280 [2024-12-13 05:51:49.214761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.280 [2024-12-13 05:51:49.214777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:49.280 [2024-12-13 05:51:49.214784] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:49.280 [2024-12-13 05:51:49.214952] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:49.280 [2024-12-13 05:51:49.215119] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:49.280 [2024-12-13 05:51:49.215127] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:49.280 [2024-12-13 05:51:49.215133] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:49.280 [2024-12-13 05:51:49.215139] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:49.280 [2024-12-13 05:51:49.227343] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.280 [2024-12-13 05:51:49.227773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.280 [2024-12-13 05:51:49.227817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:49.280 [2024-12-13 05:51:49.227840] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:49.280 [2024-12-13 05:51:49.228365] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:49.280 [2024-12-13 05:51:49.228744] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:49.280 [2024-12-13 05:51:49.228761] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:49.280 [2024-12-13 05:51:49.228773] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:49.280 [2024-12-13 05:51:49.228786] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:49.280 [2024-12-13 05:51:49.241876] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.280 [2024-12-13 05:51:49.242360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.280 [2024-12-13 05:51:49.242381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:49.280 [2024-12-13 05:51:49.242391] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:49.280 [2024-12-13 05:51:49.242643] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:49.280 [2024-12-13 05:51:49.242896] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:49.280 [2024-12-13 05:51:49.242909] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:49.280 [2024-12-13 05:51:49.242918] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:49.280 [2024-12-13 05:51:49.242926] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:49.280 [2024-12-13 05:51:49.254876] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.280 [2024-12-13 05:51:49.255230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.280 [2024-12-13 05:51:49.255246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:49.280 [2024-12-13 05:51:49.255253] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:49.280 [2024-12-13 05:51:49.255421] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:49.280 [2024-12-13 05:51:49.255595] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:49.281 [2024-12-13 05:51:49.255603] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:49.281 [2024-12-13 05:51:49.255609] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:49.281 [2024-12-13 05:51:49.255615] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:49.281 [2024-12-13 05:51:49.267863] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.281 [2024-12-13 05:51:49.268317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.281 [2024-12-13 05:51:49.268333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:49.281 [2024-12-13 05:51:49.268340] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:49.281 [2024-12-13 05:51:49.268519] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:49.281 [2024-12-13 05:51:49.268692] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:49.281 [2024-12-13 05:51:49.268700] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:49.281 [2024-12-13 05:51:49.268706] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:49.281 [2024-12-13 05:51:49.268713] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:49.550 [2024-12-13 05:51:49.280930] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.550 [2024-12-13 05:51:49.281355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.550 [2024-12-13 05:51:49.281371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:49.550 [2024-12-13 05:51:49.281379] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:49.550 [2024-12-13 05:51:49.281558] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:49.550 [2024-12-13 05:51:49.281731] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:49.550 [2024-12-13 05:51:49.281740] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:49.550 [2024-12-13 05:51:49.281746] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:49.550 [2024-12-13 05:51:49.281752] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:49.550 [2024-12-13 05:51:49.293887] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.550 [2024-12-13 05:51:49.294306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.550 [2024-12-13 05:51:49.294322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:49.550 [2024-12-13 05:51:49.294332] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:49.550 [2024-12-13 05:51:49.294511] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:49.550 [2024-12-13 05:51:49.294693] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:49.550 [2024-12-13 05:51:49.294701] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:49.550 [2024-12-13 05:51:49.294707] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:49.550 [2024-12-13 05:51:49.294713] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:49.550 [2024-12-13 05:51:49.306867] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.550 [2024-12-13 05:51:49.307272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.550 [2024-12-13 05:51:49.307288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:49.550 [2024-12-13 05:51:49.307295] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:49.550 [2024-12-13 05:51:49.307475] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:49.550 [2024-12-13 05:51:49.307648] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:49.550 [2024-12-13 05:51:49.307656] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:49.550 [2024-12-13 05:51:49.307662] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:49.550 [2024-12-13 05:51:49.307668] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:49.550 [2024-12-13 05:51:49.319901] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.550 [2024-12-13 05:51:49.320324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.550 [2024-12-13 05:51:49.320341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:49.550 [2024-12-13 05:51:49.320348] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:49.550 [2024-12-13 05:51:49.320529] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:49.550 [2024-12-13 05:51:49.320702] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:49.550 [2024-12-13 05:51:49.320710] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:49.550 [2024-12-13 05:51:49.320716] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:49.550 [2024-12-13 05:51:49.320723] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:49.550 [2024-12-13 05:51:49.332793] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.550 [2024-12-13 05:51:49.333161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.550 [2024-12-13 05:51:49.333205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:49.550 [2024-12-13 05:51:49.333227] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:49.550 [2024-12-13 05:51:49.333691] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:49.550 [2024-12-13 05:51:49.333864] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:49.550 [2024-12-13 05:51:49.333872] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:49.550 [2024-12-13 05:51:49.333878] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:49.550 [2024-12-13 05:51:49.333884] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:49.550 [2024-12-13 05:51:49.345663] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.550 [2024-12-13 05:51:49.346022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.550 [2024-12-13 05:51:49.346037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:49.550 [2024-12-13 05:51:49.346044] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:49.550 [2024-12-13 05:51:49.346212] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:49.550 [2024-12-13 05:51:49.346379] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:49.551 [2024-12-13 05:51:49.346387] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:49.551 [2024-12-13 05:51:49.346393] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:49.551 [2024-12-13 05:51:49.346399] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:49.551 [2024-12-13 05:51:49.358744] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.551 [2024-12-13 05:51:49.359031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.551 [2024-12-13 05:51:49.359047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:49.551 [2024-12-13 05:51:49.359054] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:49.551 [2024-12-13 05:51:49.359222] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:49.551 [2024-12-13 05:51:49.359390] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:49.551 [2024-12-13 05:51:49.359398] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:49.551 [2024-12-13 05:51:49.359405] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:49.551 [2024-12-13 05:51:49.359411] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:49.551 [2024-12-13 05:51:49.371484] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.551 [2024-12-13 05:51:49.371825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.551 [2024-12-13 05:51:49.371842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:49.551 [2024-12-13 05:51:49.371849] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:49.551 [2024-12-13 05:51:49.372016] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:49.551 [2024-12-13 05:51:49.372184] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:49.551 [2024-12-13 05:51:49.372192] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:49.551 [2024-12-13 05:51:49.372201] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:49.551 [2024-12-13 05:51:49.372207] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:49.551 [2024-12-13 05:51:49.384347] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.551 [2024-12-13 05:51:49.384712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.551 [2024-12-13 05:51:49.384729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:49.551 [2024-12-13 05:51:49.384736] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:49.551 [2024-12-13 05:51:49.384905] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:49.551 [2024-12-13 05:51:49.385073] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:49.551 [2024-12-13 05:51:49.385081] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:49.551 [2024-12-13 05:51:49.385086] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:49.551 [2024-12-13 05:51:49.385093] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:49.551 [2024-12-13 05:51:49.397108] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.551 [2024-12-13 05:51:49.397587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.551 [2024-12-13 05:51:49.397634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:49.551 [2024-12-13 05:51:49.397657] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:49.551 [2024-12-13 05:51:49.398240] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:49.551 [2024-12-13 05:51:49.398521] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:49.551 [2024-12-13 05:51:49.398529] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:49.551 [2024-12-13 05:51:49.398535] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:49.551 [2024-12-13 05:51:49.398541] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:49.551 [2024-12-13 05:51:49.409869] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.551 [2024-12-13 05:51:49.410299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.551 [2024-12-13 05:51:49.410342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:49.551 [2024-12-13 05:51:49.410365] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:49.551 [2024-12-13 05:51:49.410963] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:49.551 [2024-12-13 05:51:49.411570] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:49.551 [2024-12-13 05:51:49.411579] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:49.551 [2024-12-13 05:51:49.411585] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:49.551 [2024-12-13 05:51:49.411591] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:49.551 [2024-12-13 05:51:49.422723] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.551 [2024-12-13 05:51:49.423141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.551 [2024-12-13 05:51:49.423157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:49.551 [2024-12-13 05:51:49.423165] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:49.551 [2024-12-13 05:51:49.423333] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:49.551 [2024-12-13 05:51:49.423506] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:49.551 [2024-12-13 05:51:49.423515] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:49.551 [2024-12-13 05:51:49.423523] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:49.551 [2024-12-13 05:51:49.423530] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:49.551 [2024-12-13 05:51:49.435570] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.551 [2024-12-13 05:51:49.435932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.551 [2024-12-13 05:51:49.435947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:49.551 [2024-12-13 05:51:49.435954] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:49.551 [2024-12-13 05:51:49.436121] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:49.551 [2024-12-13 05:51:49.436289] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:49.551 [2024-12-13 05:51:49.436296] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:49.551 [2024-12-13 05:51:49.436302] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:49.551 [2024-12-13 05:51:49.436309] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:49.551 [2024-12-13 05:51:49.448407] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.551 [2024-12-13 05:51:49.448780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.551 [2024-12-13 05:51:49.448796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:49.551 [2024-12-13 05:51:49.448803] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:49.551 [2024-12-13 05:51:49.448961] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:49.551 [2024-12-13 05:51:49.449119] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:49.551 [2024-12-13 05:51:49.449127] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:49.551 [2024-12-13 05:51:49.449134] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:49.551 [2024-12-13 05:51:49.449139] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:49.551 [2024-12-13 05:51:49.461152] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.551 [2024-12-13 05:51:49.461529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.551 [2024-12-13 05:51:49.461575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:49.551 [2024-12-13 05:51:49.461607] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:49.551 [2024-12-13 05:51:49.462191] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:49.551 [2024-12-13 05:51:49.462561] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:49.551 [2024-12-13 05:51:49.462569] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:49.551 [2024-12-13 05:51:49.462575] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:49.551 [2024-12-13 05:51:49.462581] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:49.551 [2024-12-13 05:51:49.474027] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.551 [2024-12-13 05:51:49.474470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.551 [2024-12-13 05:51:49.474515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:49.551 [2024-12-13 05:51:49.474538] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:49.551 [2024-12-13 05:51:49.475035] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:49.551 [2024-12-13 05:51:49.475203] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:49.551 [2024-12-13 05:51:49.475211] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:49.551 [2024-12-13 05:51:49.475216] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:49.551 [2024-12-13 05:51:49.475222] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:49.551 [2024-12-13 05:51:49.486822] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.551 [2024-12-13 05:51:49.487272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.551 [2024-12-13 05:51:49.487317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:49.551 [2024-12-13 05:51:49.487339] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:49.551 [2024-12-13 05:51:49.487785] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:49.551 [2024-12-13 05:51:49.487954] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:49.551 [2024-12-13 05:51:49.487962] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:49.551 [2024-12-13 05:51:49.487968] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:49.551 [2024-12-13 05:51:49.487974] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
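[editor's note] Judging by the timestamps, a fresh "resetting controller" attempt starts roughly every 12-13 ms and fails the same way each time. The sketch below reduces that cycle to its shape: try to reconnect, log the failure, wait, try again. The spacing and the attempt cap are read off / assumed from the log, and reconnect_once() is a hypothetical stand-in (it could wrap try_connect() from the earlier sketch), not SPDK's spdk_nvme_ctrlr_reconnect_poll_async() path.

    #include <stdbool.h>
    #include <stdio.h>
    #include <time.h>

    static bool reconnect_once(void)
    {
        return false; /* stand-in: the target keeps refusing connections */
    }

    int main(void)
    {
        const struct timespec delay = { .tv_sec = 0, .tv_nsec = 12 * 1000 * 1000 };
        for (int attempt = 1; attempt <= 5; attempt++) {
            printf("resetting controller (attempt %d)\n", attempt);
            if (reconnect_once()) {
                printf("controller reconnected\n");
                return 0;
            }
            printf("Resetting controller failed.\n");
            nanosleep(&delay, NULL); /* matches the short gap between attempts */
        }
        return 1;
    }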
00:35:49.551 [2024-12-13 05:51:49.499653] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.551 [2024-12-13 05:51:49.499983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.551 [2024-12-13 05:51:49.499999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:49.551 [2024-12-13 05:51:49.500006] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:49.551 [2024-12-13 05:51:49.500174] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:49.551 [2024-12-13 05:51:49.500344] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:49.551 [2024-12-13 05:51:49.500352] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:49.551 [2024-12-13 05:51:49.500358] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:49.551 [2024-12-13 05:51:49.500364] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:49.551 [2024-12-13 05:51:49.512650] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.551 [2024-12-13 05:51:49.512978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.551 [2024-12-13 05:51:49.512994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:49.551 [2024-12-13 05:51:49.513001] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:49.552 [2024-12-13 05:51:49.513169] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:49.552 [2024-12-13 05:51:49.513336] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:49.552 [2024-12-13 05:51:49.513344] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:49.552 [2024-12-13 05:51:49.513350] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:49.552 [2024-12-13 05:51:49.513356] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:49.552 [2024-12-13 05:51:49.525572] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.552 [2024-12-13 05:51:49.526006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.552 [2024-12-13 05:51:49.526050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:49.552 [2024-12-13 05:51:49.526073] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:49.552 [2024-12-13 05:51:49.526670] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:49.552 [2024-12-13 05:51:49.526884] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:49.552 [2024-12-13 05:51:49.526891] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:49.552 [2024-12-13 05:51:49.526897] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:49.552 [2024-12-13 05:51:49.526903] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:49.552 [2024-12-13 05:51:49.538416] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.552 [2024-12-13 05:51:49.538782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.552 [2024-12-13 05:51:49.538798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:49.552 [2024-12-13 05:51:49.538805] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:49.552 [2024-12-13 05:51:49.538973] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:49.552 [2024-12-13 05:51:49.539140] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:49.552 [2024-12-13 05:51:49.539148] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:49.552 [2024-12-13 05:51:49.539157] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:49.552 [2024-12-13 05:51:49.539163] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:49.552 [2024-12-13 05:51:49.551210] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.552 [2024-12-13 05:51:49.551630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.552 [2024-12-13 05:51:49.551645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:49.552 [2024-12-13 05:51:49.551652] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:49.552 [2024-12-13 05:51:49.551810] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:49.552 [2024-12-13 05:51:49.551969] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:49.552 [2024-12-13 05:51:49.551976] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:49.552 [2024-12-13 05:51:49.551982] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:49.552 [2024-12-13 05:51:49.551987] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:49.839 [2024-12-13 05:51:49.564172] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.839 [2024-12-13 05:51:49.564610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.839 [2024-12-13 05:51:49.564628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:49.839 [2024-12-13 05:51:49.564635] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:49.839 [2024-12-13 05:51:49.564807] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:49.839 [2024-12-13 05:51:49.564980] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:49.839 [2024-12-13 05:51:49.564988] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:49.839 [2024-12-13 05:51:49.564994] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:49.839 [2024-12-13 05:51:49.565000] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:49.839 [2024-12-13 05:51:49.577119] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.839 [2024-12-13 05:51:49.577542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.839 [2024-12-13 05:51:49.577560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:49.839 [2024-12-13 05:51:49.577567] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:49.839 [2024-12-13 05:51:49.577739] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:49.839 [2024-12-13 05:51:49.577912] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:49.839 [2024-12-13 05:51:49.577920] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:49.839 [2024-12-13 05:51:49.577926] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:49.839 [2024-12-13 05:51:49.577933] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:49.839 [2024-12-13 05:51:49.590004] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.839 [2024-12-13 05:51:49.590409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.839 [2024-12-13 05:51:49.590425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:49.839 [2024-12-13 05:51:49.590432] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:49.839 [2024-12-13 05:51:49.590605] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:49.839 [2024-12-13 05:51:49.590774] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:49.839 [2024-12-13 05:51:49.590781] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:49.839 [2024-12-13 05:51:49.590787] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:49.839 [2024-12-13 05:51:49.590793] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:49.839 [2024-12-13 05:51:49.602868] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.839 [2024-12-13 05:51:49.603282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.839 [2024-12-13 05:51:49.603298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:49.839 [2024-12-13 05:51:49.603304] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:49.839 [2024-12-13 05:51:49.603468] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:49.839 [2024-12-13 05:51:49.603652] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:49.839 [2024-12-13 05:51:49.603660] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:49.839 [2024-12-13 05:51:49.603666] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:49.839 [2024-12-13 05:51:49.603672] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:49.839 [2024-12-13 05:51:49.615702] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.839 [2024-12-13 05:51:49.616064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.839 [2024-12-13 05:51:49.616080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:49.839 [2024-12-13 05:51:49.616087] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:49.839 [2024-12-13 05:51:49.616255] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:49.839 [2024-12-13 05:51:49.616422] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:49.839 [2024-12-13 05:51:49.616430] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:49.839 [2024-12-13 05:51:49.616436] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:49.839 [2024-12-13 05:51:49.616442] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:49.839 [2024-12-13 05:51:49.628525] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.839 [2024-12-13 05:51:49.628930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.839 [2024-12-13 05:51:49.628945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:49.839 [2024-12-13 05:51:49.628955] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:49.839 [2024-12-13 05:51:49.629113] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:49.839 [2024-12-13 05:51:49.629271] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:49.839 [2024-12-13 05:51:49.629279] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:49.839 [2024-12-13 05:51:49.629284] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:49.839 [2024-12-13 05:51:49.629290] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:49.839 5755.60 IOPS, 22.48 MiB/s [2024-12-13T04:51:49.854Z] [2024-12-13 05:51:49.642497] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.839 [2024-12-13 05:51:49.642911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.839 [2024-12-13 05:51:49.642926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:49.840 [2024-12-13 05:51:49.642933] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:49.840 [2024-12-13 05:51:49.643092] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:49.840 [2024-12-13 05:51:49.643251] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:49.840 [2024-12-13 05:51:49.643258] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:49.840 [2024-12-13 05:51:49.643264] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:49.840 [2024-12-13 05:51:49.643269] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
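[editor's note] The line above also carries an interleaved performance sample, "5755.60 IOPS, 22.48 MiB/s", from the workload that keeps running while the reconnects fail. The two numbers are mutually consistent at a 4 KiB I/O size (22.48 MiB/s divided by 5755.60 IOPS is about 4096 bytes); that I/O size is inferred from the arithmetic, not stated in this part of the log. A tiny check:

    #include <stdio.h>

    int main(void)
    {
        const double iops = 5755.60;    /* from the sample above */
        const double io_bytes = 4096.0; /* assumed 4 KiB per I/O */
        printf("%.2f MiB/s\n", iops * io_bytes / (1024.0 * 1024.0)); /* 22.48 */
        return 0;
    }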
00:35:49.840 [2024-12-13 05:51:49.655460] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.840 [2024-12-13 05:51:49.655890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.840 [2024-12-13 05:51:49.655906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:49.840 [2024-12-13 05:51:49.655913] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:49.840 [2024-12-13 05:51:49.656081] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:49.840 [2024-12-13 05:51:49.656248] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:49.840 [2024-12-13 05:51:49.656256] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:49.840 [2024-12-13 05:51:49.656262] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:49.840 [2024-12-13 05:51:49.656268] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:49.840 [2024-12-13 05:51:49.668201] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.840 [2024-12-13 05:51:49.668643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.840 [2024-12-13 05:51:49.668659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:49.840 [2024-12-13 05:51:49.668665] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:49.840 [2024-12-13 05:51:49.668824] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:49.840 [2024-12-13 05:51:49.668987] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:49.840 [2024-12-13 05:51:49.668995] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:49.840 [2024-12-13 05:51:49.669000] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:49.840 [2024-12-13 05:51:49.669006] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:49.840 [2024-12-13 05:51:49.681068] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.840 [2024-12-13 05:51:49.681456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.840 [2024-12-13 05:51:49.681473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:49.840 [2024-12-13 05:51:49.681479] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:49.840 [2024-12-13 05:51:49.681646] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:49.840 [2024-12-13 05:51:49.681814] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:49.840 [2024-12-13 05:51:49.681822] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:49.840 [2024-12-13 05:51:49.681828] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:49.840 [2024-12-13 05:51:49.681834] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:49.840 [2024-12-13 05:51:49.693885] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.840 [2024-12-13 05:51:49.694227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.840 [2024-12-13 05:51:49.694243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:49.840 [2024-12-13 05:51:49.694251] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:49.840 [2024-12-13 05:51:49.694418] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:49.840 [2024-12-13 05:51:49.694591] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:49.840 [2024-12-13 05:51:49.694599] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:49.840 [2024-12-13 05:51:49.694605] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:49.840 [2024-12-13 05:51:49.694612] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:49.840 [2024-12-13 05:51:49.706667] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.840 [2024-12-13 05:51:49.707095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.840 [2024-12-13 05:51:49.707139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:49.840 [2024-12-13 05:51:49.707162] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:49.840 [2024-12-13 05:51:49.707756] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:49.840 [2024-12-13 05:51:49.708327] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:49.840 [2024-12-13 05:51:49.708335] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:49.840 [2024-12-13 05:51:49.708344] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:49.840 [2024-12-13 05:51:49.708351] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:49.840 [2024-12-13 05:51:49.719501] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.840 [2024-12-13 05:51:49.719897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.840 [2024-12-13 05:51:49.719913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:49.840 [2024-12-13 05:51:49.719921] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:49.840 [2024-12-13 05:51:49.720088] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:49.840 [2024-12-13 05:51:49.720256] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:49.840 [2024-12-13 05:51:49.720264] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:49.840 [2024-12-13 05:51:49.720271] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:49.840 [2024-12-13 05:51:49.720277] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:49.840 [2024-12-13 05:51:49.732261] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.840 [2024-12-13 05:51:49.732657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.840 [2024-12-13 05:51:49.732673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:49.840 [2024-12-13 05:51:49.732681] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:49.840 [2024-12-13 05:51:49.732847] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:49.840 [2024-12-13 05:51:49.733015] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:49.840 [2024-12-13 05:51:49.733023] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:49.840 [2024-12-13 05:51:49.733029] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:49.840 [2024-12-13 05:51:49.733035] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:49.840 [2024-12-13 05:51:49.745080] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.840 [2024-12-13 05:51:49.745472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.840 [2024-12-13 05:51:49.745489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:49.840 [2024-12-13 05:51:49.745495] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:49.840 [2024-12-13 05:51:49.745654] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:49.840 [2024-12-13 05:51:49.745812] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:49.840 [2024-12-13 05:51:49.745820] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:49.840 [2024-12-13 05:51:49.745825] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:49.840 [2024-12-13 05:51:49.745831] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:49.840 [2024-12-13 05:51:49.757901] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.840 [2024-12-13 05:51:49.758278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.840 [2024-12-13 05:51:49.758294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:49.840 [2024-12-13 05:51:49.758301] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:49.840 [2024-12-13 05:51:49.758474] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:49.840 [2024-12-13 05:51:49.758642] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:49.840 [2024-12-13 05:51:49.758650] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:49.840 [2024-12-13 05:51:49.758656] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:49.840 [2024-12-13 05:51:49.758662] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:49.840 [2024-12-13 05:51:49.770812] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.840 [2024-12-13 05:51:49.771257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.840 [2024-12-13 05:51:49.771273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:49.840 [2024-12-13 05:51:49.771280] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:49.840 [2024-12-13 05:51:49.771457] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:49.841 [2024-12-13 05:51:49.771630] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:49.841 [2024-12-13 05:51:49.771638] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:49.841 [2024-12-13 05:51:49.771645] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:49.841 [2024-12-13 05:51:49.771651] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:49.841 [2024-12-13 05:51:49.783827] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.841 [2024-12-13 05:51:49.784158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.841 [2024-12-13 05:51:49.784174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:49.841 [2024-12-13 05:51:49.784181] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:49.841 [2024-12-13 05:51:49.784348] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:49.841 [2024-12-13 05:51:49.784524] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:49.841 [2024-12-13 05:51:49.784532] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:49.841 [2024-12-13 05:51:49.784538] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:49.841 [2024-12-13 05:51:49.784544] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:49.841 [2024-12-13 05:51:49.796744] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.841 [2024-12-13 05:51:49.797190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.841 [2024-12-13 05:51:49.797233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:49.841 [2024-12-13 05:51:49.797264] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:49.841 [2024-12-13 05:51:49.797688] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:49.841 [2024-12-13 05:51:49.797858] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:49.841 [2024-12-13 05:51:49.797866] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:49.841 [2024-12-13 05:51:49.797872] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:49.841 [2024-12-13 05:51:49.797878] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:49.841 [2024-12-13 05:51:49.809577] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.841 [2024-12-13 05:51:49.809923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.841 [2024-12-13 05:51:49.809939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:49.841 [2024-12-13 05:51:49.809946] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:49.841 [2024-12-13 05:51:49.810114] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:49.841 [2024-12-13 05:51:49.810281] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:49.841 [2024-12-13 05:51:49.810289] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:49.841 [2024-12-13 05:51:49.810295] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:49.841 [2024-12-13 05:51:49.810301] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:49.841 [2024-12-13 05:51:49.822347] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.841 [2024-12-13 05:51:49.822767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.841 [2024-12-13 05:51:49.822783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:49.841 [2024-12-13 05:51:49.822790] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:49.841 [2024-12-13 05:51:49.822957] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:49.841 [2024-12-13 05:51:49.823125] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:49.841 [2024-12-13 05:51:49.823132] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:49.841 [2024-12-13 05:51:49.823138] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:49.841 [2024-12-13 05:51:49.823144] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:49.841 [2024-12-13 05:51:49.835361] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.841 [2024-12-13 05:51:49.835821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.841 [2024-12-13 05:51:49.835838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:49.841 [2024-12-13 05:51:49.835845] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:49.841 [2024-12-13 05:51:49.836018] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:49.841 [2024-12-13 05:51:49.836194] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:49.841 [2024-12-13 05:51:49.836202] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:49.841 [2024-12-13 05:51:49.836208] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:49.841 [2024-12-13 05:51:49.836215] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:50.135 [2024-12-13 05:51:49.848422] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:50.135 [2024-12-13 05:51:49.848860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.135 [2024-12-13 05:51:49.848876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:50.135 [2024-12-13 05:51:49.848884] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:50.135 [2024-12-13 05:51:49.849056] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:50.135 [2024-12-13 05:51:49.849229] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:50.135 [2024-12-13 05:51:49.849238] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:50.135 [2024-12-13 05:51:49.849244] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:50.135 [2024-12-13 05:51:49.849250] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:50.135 [2024-12-13 05:51:49.861529] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:50.135 [2024-12-13 05:51:49.861959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.135 [2024-12-13 05:51:49.861975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:50.135 [2024-12-13 05:51:49.861982] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:50.135 [2024-12-13 05:51:49.862154] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:50.135 [2024-12-13 05:51:49.862326] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:50.135 [2024-12-13 05:51:49.862334] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:50.135 [2024-12-13 05:51:49.862340] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:50.135 [2024-12-13 05:51:49.862347] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:50.135 [2024-12-13 05:51:49.874560] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:50.135 [2024-12-13 05:51:49.874968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.135 [2024-12-13 05:51:49.874991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:50.135 [2024-12-13 05:51:49.874999] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:50.135 [2024-12-13 05:51:49.875171] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:50.135 [2024-12-13 05:51:49.875343] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:50.135 [2024-12-13 05:51:49.875352] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:50.135 [2024-12-13 05:51:49.875361] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:50.135 [2024-12-13 05:51:49.875368] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:50.135 [2024-12-13 05:51:49.887450] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:50.135 [2024-12-13 05:51:49.887872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.135 [2024-12-13 05:51:49.887887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:50.135 [2024-12-13 05:51:49.887894] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:50.135 [2024-12-13 05:51:49.888062] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:50.135 [2024-12-13 05:51:49.888229] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:50.135 [2024-12-13 05:51:49.888237] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:50.135 [2024-12-13 05:51:49.888243] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:50.135 [2024-12-13 05:51:49.888248] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:50.135 [2024-12-13 05:51:49.900244] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:50.135 [2024-12-13 05:51:49.900665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.135 [2024-12-13 05:51:49.900712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:50.135 [2024-12-13 05:51:49.900734] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:50.135 [2024-12-13 05:51:49.901317] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:50.135 [2024-12-13 05:51:49.901847] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:50.135 [2024-12-13 05:51:49.901855] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:50.135 [2024-12-13 05:51:49.901861] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:50.135 [2024-12-13 05:51:49.901867] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:50.135 [2024-12-13 05:51:49.913006] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:50.135 [2024-12-13 05:51:49.913453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.135 [2024-12-13 05:51:49.913470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:50.135 [2024-12-13 05:51:49.913477] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:50.135 [2024-12-13 05:51:49.913645] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:50.135 [2024-12-13 05:51:49.913812] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:50.135 [2024-12-13 05:51:49.913820] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:50.135 [2024-12-13 05:51:49.913826] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:50.135 [2024-12-13 05:51:49.913832] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:50.135 [2024-12-13 05:51:49.925783] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:50.135 [2024-12-13 05:51:49.926154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.135 [2024-12-13 05:51:49.926169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:50.135 [2024-12-13 05:51:49.926176] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:50.135 [2024-12-13 05:51:49.926343] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:50.135 [2024-12-13 05:51:49.926543] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:50.135 [2024-12-13 05:51:49.926552] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:50.135 [2024-12-13 05:51:49.926558] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:50.135 [2024-12-13 05:51:49.926564] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:50.135 [2024-12-13 05:51:49.938681] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:50.135 [2024-12-13 05:51:49.939097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.135 [2024-12-13 05:51:49.939115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:50.135 [2024-12-13 05:51:49.939122] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:50.135 [2024-12-13 05:51:49.939314] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:50.135 [2024-12-13 05:51:49.939494] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:50.135 [2024-12-13 05:51:49.939503] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:50.135 [2024-12-13 05:51:49.939509] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:50.135 [2024-12-13 05:51:49.939516] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:50.136 [2024-12-13 05:51:49.951481] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:50.136 [2024-12-13 05:51:49.951870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.136 [2024-12-13 05:51:49.951885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:50.136 [2024-12-13 05:51:49.951892] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:50.136 [2024-12-13 05:51:49.952051] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:50.136 [2024-12-13 05:51:49.952210] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:50.136 [2024-12-13 05:51:49.952217] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:50.136 [2024-12-13 05:51:49.952223] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:50.136 [2024-12-13 05:51:49.952229] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:50.136 [2024-12-13 05:51:49.964331] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:50.136 [2024-12-13 05:51:49.964768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.136 [2024-12-13 05:51:49.964784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:50.136 [2024-12-13 05:51:49.964794] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:50.136 [2024-12-13 05:51:49.964962] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:50.136 [2024-12-13 05:51:49.965130] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:50.136 [2024-12-13 05:51:49.965137] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:50.136 [2024-12-13 05:51:49.965143] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:50.136 [2024-12-13 05:51:49.965149] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:50.136 [2024-12-13 05:51:49.977193] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:50.136 [2024-12-13 05:51:49.977612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.136 [2024-12-13 05:51:49.977628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:50.136 [2024-12-13 05:51:49.977634] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:50.136 [2024-12-13 05:51:49.977793] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:50.136 [2024-12-13 05:51:49.977952] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:50.136 [2024-12-13 05:51:49.977959] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:50.136 [2024-12-13 05:51:49.977965] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:50.136 [2024-12-13 05:51:49.977971] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:50.136 [2024-12-13 05:51:49.990002] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:50.136 [2024-12-13 05:51:49.990414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.136 [2024-12-13 05:51:49.990429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:50.136 [2024-12-13 05:51:49.990435] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:50.136 [2024-12-13 05:51:49.990622] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:50.136 [2024-12-13 05:51:49.990790] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:50.136 [2024-12-13 05:51:49.990797] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:50.136 [2024-12-13 05:51:49.990803] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:50.136 [2024-12-13 05:51:49.990810] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:50.136 [2024-12-13 05:51:50.002997] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:50.136 [2024-12-13 05:51:50.003397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.136 [2024-12-13 05:51:50.003413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:50.136 [2024-12-13 05:51:50.003420] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:50.136 [2024-12-13 05:51:50.003599] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:50.136 [2024-12-13 05:51:50.003775] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:50.136 [2024-12-13 05:51:50.003783] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:50.136 [2024-12-13 05:51:50.003789] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:50.136 [2024-12-13 05:51:50.003796] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:50.136 [2024-12-13 05:51:50.016017] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:50.136 [2024-12-13 05:51:50.016402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.136 [2024-12-13 05:51:50.016420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:50.136 [2024-12-13 05:51:50.016429] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:50.136 [2024-12-13 05:51:50.016629] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:50.136 [2024-12-13 05:51:50.016844] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:50.136 [2024-12-13 05:51:50.016896] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:50.136 [2024-12-13 05:51:50.016903] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:50.136 [2024-12-13 05:51:50.016911] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:50.136 [2024-12-13 05:51:50.029072] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:50.136 [2024-12-13 05:51:50.029430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.136 [2024-12-13 05:51:50.029446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:50.136 [2024-12-13 05:51:50.029459] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:50.136 [2024-12-13 05:51:50.029632] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:50.136 [2024-12-13 05:51:50.029805] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:50.136 [2024-12-13 05:51:50.029813] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:50.136 [2024-12-13 05:51:50.029819] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:50.136 [2024-12-13 05:51:50.029825] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:50.136 [2024-12-13 05:51:50.042195] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:50.136 [2024-12-13 05:51:50.042521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.136 [2024-12-13 05:51:50.042537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:50.136 [2024-12-13 05:51:50.042545] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:50.136 [2024-12-13 05:51:50.042718] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:50.136 [2024-12-13 05:51:50.042890] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:50.136 [2024-12-13 05:51:50.042898] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:50.136 [2024-12-13 05:51:50.042908] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:50.136 [2024-12-13 05:51:50.042915] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:50.136 [2024-12-13 05:51:50.055171] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:50.136 [2024-12-13 05:51:50.055599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.136 [2024-12-13 05:51:50.055615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:50.136 [2024-12-13 05:51:50.055622] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:50.136 [2024-12-13 05:51:50.055790] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:50.136 [2024-12-13 05:51:50.055975] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:50.136 [2024-12-13 05:51:50.055983] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:50.136 [2024-12-13 05:51:50.055989] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:50.136 [2024-12-13 05:51:50.055995] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:50.136 [2024-12-13 05:51:50.068208] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:50.136 [2024-12-13 05:51:50.068635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.136 [2024-12-13 05:51:50.068652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:50.136 [2024-12-13 05:51:50.068659] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:50.136 [2024-12-13 05:51:50.068827] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:50.136 [2024-12-13 05:51:50.068995] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:50.136 [2024-12-13 05:51:50.069003] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:50.136 [2024-12-13 05:51:50.069009] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:50.136 [2024-12-13 05:51:50.069015] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:50.137 [2024-12-13 05:51:50.081223] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:50.137 [2024-12-13 05:51:50.081545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.137 [2024-12-13 05:51:50.081562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:50.137 [2024-12-13 05:51:50.081569] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:50.137 [2024-12-13 05:51:50.081742] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:50.137 [2024-12-13 05:51:50.081919] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:50.137 [2024-12-13 05:51:50.081928] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:50.137 [2024-12-13 05:51:50.081934] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:50.137 [2024-12-13 05:51:50.081940] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:50.137 [2024-12-13 05:51:50.094276] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:50.137 [2024-12-13 05:51:50.094653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.137 [2024-12-13 05:51:50.094669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:50.137 [2024-12-13 05:51:50.094676] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:50.137 [2024-12-13 05:51:50.094848] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:50.137 [2024-12-13 05:51:50.095020] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:50.137 [2024-12-13 05:51:50.095028] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:50.137 [2024-12-13 05:51:50.095034] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:50.137 [2024-12-13 05:51:50.095041] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:50.137 [2024-12-13 05:51:50.107281] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:50.137 [2024-12-13 05:51:50.107655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.137 [2024-12-13 05:51:50.107671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:50.137 [2024-12-13 05:51:50.107678] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:50.137 [2024-12-13 05:51:50.107850] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:50.137 [2024-12-13 05:51:50.108022] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:50.137 [2024-12-13 05:51:50.108030] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:50.137 [2024-12-13 05:51:50.108037] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:50.137 [2024-12-13 05:51:50.108043] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:50.137 [2024-12-13 05:51:50.120274] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:50.137 [2024-12-13 05:51:50.120692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.137 [2024-12-13 05:51:50.120709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:50.137 [2024-12-13 05:51:50.120716] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:50.137 [2024-12-13 05:51:50.120888] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:50.137 [2024-12-13 05:51:50.121061] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:50.137 [2024-12-13 05:51:50.121069] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:50.137 [2024-12-13 05:51:50.121075] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:50.137 [2024-12-13 05:51:50.121081] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:50.401 [2024-12-13 05:51:50.133295] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:50.401 [2024-12-13 05:51:50.133726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.401 [2024-12-13 05:51:50.133742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:50.401 [2024-12-13 05:51:50.133753] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:50.401 [2024-12-13 05:51:50.133926] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:50.401 [2024-12-13 05:51:50.134098] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:50.401 [2024-12-13 05:51:50.134106] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:50.401 [2024-12-13 05:51:50.134112] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:50.401 [2024-12-13 05:51:50.134119] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:50.401 [2024-12-13 05:51:50.146340] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:50.401 [2024-12-13 05:51:50.146767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.401 [2024-12-13 05:51:50.146784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:50.401 [2024-12-13 05:51:50.146791] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:50.401 [2024-12-13 05:51:50.146963] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:50.401 [2024-12-13 05:51:50.147136] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:50.401 [2024-12-13 05:51:50.147144] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:50.401 [2024-12-13 05:51:50.147150] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:50.401 [2024-12-13 05:51:50.147156] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:50.401 [2024-12-13 05:51:50.159396] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:50.401 [2024-12-13 05:51:50.159743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.401 [2024-12-13 05:51:50.159759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:50.401 [2024-12-13 05:51:50.159766] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:50.401 [2024-12-13 05:51:50.159939] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:50.401 [2024-12-13 05:51:50.160116] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:50.401 [2024-12-13 05:51:50.160124] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:50.401 [2024-12-13 05:51:50.160130] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:50.401 [2024-12-13 05:51:50.160136] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:50.401 [2024-12-13 05:51:50.172512] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:50.401 [2024-12-13 05:51:50.172915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:50.401 [2024-12-13 05:51:50.172931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420
00:35:50.401 [2024-12-13 05:51:50.172938] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set
00:35:50.401 [2024-12-13 05:51:50.173110] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor
00:35:50.401 [2024-12-13 05:51:50.173286] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:50.401 [2024-12-13 05:51:50.173294] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:50.401 [2024-12-13 05:51:50.173300] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:50.401 [2024-12-13 05:51:50.173306] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:50.401 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 535717 Killed "${NVMF_APP[@]}" "$@"
00:35:50.401 05:51:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:35:50.401 05:51:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:35:50.401 05:51:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:35:50.401 05:51:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable
00:35:50.401 05:51:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:35:50.401 05:51:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=536900
00:35:50.401 05:51:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 536900
00:35:50.401 05:51:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:35:50.401 05:51:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 536900 ']'
00:35:50.401 05:51:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:35:50.401 05:51:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100
00:35:50.401 [2024-12-13 05:51:50.185510] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:50.401 05:51:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:35:50.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
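The waitforlisten step traced above blocks until the relaunched nvmf_tgt accepts connections on the RPC socket /var/tmp/spdk.sock (the trace shows rpc_addr=/var/tmp/spdk.sock and max_retries=100); the real helper is a bash function in SPDK's autotest_common.sh. A rough C sketch of the same poll-until-listening idea; the 100 ms retry interval is an assumption, not a value from the log:

/* wait_for_listener.c - generic poll-until-listening sketch, not the
 * actual waitforlisten shell helper. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/un.h>

static int wait_for_listener(const char *path, int max_retries)
{
    for (int i = 0; i < max_retries; i++) {       /* log shows max_retries=100 */
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        if (fd < 0)
            return -1;
        struct sockaddr_un addr = {0};
        addr.sun_family = AF_UNIX;
        strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
            close(fd);                            /* RPC socket up: target ready */
            return 0;
        }
        close(fd);
        usleep(100 * 1000);                       /* assumed 100 ms between tries */
    }
    return -1;                                    /* gave up: caller fails the test */
}

int main(void)
{
    if (wait_for_listener("/var/tmp/spdk.sock", 100) != 0) {
        fprintf(stderr, "timed out waiting for listener\n");
        return 1;
    }
    printf("listener is up\n");
    return 0;
}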
00:35:50.401 [2024-12-13 05:51:50.185918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:50.401 [2024-12-13 05:51:50.185937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420
00:35:50.401 [2024-12-13 05:51:50.185945] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set
00:35:50.401 05:51:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable
00:35:50.401 [2024-12-13 05:51:50.186120] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor
00:35:50.401 [2024-12-13 05:51:50.186297] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:50.401 [2024-12-13 05:51:50.186307] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:50.401 [2024-12-13 05:51:50.186313] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:50.401 05:51:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:35:50.401 [2024-12-13 05:51:50.186319] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:50.401 [2024-12-13 05:51:50.198528] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:50.401 [2024-12-13 05:51:50.198927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:50.401 [2024-12-13 05:51:50.198942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420
00:35:50.401 [2024-12-13 05:51:50.198951] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set
00:35:50.401 [2024-12-13 05:51:50.199128] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor
00:35:50.401 [2024-12-13 05:51:50.199301] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:50.401 [2024-12-13 05:51:50.199309] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:50.401 [2024-12-13 05:51:50.199315] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:50.401 [2024-12-13 05:51:50.199321] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:50.401 [2024-12-13 05:51:50.211470] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:50.401 [2024-12-13 05:51:50.211889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:50.401 [2024-12-13 05:51:50.211905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420
00:35:50.401 [2024-12-13 05:51:50.211913] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set
00:35:50.401 [2024-12-13 05:51:50.212085] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor
00:35:50.401 [2024-12-13 05:51:50.212258] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:50.401 [2024-12-13 05:51:50.212266] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:50.401 [2024-12-13 05:51:50.212272] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:50.401 [2024-12-13 05:51:50.212278] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:50.401 [2024-12-13 05:51:50.224393] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:50.401 [2024-12-13 05:51:50.224812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:50.401 [2024-12-13 05:51:50.224828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420
00:35:50.401 [2024-12-13 05:51:50.224835] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set
00:35:50.401 [2024-12-13 05:51:50.225008] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor
00:35:50.401 [2024-12-13 05:51:50.225185] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:50.401 [2024-12-13 05:51:50.225193] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:50.401 [2024-12-13 05:51:50.225200] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:50.401 [2024-12-13 05:51:50.225206] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:50.402 [2024-12-13 05:51:50.229417] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization...
00:35:50.402 [2024-12-13 05:51:50.229469] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:35:50.402 [2024-12-13 05:51:50.237385] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:50.402 [2024-12-13 05:51:50.237795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:50.402 [2024-12-13 05:51:50.237812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420
00:35:50.402 [2024-12-13 05:51:50.237819] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set
00:35:50.402 [2024-12-13 05:51:50.237996] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor
00:35:50.402 [2024-12-13 05:51:50.238170] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:50.402 [2024-12-13 05:51:50.238178] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:50.402 [2024-12-13 05:51:50.238185] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:50.402 [2024-12-13 05:51:50.238191] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:50.402 [2024-12-13 05:51:50.250405] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:50.402 [2024-12-13 05:51:50.250859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:50.402 [2024-12-13 05:51:50.250875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420
00:35:50.402 [2024-12-13 05:51:50.250883] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set
00:35:50.402 [2024-12-13 05:51:50.251063] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor
00:35:50.402 [2024-12-13 05:51:50.251231] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:50.402 [2024-12-13 05:51:50.251239] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:50.402 [2024-12-13 05:51:50.251247] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:50.402 [2024-12-13 05:51:50.251253] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:50.402 [2024-12-13 05:51:50.263451] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:50.402 [2024-12-13 05:51:50.263774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.402 [2024-12-13 05:51:50.263790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:50.402 [2024-12-13 05:51:50.263798] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:50.402 [2024-12-13 05:51:50.263970] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:50.402 [2024-12-13 05:51:50.264143] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:50.402 [2024-12-13 05:51:50.264151] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:50.402 [2024-12-13 05:51:50.264157] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:50.402 [2024-12-13 05:51:50.264164] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:50.402 [2024-12-13 05:51:50.276533] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:50.402 [2024-12-13 05:51:50.276902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.402 [2024-12-13 05:51:50.276919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:50.402 [2024-12-13 05:51:50.276926] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:50.402 [2024-12-13 05:51:50.277098] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:50.402 [2024-12-13 05:51:50.277271] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:50.402 [2024-12-13 05:51:50.277285] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:50.402 [2024-12-13 05:51:50.277292] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:50.402 [2024-12-13 05:51:50.277299] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:50.402 [2024-12-13 05:51:50.289574] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:50.402 [2024-12-13 05:51:50.289983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.402 [2024-12-13 05:51:50.289999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:50.402 [2024-12-13 05:51:50.290006] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:50.402 [2024-12-13 05:51:50.290178] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:50.402 [2024-12-13 05:51:50.290351] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:50.402 [2024-12-13 05:51:50.290359] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:50.402 [2024-12-13 05:51:50.290365] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:50.402 [2024-12-13 05:51:50.290372] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:50.402 [2024-12-13 05:51:50.302574] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:50.402 [2024-12-13 05:51:50.303008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.402 [2024-12-13 05:51:50.303024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:50.402 [2024-12-13 05:51:50.303032] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:50.402 [2024-12-13 05:51:50.303204] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:50.402 [2024-12-13 05:51:50.303378] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:50.402 [2024-12-13 05:51:50.303386] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:50.402 [2024-12-13 05:51:50.303392] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:50.402 [2024-12-13 05:51:50.303398] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:50.402 [2024-12-13 05:51:50.306851] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:35:50.402 [2024-12-13 05:51:50.315616] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:50.402 [2024-12-13 05:51:50.315993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.402 [2024-12-13 05:51:50.316012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:50.402 [2024-12-13 05:51:50.316019] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:50.402 [2024-12-13 05:51:50.316191] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:50.402 [2024-12-13 05:51:50.316365] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:50.402 [2024-12-13 05:51:50.316374] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:50.402 [2024-12-13 05:51:50.316385] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:50.402 [2024-12-13 05:51:50.316393] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:50.402 [2024-12-13 05:51:50.328380] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:50.402 [2024-12-13 05:51:50.328410] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:50.402 [2024-12-13 05:51:50.328417] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:50.402 [2024-12-13 05:51:50.328423] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:50.402 [2024-12-13 05:51:50.328427] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:50.402 [2024-12-13 05:51:50.328626] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:50.402 [2024-12-13 05:51:50.329048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.402 [2024-12-13 05:51:50.329067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:50.402 [2024-12-13 05:51:50.329075] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:50.402 [2024-12-13 05:51:50.329249] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:50.402 [2024-12-13 05:51:50.329425] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:50.402 [2024-12-13 05:51:50.329433] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:50.402 [2024-12-13 05:51:50.329440] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:50.402 [2024-12-13 05:51:50.329446] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
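The app_setup_trace notices above spell out how to inspect the tracepoint buffer that was enabled with group mask 0xFFFF. A short sketch of that workflow, using only the command and shared-memory path the notices themselves name:

# Snapshot the running nvmf app's tracepoints (command quoted from the notice above):
spdk_trace -s nvmf -i 0
# Or keep the raw trace buffer for offline analysis/debug, as the notice suggests:
cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0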
00:35:50.402 [2024-12-13 05:51:50.329696] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:35:50.402 [2024-12-13 05:51:50.329804] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:35:50.402 [2024-12-13 05:51:50.329804] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:35:50.402 [2024-12-13 05:51:50.341684] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:50.402 [2024-12-13 05:51:50.342124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.402 [2024-12-13 05:51:50.342144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:50.402 [2024-12-13 05:51:50.342154] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:50.402 [2024-12-13 05:51:50.342329] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:50.402 [2024-12-13 05:51:50.342508] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:50.402 [2024-12-13 05:51:50.342519] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:50.402 [2024-12-13 05:51:50.342526] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:50.403 [2024-12-13 05:51:50.342533] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:50.403 [2024-12-13 05:51:50.354973] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:50.403 [2024-12-13 05:51:50.355373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.403 [2024-12-13 05:51:50.355393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:50.403 [2024-12-13 05:51:50.355403] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:50.403 [2024-12-13 05:51:50.355590] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:50.403 [2024-12-13 05:51:50.355766] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:50.403 [2024-12-13 05:51:50.355776] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:50.403 [2024-12-13 05:51:50.355783] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:50.403 [2024-12-13 05:51:50.355791] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
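The reactor placement is consistent with the EAL arguments at the top of this section: the app was started with -c 0xE, reports 'Total cores available: 3', and starts reactors on cores 1, 2 and 3, since mask 0xE has exactly bits 1 through 3 set. A one-line check of that arithmetic:

# 0xE = 0b1110 -> bits/cores 1, 2 and 3 (core 0 left free); prints 0xE
printf '0x%X\n' $(( (1 << 1) | (1 << 2) | (1 << 3) ))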
00:35:50.403 [2024-12-13 05:51:50.368012] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:50.403 [2024-12-13 05:51:50.368456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.403 [2024-12-13 05:51:50.368477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:50.403 [2024-12-13 05:51:50.368485] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:50.403 [2024-12-13 05:51:50.368659] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:50.403 [2024-12-13 05:51:50.368832] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:50.403 [2024-12-13 05:51:50.368840] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:50.403 [2024-12-13 05:51:50.368847] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:50.403 [2024-12-13 05:51:50.368854] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:50.403 [2024-12-13 05:51:50.381076] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:50.403 [2024-12-13 05:51:50.381426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.403 [2024-12-13 05:51:50.381445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:50.403 [2024-12-13 05:51:50.381459] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:50.403 [2024-12-13 05:51:50.381633] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:50.403 [2024-12-13 05:51:50.381807] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:50.403 [2024-12-13 05:51:50.381815] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:50.403 [2024-12-13 05:51:50.381822] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:50.403 [2024-12-13 05:51:50.381829] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:50.403 [2024-12-13 05:51:50.394050] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:50.403 [2024-12-13 05:51:50.394495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.403 [2024-12-13 05:51:50.394515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:50.403 [2024-12-13 05:51:50.394523] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:50.403 [2024-12-13 05:51:50.394696] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:50.403 [2024-12-13 05:51:50.394870] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:50.403 [2024-12-13 05:51:50.394883] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:50.403 [2024-12-13 05:51:50.394889] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:50.403 [2024-12-13 05:51:50.394896] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:50.403 [2024-12-13 05:51:50.407142] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:50.403 [2024-12-13 05:51:50.407579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.403 [2024-12-13 05:51:50.407595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:50.403 [2024-12-13 05:51:50.407603] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:50.403 [2024-12-13 05:51:50.407776] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:50.403 [2024-12-13 05:51:50.407948] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:50.403 [2024-12-13 05:51:50.407956] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:50.403 [2024-12-13 05:51:50.407963] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:50.403 [2024-12-13 05:51:50.407969] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:50.662 [2024-12-13 05:51:50.420195] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:50.662 [2024-12-13 05:51:50.420625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.662 [2024-12-13 05:51:50.420642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:50.662 [2024-12-13 05:51:50.420649] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:50.662 [2024-12-13 05:51:50.420822] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:50.662 [2024-12-13 05:51:50.420994] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:50.662 [2024-12-13 05:51:50.421002] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:50.662 [2024-12-13 05:51:50.421008] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:50.662 [2024-12-13 05:51:50.421014] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:50.662 05:51:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:50.662 05:51:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:35:50.662 05:51:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:50.662 05:51:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:50.662 05:51:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:50.662 [2024-12-13 05:51:50.433228] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:50.662 [2024-12-13 05:51:50.433707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.662 [2024-12-13 05:51:50.433724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:50.662 [2024-12-13 05:51:50.433731] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:50.662 [2024-12-13 05:51:50.433904] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:50.662 [2024-12-13 05:51:50.434080] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:50.662 [2024-12-13 05:51:50.434089] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:50.662 [2024-12-13 05:51:50.434095] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:50.662 [2024-12-13 05:51:50.434102] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:50.662 [2024-12-13 05:51:50.446327] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:50.662 [2024-12-13 05:51:50.446770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.662 [2024-12-13 05:51:50.446787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:50.662 [2024-12-13 05:51:50.446794] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:50.662 [2024-12-13 05:51:50.446979] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:50.662 [2024-12-13 05:51:50.447153] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:50.662 [2024-12-13 05:51:50.447162] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:50.662 [2024-12-13 05:51:50.447168] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:50.662 [2024-12-13 05:51:50.447174] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:50.662 [2024-12-13 05:51:50.459391] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:50.662 [2024-12-13 05:51:50.459693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.662 [2024-12-13 05:51:50.459710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:50.662 [2024-12-13 05:51:50.459718] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:50.662 [2024-12-13 05:51:50.459891] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:50.662 [2024-12-13 05:51:50.460063] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:50.662 [2024-12-13 05:51:50.460071] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:50.662 [2024-12-13 05:51:50.460077] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:50.662 [2024-12-13 05:51:50.460083] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:50.662 05:51:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:50.662 05:51:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:50.662 05:51:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:50.662 05:51:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:50.662 [2024-12-13 05:51:50.468323] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:50.662 [2024-12-13 05:51:50.472664] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:50.662 [2024-12-13 05:51:50.473038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.662 [2024-12-13 05:51:50.473055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:50.662 [2024-12-13 05:51:50.473066] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:50.662 [2024-12-13 05:51:50.473239] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:50.662 05:51:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:50.662 [2024-12-13 05:51:50.473421] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:50.662 [2024-12-13 05:51:50.473429] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:50.662 [2024-12-13 05:51:50.473436] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:50.662 [2024-12-13 05:51:50.473442] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
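rpc_cmd above is the autotest wrapper that hands its arguments to SPDK's scripts/rpc.py against the running target, so the transport-creation step just traced is roughly equivalent to this standalone invocation (flags copied verbatim from the trace; the default RPC socket is assumed):

# Create the NVMe-oF TCP transport, flags exactly as used by the harness above
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192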
00:35:50.662 05:51:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:50.662 05:51:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:50.662 05:51:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:50.662 [2024-12-13 05:51:50.485676] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:50.662 [2024-12-13 05:51:50.486083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.662 [2024-12-13 05:51:50.486100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:50.662 [2024-12-13 05:51:50.486107] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:50.662 [2024-12-13 05:51:50.486280] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:50.662 [2024-12-13 05:51:50.486458] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:50.662 [2024-12-13 05:51:50.486467] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:50.662 [2024-12-13 05:51:50.486474] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:50.662 [2024-12-13 05:51:50.486480] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:50.662 [2024-12-13 05:51:50.498689] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:50.662 [2024-12-13 05:51:50.499099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.662 [2024-12-13 05:51:50.499116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:50.662 [2024-12-13 05:51:50.499123] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:50.662 [2024-12-13 05:51:50.499296] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:50.662 [2024-12-13 05:51:50.499478] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:50.662 [2024-12-13 05:51:50.499487] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:50.662 [2024-12-13 05:51:50.499493] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:50.662 [2024-12-13 05:51:50.499499] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
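The bdev_malloc_create 64 512 -b Malloc0 call traced above allocates the RAM-backed block device the test will export: size first, block size second, -b to name it. As a standalone sketch:

# 64 MB RAM-backed bdev with 512-byte blocks, named Malloc0
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0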
00:35:50.662 Malloc0 00:35:50.662 05:51:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:50.662 05:51:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:50.662 05:51:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:50.662 05:51:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:50.662 [2024-12-13 05:51:50.511719] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:50.662 [2024-12-13 05:51:50.512020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.662 [2024-12-13 05:51:50.512036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:50.662 [2024-12-13 05:51:50.512043] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:50.662 [2024-12-13 05:51:50.512216] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:50.662 [2024-12-13 05:51:50.512388] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:50.662 [2024-12-13 05:51:50.512396] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:50.662 [2024-12-13 05:51:50.512403] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:50.662 [2024-12-13 05:51:50.512408] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:50.662 05:51:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:50.663 05:51:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:50.663 05:51:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:50.663 05:51:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:50.663 [2024-12-13 05:51:50.524803] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:50.663 [2024-12-13 05:51:50.525202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.663 [2024-12-13 05:51:50.525219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182bcf0 with addr=10.0.0.2, port=4420 00:35:50.663 [2024-12-13 05:51:50.525226] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bcf0 is same with the state(6) to be set 00:35:50.663 [2024-12-13 05:51:50.525398] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182bcf0 (9): Bad file descriptor 00:35:50.663 [2024-12-13 05:51:50.525576] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:50.663 [2024-12-13 05:51:50.525585] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:50.663 [2024-12-13 05:51:50.525591] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 
00:35:50.663 [2024-12-13 05:51:50.525597] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:50.663 05:51:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:50.663 05:51:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:50.663 05:51:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:50.663 05:51:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:50.663 [2024-12-13 05:51:50.532294] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:50.663 05:51:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:50.663 05:51:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 535974 00:35:50.663 [2024-12-13 05:51:50.537817] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:50.663 [2024-12-13 05:51:50.568971] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 00:35:52.037 4931.83 IOPS, 19.26 MiB/s [2024-12-13T04:51:52.986Z] 5878.57 IOPS, 22.96 MiB/s [2024-12-13T04:51:53.921Z] 6564.12 IOPS, 25.64 MiB/s [2024-12-13T04:51:54.855Z] 7091.11 IOPS, 27.70 MiB/s [2024-12-13T04:51:55.788Z] 7504.40 IOPS, 29.31 MiB/s [2024-12-13T04:51:56.722Z] 7864.73 IOPS, 30.72 MiB/s [2024-12-13T04:51:58.097Z] 8143.67 IOPS, 31.81 MiB/s [2024-12-13T04:51:59.031Z] 8401.15 IOPS, 32.82 MiB/s [2024-12-13T04:51:59.966Z] 8620.86 IOPS, 33.68 MiB/s [2024-12-13T04:51:59.966Z] 8810.93 IOPS, 34.42 MiB/s 00:35:59.951 Latency(us) 00:35:59.951 [2024-12-13T04:51:59.966Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:59.951 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:35:59.951 Verification LBA range: start 0x0 length 0x4000 00:35:59.951 Nvme1n1 : 15.05 8792.23 34.34 11068.72 0.00 6408.25 433.01 40694.74 00:35:59.951 [2024-12-13T04:51:59.966Z] =================================================================================================================== 00:35:59.951 [2024-12-13T04:51:59.966Z] Total : 8792.23 34.34 11068.72 0.00 6408.25 433.01 40694.74 00:35:59.951 05:51:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:35:59.951 05:51:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:59.951 05:51:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:59.952 05:51:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:59.952 05:51:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:59.952 05:51:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:35:59.952 05:51:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:35:59.952 05:51:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:59.952 05:51:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:35:59.952 05:51:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:59.952 05:51:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 
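With the transport and Malloc0 in place (sketches above), the remaining rpc_cmd calls traced here create the subsystem, attach the namespace, and open the listener; once 'NVMe/TCP Target Listening on 10.0.0.2 port 4420' appears, the host's next reset finally succeeds ('Resetting controller successful') and bdevperf ramps up toward ~8800 IOPS. Continuing the same hypothetical rpc.py session, arguments copied from the trace:

./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host, -s: serial number
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0                    # expose Malloc0 as a namespace
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420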
00:35:59.952 05:51:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:59.952 05:51:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:59.952 rmmod nvme_tcp 00:35:59.952 rmmod nvme_fabrics 00:35:59.952 rmmod nvme_keyring 00:35:59.952 05:51:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:59.952 05:51:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:35:59.952 05:51:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:35:59.952 05:51:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 536900 ']' 00:35:59.952 05:51:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 536900 00:35:59.952 05:51:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 536900 ']' 00:35:59.952 05:51:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 536900 00:35:59.952 05:51:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:35:59.952 05:51:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:59.952 05:51:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 536900 00:36:00.211 05:51:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:00.211 05:51:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:36:00.211 05:51:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 536900' 00:36:00.211 killing process with pid 536900 00:36:00.211 05:51:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 536900 00:36:00.211 05:51:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 536900 00:36:00.211 05:52:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:00.211 05:52:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:00.211 05:52:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:00.211 05:52:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:36:00.211 05:52:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:36:00.211 05:52:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:00.211 05:52:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:36:00.211 05:52:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:00.211 05:52:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:00.211 05:52:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:00.211 05:52:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:00.211 05:52:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:02.750 05:52:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:02.750 00:36:02.750 real 0m26.070s 00:36:02.750 user 1m1.049s 00:36:02.750 sys 0m6.665s 00:36:02.750 05:52:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:36:02.750 05:52:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:02.750 ************************************ 00:36:02.750 END TEST nvmf_bdevperf 00:36:02.750 ************************************ 00:36:02.750 05:52:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:36:02.750 05:52:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:36:02.750 05:52:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:02.750 05:52:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:36:02.750 ************************************ 00:36:02.750 START TEST nvmf_target_disconnect 00:36:02.750 ************************************ 00:36:02.750 05:52:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:36:02.750 * Looking for test storage... 00:36:02.750 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:36:02.750 05:52:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:36:02.750 05:52:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:36:02.750 05:52:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:36:02.750 05:52:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:36:02.750 05:52:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:02.750 05:52:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:02.750 05:52:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:02.750 05:52:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:36:02.750 05:52:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:36:02.750 05:52:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:36:02.750 05:52:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:36:02.750 05:52:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:36:02.750 05:52:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:36:02.750 05:52:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:36:02.750 05:52:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:02.750 05:52:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:36:02.750 05:52:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:36:02.750 05:52:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:02.750 05:52:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:02.750 05:52:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:36:02.750 05:52:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:36:02.750 05:52:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:02.750 05:52:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:36:02.750 05:52:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:36:02.750 05:52:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:36:02.750 05:52:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:36:02.750 05:52:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:02.750 05:52:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:36:02.751 05:52:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:36:02.751 05:52:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:02.751 05:52:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:02.751 05:52:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:36:02.751 05:52:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:02.751 05:52:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:36:02.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:02.751 --rc genhtml_branch_coverage=1 00:36:02.751 --rc genhtml_function_coverage=1 00:36:02.751 --rc genhtml_legend=1 00:36:02.751 --rc geninfo_all_blocks=1 00:36:02.751 --rc geninfo_unexecuted_blocks=1 00:36:02.751 00:36:02.751 ' 00:36:02.751 05:52:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:36:02.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:02.751 --rc genhtml_branch_coverage=1 00:36:02.751 --rc genhtml_function_coverage=1 00:36:02.751 --rc genhtml_legend=1 00:36:02.751 --rc geninfo_all_blocks=1 00:36:02.751 --rc geninfo_unexecuted_blocks=1 00:36:02.751 00:36:02.751 ' 00:36:02.751 05:52:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:36:02.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:02.751 --rc genhtml_branch_coverage=1 00:36:02.751 --rc genhtml_function_coverage=1 00:36:02.751 --rc genhtml_legend=1 00:36:02.751 --rc geninfo_all_blocks=1 00:36:02.751 --rc geninfo_unexecuted_blocks=1 00:36:02.751 00:36:02.751 ' 00:36:02.751 05:52:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:36:02.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:02.751 --rc genhtml_branch_coverage=1 00:36:02.751 --rc genhtml_function_coverage=1 00:36:02.751 --rc genhtml_legend=1 00:36:02.751 --rc geninfo_all_blocks=1 00:36:02.751 --rc geninfo_unexecuted_blocks=1 00:36:02.751 00:36:02.751 ' 00:36:02.751 05:52:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:02.751 05:52:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@7 -- # uname -s 00:36:02.751 05:52:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:02.751 05:52:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:02.751 05:52:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:02.751 05:52:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:02.751 05:52:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:02.751 05:52:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:02.751 05:52:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:02.751 05:52:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:02.751 05:52:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:02.751 05:52:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:02.751 05:52:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:36:02.751 05:52:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:36:02.751 05:52:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:02.751 05:52:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:02.751 05:52:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:02.751 05:52:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:02.751 05:52:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:02.751 05:52:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:36:02.751 05:52:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:02.751 05:52:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:02.751 05:52:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:02.751 05:52:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:02.751 05:52:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:02.751 05:52:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:02.751 05:52:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:36:02.751 05:52:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:02.751 05:52:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:36:02.751 05:52:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:02.751 05:52:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:02.751 05:52:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:02.751 05:52:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:02.751 05:52:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:02.751 05:52:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:02.751 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:02.751 05:52:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:02.751 05:52:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:02.751 05:52:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:02.751 05:52:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:36:02.751 05:52:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:36:02.751 05:52:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:36:02.751 05:52:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:36:02.751 05:52:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:02.751 05:52:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:02.751 05:52:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:02.751 05:52:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:02.751 05:52:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:02.751 05:52:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:02.751 05:52:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:02.751 05:52:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:02.751 05:52:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:02.751 05:52:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:02.751 05:52:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:36:02.751 05:52:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:36:09.324 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:09.324 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:36:09.324 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:09.324 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:09.324 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:09.324 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:09.324 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:09.324 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:36:09.324 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:09.324 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:36:09.324 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:36:09.324 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:36:09.324 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:36:09.324 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:36:09.324 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:36:09.324 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:09.324 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:09.324 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:09.324 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:09.324 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:09.324 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:09.324 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:09.324 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:09.324 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:09.324 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:09.324 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:09.324 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:09.324 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:09.324 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:09.324 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:09.324 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:09.324 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:09.325 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:09.325 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:09.325 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:36:09.325 Found 0000:af:00.0 (0x8086 - 0x159b) 00:36:09.325 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:09.325 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:09.325 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:09.325 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:09.325 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:09.325 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:09.325 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:36:09.325 Found 0000:af:00.1 (0x8086 - 0x159b) 00:36:09.325 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:09.325 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:09.325 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:36:09.325 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:09.325 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:09.325 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:09.325 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:09.325 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:09.325 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:09.325 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:09.325 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:09.325 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:09.325 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:09.325 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:09.325 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:09.325 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:36:09.325 Found net devices under 0000:af:00.0: cvl_0_0 00:36:09.325 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:09.325 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:09.325 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:09.325 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:09.325 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:09.325 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:09.325 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:09.325 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:09.325 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:36:09.325 Found net devices under 0000:af:00.1: cvl_0_1 00:36:09.325 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:09.325 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:09.325 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:36:09.325 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:09.325 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:09.325 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:09.325 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 
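The nvmf_tcp_init sequence that follows builds the test topology: the target-side port is moved into a private network namespace so that a single host can drive real NVMe/TCP traffic between two of its own interfaces. A minimal sketch of that setup, assembled from the commands visible in the trace below (the cvl_0_0/cvl_0_1 names belong to this machine's ice ports and will differ elsewhere):

    ip netns add cvl_0_0_ns_spdk                                    # private namespace for the target
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                       # target port moves into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                             # initiator stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # admit NVMe/TCP on the initiator side

The two ping checks afterwards (10.0.0.2 from the root namespace, 10.0.0.1 from inside cvl_0_0_ns_spdk) verify the path in both directions before any NVMe traffic is attempted.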
00:36:09.325 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:09.325 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:09.325 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:09.325 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:09.325 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:09.325 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:09.325 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:09.325 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:09.325 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:09.325 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:09.325 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:09.325 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:09.325 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:09.325 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:09.325 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:09.325 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:09.325 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:09.325 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:09.325 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:09.325 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:09.325 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:09.325 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:09.325 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:09.325 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.327 ms 00:36:09.325 00:36:09.325 --- 10.0.0.2 ping statistics --- 00:36:09.325 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:09.325 rtt min/avg/max/mdev = 0.327/0.327/0.327/0.000 ms 00:36:09.325 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:09.325 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:09.325 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.241 ms 00:36:09.325 00:36:09.325 --- 10.0.0.1 ping statistics --- 00:36:09.325 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:09.325 rtt min/avg/max/mdev = 0.241/0.241/0.241/0.000 ms 00:36:09.325 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:09.325 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:36:09.325 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:09.325 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:09.325 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:09.325 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:09.325 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:09.325 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:09.325 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:09.325 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:36:09.325 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:09.325 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:09.325 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:36:09.325 ************************************ 00:36:09.325 START TEST nvmf_target_disconnect_tc1 00:36:09.325 ************************************ 00:36:09.325 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:36:09.325 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:09.325 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:36:09.325 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:09.325 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:36:09.325 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:09.325 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:36:09.325 05:52:08 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:09.325 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:36:09.325 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:09.325 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:36:09.325 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:36:09.326 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:09.326 [2024-12-13 05:52:08.700616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.326 [2024-12-13 05:52:08.700719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efe590 with addr=10.0.0.2, port=4420 00:36:09.326 [2024-12-13 05:52:08.700773] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:36:09.326 [2024-12-13 05:52:08.700800] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:36:09.326 [2024-12-13 05:52:08.700819] nvme.c: 951:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:36:09.326 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:36:09.326 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:36:09.326 Initializing NVMe Controllers 00:36:09.326 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:36:09.326 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:09.326 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:09.326 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:09.326 00:36:09.326 real 0m0.114s 00:36:09.326 user 0m0.045s 00:36:09.326 sys 0m0.069s 00:36:09.326 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:09.326 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:36:09.326 ************************************ 00:36:09.326 END TEST nvmf_target_disconnect_tc1 00:36:09.326 ************************************ 00:36:09.326 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:36:09.326 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:09.326 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:36:09.326 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:36:09.326 ************************************ 00:36:09.326 START TEST nvmf_target_disconnect_tc2 00:36:09.326 ************************************ 00:36:09.326 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:36:09.326 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:36:09.326 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:36:09.326 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:09.326 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:09.326 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:09.326 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=541971 00:36:09.326 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 541971 00:36:09.326 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:36:09.326 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 541971 ']' 00:36:09.326 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:09.326 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:09.326 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:09.326 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:09.326 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:09.326 05:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:09.326 [2024-12-13 05:52:08.838518] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:36:09.326 [2024-12-13 05:52:08.838556] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:09.326 [2024-12-13 05:52:08.913854] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:09.326 [2024-12-13 05:52:08.936169] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:09.326 [2024-12-13 05:52:08.936205] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
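Where tc1 above intentionally probed 10.0.0.2:4420 with no listener and required the reconnect example to fail (the NOT wrapper accepts es=1), tc2 first brings up a real target inside the namespace and provisions it over JSON-RPC, as the entries a few lines below show. The rpc_cmd calls in the trace map onto SPDK's stock scripts/rpc.py client roughly as follows (a sketch under that assumption; paths abbreviated):

    # start the target in the namespace, reactors on cores 4-7 (-m 0xF0)
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &

    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0        # 64 MiB RAM disk, 512 B blocks
    ./scripts/rpc.py nvmf_create_transport -t tcp -o
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

Once the listener is up, the test launches the reconnect example against the subsystem, sleeps, and then kill -9s the target (host/target_disconnect.sh@45) to provoke the disconnect the test is named for.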
00:36:09.326 [2024-12-13 05:52:08.936212] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:09.326 [2024-12-13 05:52:08.936218] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:09.326 [2024-12-13 05:52:08.936223] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:09.326 [2024-12-13 05:52:08.937732] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:36:09.326 [2024-12-13 05:52:08.937840] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:36:09.326 [2024-12-13 05:52:08.937947] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:36:09.326 [2024-12-13 05:52:08.937947] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 7 00:36:09.326 05:52:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:09.326 05:52:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:36:09.326 05:52:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:09.326 05:52:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:09.326 05:52:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:09.326 05:52:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:09.326 05:52:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:36:09.326 05:52:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:09.326 05:52:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:09.326 Malloc0 00:36:09.326 05:52:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:09.326 05:52:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:36:09.326 05:52:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:09.326 05:52:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:09.326 [2024-12-13 05:52:09.097773] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:09.326 05:52:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:09.326 05:52:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:36:09.326 05:52:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:09.326 05:52:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:09.326 05:52:09 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:09.326 05:52:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:09.326 05:52:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:09.326 05:52:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:09.326 05:52:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:09.326 05:52:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:09.326 05:52:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:09.326 05:52:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:09.326 [2024-12-13 05:52:09.126783] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:09.326 05:52:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:09.326 05:52:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:36:09.326 05:52:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:09.326 05:52:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:09.326 05:52:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:09.326 05:52:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=541994 00:36:09.326 05:52:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:36:09.326 05:52:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:11.234 05:52:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 541971 00:36:11.234 05:52:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:36:11.234 Write completed with error (sct=0, sc=8) 00:36:11.234 starting I/O failed 00:36:11.234 Read completed with error (sct=0, sc=8) 00:36:11.234 starting I/O failed 00:36:11.234 Write completed with error (sct=0, sc=8) 00:36:11.234 starting I/O failed 00:36:11.234 Read completed with error (sct=0, sc=8) 00:36:11.234 starting I/O failed 00:36:11.234 Write completed with error (sct=0, sc=8) 00:36:11.234 starting I/O failed 00:36:11.234 Write completed with error (sct=0, sc=8) 00:36:11.234 starting I/O failed 00:36:11.234 Write completed with 
error (sct=0, sc=8) 00:36:11.234 starting I/O failed 00:36:11.234 Read completed with error (sct=0, sc=8) 00:36:11.234 starting I/O failed 00:36:11.234 Write completed with error (sct=0, sc=8) 00:36:11.234 starting I/O failed 00:36:11.235 Write completed with error (sct=0, sc=8) 00:36:11.235 starting I/O failed 00:36:11.235 Read completed with error (sct=0, sc=8) 00:36:11.235 starting I/O failed 00:36:11.235 Read completed with error (sct=0, sc=8) 00:36:11.235 starting I/O failed 00:36:11.235 Write completed with error (sct=0, sc=8) 00:36:11.235 starting I/O failed 00:36:11.235 Read completed with error (sct=0, sc=8) 00:36:11.235 starting I/O failed 00:36:11.235 Write completed with error (sct=0, sc=8) 00:36:11.235 starting I/O failed 00:36:11.235 Write completed with error (sct=0, sc=8) 00:36:11.235 starting I/O failed 00:36:11.235 Write completed with error (sct=0, sc=8) 00:36:11.235 starting I/O failed 00:36:11.235 Write completed with error (sct=0, sc=8) 00:36:11.235 starting I/O failed 00:36:11.235 Read completed with error (sct=0, sc=8) 00:36:11.235 starting I/O failed 00:36:11.235 Write completed with error (sct=0, sc=8) 00:36:11.235 starting I/O failed 00:36:11.235 Read completed with error (sct=0, sc=8) 00:36:11.235 starting I/O failed 00:36:11.235 Write completed with error (sct=0, sc=8) 00:36:11.235 starting I/O failed 00:36:11.235 Read completed with error (sct=0, sc=8) 00:36:11.235 starting I/O failed 00:36:11.235 Write completed with error (sct=0, sc=8) 00:36:11.235 starting I/O failed 00:36:11.235 Read completed with error (sct=0, sc=8) 00:36:11.235 starting I/O failed 00:36:11.235 Read completed with error (sct=0, sc=8) 00:36:11.235 starting I/O failed 00:36:11.235 Write completed with error (sct=0, sc=8) 00:36:11.235 starting I/O failed 00:36:11.235 Read completed with error (sct=0, sc=8) 00:36:11.235 starting I/O failed 00:36:11.235 Write completed with error (sct=0, sc=8) 00:36:11.235 starting I/O failed 00:36:11.235 Write completed with error (sct=0, sc=8) 00:36:11.235 starting I/O failed 00:36:11.235 Write completed with error (sct=0, sc=8) 00:36:11.235 starting I/O failed 00:36:11.235 Read completed with error (sct=0, sc=8) 00:36:11.235 starting I/O failed 00:36:11.235 Read completed with error (sct=0, sc=8) 00:36:11.235 starting I/O failed 00:36:11.235 Read completed with error (sct=0, sc=8) 00:36:11.235 starting I/O failed 00:36:11.235 Read completed with error (sct=0, sc=8) 00:36:11.235 starting I/O failed 00:36:11.235 Read completed with error (sct=0, sc=8) 00:36:11.235 starting I/O failed 00:36:11.235 Read completed with error (sct=0, sc=8) 00:36:11.235 starting I/O failed 00:36:11.235 Read completed with error (sct=0, sc=8) 00:36:11.235 starting I/O failed 00:36:11.235 [2024-12-13 05:52:11.158475] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:11.235 Read completed with error (sct=0, sc=8) 00:36:11.235 starting I/O failed 00:36:11.235 Read completed with error (sct=0, sc=8) 00:36:11.235 starting I/O failed 00:36:11.235 Read completed with error (sct=0, sc=8) 00:36:11.235 starting I/O failed 00:36:11.235 Read completed with error (sct=0, sc=8) 00:36:11.235 starting I/O failed 00:36:11.235 Read completed with error (sct=0, sc=8) 00:36:11.235 starting I/O failed 00:36:11.235 Write completed with error (sct=0, sc=8) 00:36:11.235 starting I/O failed 00:36:11.235 Read completed with error (sct=0, sc=8) 00:36:11.235 starting I/O failed 00:36:11.235 
Write completed with error (sct=0, sc=8) 00:36:11.235 starting I/O failed 00:36:11.235 Write completed with error (sct=0, sc=8) 00:36:11.235 starting I/O failed 00:36:11.235 Read completed with error (sct=0, sc=8) 00:36:11.235 starting I/O failed 00:36:11.235 Write completed with error (sct=0, sc=8) 00:36:11.235 starting I/O failed 00:36:11.235 Read completed with error (sct=0, sc=8) 00:36:11.235 starting I/O failed 00:36:11.235 Write completed with error (sct=0, sc=8) 00:36:11.235 starting I/O failed 00:36:11.235 Write completed with error (sct=0, sc=8) 00:36:11.235 starting I/O failed 00:36:11.235 Read completed with error (sct=0, sc=8) 00:36:11.235 starting I/O failed 00:36:11.235 Write completed with error (sct=0, sc=8) 00:36:11.235 starting I/O failed 00:36:11.235 Read completed with error (sct=0, sc=8) 00:36:11.235 starting I/O failed 00:36:11.235 Write completed with error (sct=0, sc=8) 00:36:11.235 starting I/O failed 00:36:11.235 Write completed with error (sct=0, sc=8) 00:36:11.235 starting I/O failed 00:36:11.235 Write completed with error (sct=0, sc=8) 00:36:11.235 starting I/O failed 00:36:11.235 Write completed with error (sct=0, sc=8) 00:36:11.235 starting I/O failed 00:36:11.235 Read completed with error (sct=0, sc=8) 00:36:11.235 starting I/O failed 00:36:11.235 Read completed with error (sct=0, sc=8) 00:36:11.235 starting I/O failed 00:36:11.235 Write completed with error (sct=0, sc=8) 00:36:11.235 starting I/O failed 00:36:11.235 Write completed with error (sct=0, sc=8) 00:36:11.235 starting I/O failed 00:36:11.235 Read completed with error (sct=0, sc=8) 00:36:11.235 starting I/O failed 00:36:11.235 [2024-12-13 05:52:11.158674] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:11.235 Read completed with error (sct=0, sc=8) 00:36:11.235 starting I/O failed 00:36:11.235 Read completed with error (sct=0, sc=8) 00:36:11.235 starting I/O failed 00:36:11.235 Read completed with error (sct=0, sc=8) 00:36:11.235 starting I/O failed 00:36:11.235 Write completed with error (sct=0, sc=8) 00:36:11.235 starting I/O failed 00:36:11.235 Write completed with error (sct=0, sc=8) 00:36:11.235 starting I/O failed 00:36:11.235 Read completed with error (sct=0, sc=8) 00:36:11.235 starting I/O failed 00:36:11.235 Write completed with error (sct=0, sc=8) 00:36:11.235 starting I/O failed 00:36:11.235 Write completed with error (sct=0, sc=8) 00:36:11.235 starting I/O failed 00:36:11.235 Read completed with error (sct=0, sc=8) 00:36:11.235 starting I/O failed 00:36:11.235 Write completed with error (sct=0, sc=8) 00:36:11.235 starting I/O failed 00:36:11.235 Write completed with error (sct=0, sc=8) 00:36:11.235 starting I/O failed 00:36:11.235 Read completed with error (sct=0, sc=8) 00:36:11.235 starting I/O failed 00:36:11.235 Read completed with error (sct=0, sc=8) 00:36:11.235 starting I/O failed 00:36:11.235 Write completed with error (sct=0, sc=8) 00:36:11.235 starting I/O failed 00:36:11.235 Read completed with error (sct=0, sc=8) 00:36:11.235 starting I/O failed 00:36:11.235 Read completed with error (sct=0, sc=8) 00:36:11.235 starting I/O failed 00:36:11.235 Read completed with error (sct=0, sc=8) 00:36:11.235 starting I/O failed 00:36:11.235 Read completed with error (sct=0, sc=8) 00:36:11.235 starting I/O failed 00:36:11.235 Write completed with error (sct=0, sc=8) 00:36:11.235 starting I/O failed 00:36:11.235 Write completed with error (sct=0, sc=8) 00:36:11.235 starting 
I/O failed 00:36:11.235 Write completed with error (sct=0, sc=8) 00:36:11.235 starting I/O failed 00:36:11.235 Write completed with error (sct=0, sc=8) 00:36:11.235 starting I/O failed 00:36:11.235 Write completed with error (sct=0, sc=8) 00:36:11.235 starting I/O failed 00:36:11.235 Write completed with error (sct=0, sc=8) 00:36:11.235 starting I/O failed 00:36:11.235 Write completed with error (sct=0, sc=8) 00:36:11.235 starting I/O failed 00:36:11.235 Write completed with error (sct=0, sc=8) 00:36:11.235 starting I/O failed 00:36:11.235 Write completed with error (sct=0, sc=8) 00:36:11.235 starting I/O failed 00:36:11.235 Read completed with error (sct=0, sc=8) 00:36:11.235 starting I/O failed 00:36:11.235 Write completed with error (sct=0, sc=8) 00:36:11.235 starting I/O failed 00:36:11.235 Write completed with error (sct=0, sc=8) 00:36:11.235 starting I/O failed 00:36:11.235 Write completed with error (sct=0, sc=8) 00:36:11.236 starting I/O failed 00:36:11.236 Write completed with error (sct=0, sc=8) 00:36:11.236 starting I/O failed 00:36:11.236 [2024-12-13 05:52:11.158868] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.236 Read completed with error (sct=0, sc=8) 00:36:11.236 starting I/O failed 00:36:11.236 Read completed with error (sct=0, sc=8) 00:36:11.236 starting I/O failed 00:36:11.236 Read completed with error (sct=0, sc=8) 00:36:11.236 starting I/O failed 00:36:11.236 Read completed with error (sct=0, sc=8) 00:36:11.236 starting I/O failed 00:36:11.236 Read completed with error (sct=0, sc=8) 00:36:11.236 starting I/O failed 00:36:11.236 Read completed with error (sct=0, sc=8) 00:36:11.236 starting I/O failed 00:36:11.236 Read completed with error (sct=0, sc=8) 00:36:11.236 starting I/O failed 00:36:11.236 Write completed with error (sct=0, sc=8) 00:36:11.236 starting I/O failed 00:36:11.236 Write completed with error (sct=0, sc=8) 00:36:11.236 starting I/O failed 00:36:11.236 Write completed with error (sct=0, sc=8) 00:36:11.236 starting I/O failed 00:36:11.236 Write completed with error (sct=0, sc=8) 00:36:11.236 starting I/O failed 00:36:11.236 Write completed with error (sct=0, sc=8) 00:36:11.236 starting I/O failed 00:36:11.236 Read completed with error (sct=0, sc=8) 00:36:11.236 starting I/O failed 00:36:11.236 Write completed with error (sct=0, sc=8) 00:36:11.236 starting I/O failed 00:36:11.236 Write completed with error (sct=0, sc=8) 00:36:11.236 starting I/O failed 00:36:11.236 Write completed with error (sct=0, sc=8) 00:36:11.236 starting I/O failed 00:36:11.236 Write completed with error (sct=0, sc=8) 00:36:11.236 starting I/O failed 00:36:11.236 Read completed with error (sct=0, sc=8) 00:36:11.236 starting I/O failed 00:36:11.236 Read completed with error (sct=0, sc=8) 00:36:11.236 starting I/O failed 00:36:11.236 Read completed with error (sct=0, sc=8) 00:36:11.236 starting I/O failed 00:36:11.236 Read completed with error (sct=0, sc=8) 00:36:11.236 starting I/O failed 00:36:11.236 Write completed with error (sct=0, sc=8) 00:36:11.236 starting I/O failed 00:36:11.236 Read completed with error (sct=0, sc=8) 00:36:11.236 starting I/O failed 00:36:11.236 Read completed with error (sct=0, sc=8) 00:36:11.236 starting I/O failed 00:36:11.236 Read completed with error (sct=0, sc=8) 00:36:11.236 starting I/O failed 00:36:11.236 Read completed with error (sct=0, sc=8) 00:36:11.236 starting I/O failed 00:36:11.236 Read completed with error (sct=0, sc=8) 
00:36:11.236 starting I/O failed 00:36:11.236 Read completed with error (sct=0, sc=8) 00:36:11.236 starting I/O failed 00:36:11.236 Write completed with error (sct=0, sc=8) 00:36:11.236 starting I/O failed 00:36:11.236 Write completed with error (sct=0, sc=8) 00:36:11.236 starting I/O failed 00:36:11.236 Write completed with error (sct=0, sc=8) 00:36:11.236 starting I/O failed 00:36:11.236 Read completed with error (sct=0, sc=8) 00:36:11.236 starting I/O failed 00:36:11.236 [2024-12-13 05:52:11.159062] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:11.236 [2024-12-13 05:52:11.159266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.236 [2024-12-13 05:52:11.159287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.236 qpair failed and we were unable to recover it. 00:36:11.236 [2024-12-13 05:52:11.159393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.236 [2024-12-13 05:52:11.159403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.236 qpair failed and we were unable to recover it. 00:36:11.236 [2024-12-13 05:52:11.159554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.236 [2024-12-13 05:52:11.159565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.236 qpair failed and we were unable to recover it. 00:36:11.236 [2024-12-13 05:52:11.159652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.236 [2024-12-13 05:52:11.159662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.236 qpair failed and we were unable to recover it. 00:36:11.236 [2024-12-13 05:52:11.159753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.236 [2024-12-13 05:52:11.159762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.236 qpair failed and we were unable to recover it. 00:36:11.236 [2024-12-13 05:52:11.159906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.236 [2024-12-13 05:52:11.159915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.236 qpair failed and we were unable to recover it. 00:36:11.236 [2024-12-13 05:52:11.159998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.236 [2024-12-13 05:52:11.160007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.236 qpair failed and we were unable to recover it. 00:36:11.236 [2024-12-13 05:52:11.160088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.236 [2024-12-13 05:52:11.160097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.236 qpair failed and we were unable to recover it. 
00:36:11.236 [2024-12-13 05:52:11.160170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.236 [2024-12-13 05:52:11.160180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.236 qpair failed and we were unable to recover it. 00:36:11.236 [2024-12-13 05:52:11.160270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.236 [2024-12-13 05:52:11.160280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.236 qpair failed and we were unable to recover it. 00:36:11.236 [2024-12-13 05:52:11.160356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.236 [2024-12-13 05:52:11.160365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.236 qpair failed and we were unable to recover it. 00:36:11.236 [2024-12-13 05:52:11.160504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.236 [2024-12-13 05:52:11.160514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.236 qpair failed and we were unable to recover it. 00:36:11.236 [2024-12-13 05:52:11.160601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.236 [2024-12-13 05:52:11.160610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.236 qpair failed and we were unable to recover it. 00:36:11.236 [2024-12-13 05:52:11.160691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.236 [2024-12-13 05:52:11.160700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.236 qpair failed and we were unable to recover it. 00:36:11.236 [2024-12-13 05:52:11.160830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.236 [2024-12-13 05:52:11.160839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.236 qpair failed and we were unable to recover it. 00:36:11.236 [2024-12-13 05:52:11.160906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.236 [2024-12-13 05:52:11.160915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.236 qpair failed and we were unable to recover it. 00:36:11.236 [2024-12-13 05:52:11.160997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.236 [2024-12-13 05:52:11.161006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.236 qpair failed and we were unable to recover it. 00:36:11.236 [2024-12-13 05:52:11.161136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.236 [2024-12-13 05:52:11.161145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.236 qpair failed and we were unable to recover it. 
00:36:11.236 [2024-12-13 05:52:11.161234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.236 [2024-12-13 05:52:11.161244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.236 qpair failed and we were unable to recover it. 00:36:11.236 [2024-12-13 05:52:11.161329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.236 [2024-12-13 05:52:11.161338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.236 qpair failed and we were unable to recover it. 00:36:11.236 [2024-12-13 05:52:11.161473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.236 [2024-12-13 05:52:11.161482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.236 qpair failed and we were unable to recover it. 00:36:11.236 [2024-12-13 05:52:11.161556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.236 [2024-12-13 05:52:11.161565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.236 qpair failed and we were unable to recover it. 00:36:11.236 [2024-12-13 05:52:11.161694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.236 [2024-12-13 05:52:11.161703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.236 qpair failed and we were unable to recover it. 00:36:11.236 [2024-12-13 05:52:11.161864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.236 [2024-12-13 05:52:11.161874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.236 qpair failed and we were unable to recover it. 00:36:11.236 [2024-12-13 05:52:11.161961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.236 [2024-12-13 05:52:11.161971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.237 qpair failed and we were unable to recover it. 00:36:11.237 [2024-12-13 05:52:11.162056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.237 [2024-12-13 05:52:11.162066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.237 qpair failed and we were unable to recover it. 00:36:11.237 [2024-12-13 05:52:11.162193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.237 [2024-12-13 05:52:11.162203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.237 qpair failed and we were unable to recover it. 00:36:11.237 [2024-12-13 05:52:11.162272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.237 [2024-12-13 05:52:11.162282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.237 qpair failed and we were unable to recover it. 
00:36:11.237 [2024-12-13 05:52:11.162435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.237 [2024-12-13 05:52:11.162445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.237 qpair failed and we were unable to recover it. 00:36:11.237 [2024-12-13 05:52:11.162574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.237 [2024-12-13 05:52:11.162585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.237 qpair failed and we were unable to recover it. 00:36:11.237 [2024-12-13 05:52:11.162714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.237 [2024-12-13 05:52:11.162723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.237 qpair failed and we were unable to recover it. 00:36:11.237 [2024-12-13 05:52:11.162790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.237 [2024-12-13 05:52:11.162800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.237 qpair failed and we were unable to recover it. 00:36:11.237 [2024-12-13 05:52:11.162866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.237 [2024-12-13 05:52:11.162875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.237 qpair failed and we were unable to recover it. 00:36:11.237 [2024-12-13 05:52:11.162940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.237 [2024-12-13 05:52:11.162950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.237 qpair failed and we were unable to recover it. 00:36:11.237 [2024-12-13 05:52:11.163100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.237 [2024-12-13 05:52:11.163109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.237 qpair failed and we were unable to recover it. 00:36:11.237 [2024-12-13 05:52:11.163197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.237 [2024-12-13 05:52:11.163206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.237 qpair failed and we were unable to recover it. 00:36:11.237 [2024-12-13 05:52:11.163290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.237 [2024-12-13 05:52:11.163299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.237 qpair failed and we were unable to recover it. 00:36:11.237 [2024-12-13 05:52:11.163359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.237 [2024-12-13 05:52:11.163370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.237 qpair failed and we were unable to recover it. 
00:36:11.237 [2024-12-13 05:52:11.163510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.237 [2024-12-13 05:52:11.163519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.237 qpair failed and we were unable to recover it. 00:36:11.237 [2024-12-13 05:52:11.163596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.237 [2024-12-13 05:52:11.163605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.237 qpair failed and we were unable to recover it. 00:36:11.237 [2024-12-13 05:52:11.163691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.237 [2024-12-13 05:52:11.163700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.237 qpair failed and we were unable to recover it. 00:36:11.237 [2024-12-13 05:52:11.163916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.237 [2024-12-13 05:52:11.163927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.237 qpair failed and we were unable to recover it. 00:36:11.237 [2024-12-13 05:52:11.163994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.237 [2024-12-13 05:52:11.164003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.237 qpair failed and we were unable to recover it. 00:36:11.237 [2024-12-13 05:52:11.164058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.237 [2024-12-13 05:52:11.164067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.237 qpair failed and we were unable to recover it. 00:36:11.237 [2024-12-13 05:52:11.164124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.237 [2024-12-13 05:52:11.164133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.237 qpair failed and we were unable to recover it. 00:36:11.237 [2024-12-13 05:52:11.164198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.237 [2024-12-13 05:52:11.164207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.237 qpair failed and we were unable to recover it. 00:36:11.237 [2024-12-13 05:52:11.164281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.237 [2024-12-13 05:52:11.164290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.237 qpair failed and we were unable to recover it. 00:36:11.237 [2024-12-13 05:52:11.164504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.237 [2024-12-13 05:52:11.164514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.237 qpair failed and we were unable to recover it. 
00:36:11.237 [2024-12-13 05:52:11.164596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.237 [2024-12-13 05:52:11.164605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420
00:36:11.237 qpair failed and we were unable to recover it.
[... the same posix_sock_create / nvme_tcp_qpair_connect_sock failure pair repeats continuously for tqpair=0x7f8618000b90 through 2024-12-13 05:52:11.180312; every attempt targets addr=10.0.0.2, port=4420 and ends with "qpair failed and we were unable to recover it." ...]
00:36:11.241 [2024-12-13 05:52:11.180481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.241 [2024-12-13 05:52:11.180507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420
00:36:11.241 qpair failed and we were unable to recover it.
[... the identical failure pair repeats for tqpair=0x7f861c000b90 through 2024-12-13 05:52:11.186617, same address and port ...]
00:36:11.242 [2024-12-13 05:52:11.186720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.242 [2024-12-13 05:52:11.186758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420
00:36:11.242 qpair failed and we were unable to recover it.
[... the identical failure pair repeats for tqpair=0x15c76a0 through 2024-12-13 05:52:11.191669, each attempt ending "qpair failed and we were unable to recover it." ...]
00:36:11.243 [2024-12-13 05:52:11.191764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.243 [2024-12-13 05:52:11.191780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.243 qpair failed and we were unable to recover it. 00:36:11.243 [2024-12-13 05:52:11.191939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.243 [2024-12-13 05:52:11.191954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.243 qpair failed and we were unable to recover it. 00:36:11.243 [2024-12-13 05:52:11.192092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.243 [2024-12-13 05:52:11.192108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.243 qpair failed and we were unable to recover it. 00:36:11.243 [2024-12-13 05:52:11.192200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.243 [2024-12-13 05:52:11.192215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.243 qpair failed and we were unable to recover it. 00:36:11.243 [2024-12-13 05:52:11.192368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.243 [2024-12-13 05:52:11.192383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.243 qpair failed and we were unable to recover it. 00:36:11.243 [2024-12-13 05:52:11.192586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.243 [2024-12-13 05:52:11.192602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.243 qpair failed and we were unable to recover it. 00:36:11.243 [2024-12-13 05:52:11.192804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.243 [2024-12-13 05:52:11.192820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.243 qpair failed and we were unable to recover it. 00:36:11.243 [2024-12-13 05:52:11.192966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.243 [2024-12-13 05:52:11.192982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.243 qpair failed and we were unable to recover it. 00:36:11.243 [2024-12-13 05:52:11.193121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.243 [2024-12-13 05:52:11.193136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.243 qpair failed and we were unable to recover it. 00:36:11.243 [2024-12-13 05:52:11.193233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.243 [2024-12-13 05:52:11.193248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.243 qpair failed and we were unable to recover it. 
00:36:11.243 [2024-12-13 05:52:11.193416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.243 [2024-12-13 05:52:11.193431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.243 qpair failed and we were unable to recover it. 00:36:11.243 [2024-12-13 05:52:11.193532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.243 [2024-12-13 05:52:11.193547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.243 qpair failed and we were unable to recover it. 00:36:11.243 [2024-12-13 05:52:11.193685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.243 [2024-12-13 05:52:11.193701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.243 qpair failed and we were unable to recover it. 00:36:11.243 [2024-12-13 05:52:11.193859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.243 [2024-12-13 05:52:11.193874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.243 qpair failed and we were unable to recover it. 00:36:11.243 [2024-12-13 05:52:11.193965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.243 [2024-12-13 05:52:11.193980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.243 qpair failed and we were unable to recover it. 00:36:11.243 [2024-12-13 05:52:11.194130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.243 [2024-12-13 05:52:11.194146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.243 qpair failed and we were unable to recover it. 00:36:11.243 [2024-12-13 05:52:11.194227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.243 [2024-12-13 05:52:11.194242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.243 qpair failed and we were unable to recover it. 00:36:11.243 [2024-12-13 05:52:11.194330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.243 [2024-12-13 05:52:11.194346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.243 qpair failed and we were unable to recover it. 00:36:11.243 [2024-12-13 05:52:11.194425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.243 [2024-12-13 05:52:11.194440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.243 qpair failed and we were unable to recover it. 00:36:11.243 [2024-12-13 05:52:11.194585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.243 [2024-12-13 05:52:11.194600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.243 qpair failed and we were unable to recover it. 
00:36:11.243 [2024-12-13 05:52:11.194666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.243 [2024-12-13 05:52:11.194684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.243 qpair failed and we were unable to recover it. 00:36:11.243 [2024-12-13 05:52:11.194770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.243 [2024-12-13 05:52:11.194786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.243 qpair failed and we were unable to recover it. 00:36:11.243 [2024-12-13 05:52:11.194854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.243 [2024-12-13 05:52:11.194869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.243 qpair failed and we were unable to recover it. 00:36:11.243 [2024-12-13 05:52:11.194940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.243 [2024-12-13 05:52:11.194956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.243 qpair failed and we were unable to recover it. 00:36:11.243 [2024-12-13 05:52:11.195088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.243 [2024-12-13 05:52:11.195103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.243 qpair failed and we were unable to recover it. 00:36:11.243 [2024-12-13 05:52:11.195248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.243 [2024-12-13 05:52:11.195264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.243 qpair failed and we were unable to recover it. 00:36:11.243 [2024-12-13 05:52:11.195333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.243 [2024-12-13 05:52:11.195348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.243 qpair failed and we were unable to recover it. 00:36:11.243 [2024-12-13 05:52:11.195501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.243 [2024-12-13 05:52:11.195518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.243 qpair failed and we were unable to recover it. 00:36:11.243 [2024-12-13 05:52:11.195655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.243 [2024-12-13 05:52:11.195670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.243 qpair failed and we were unable to recover it. 00:36:11.243 [2024-12-13 05:52:11.195753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.243 [2024-12-13 05:52:11.195768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.243 qpair failed and we were unable to recover it. 
00:36:11.243 [2024-12-13 05:52:11.195920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.243 [2024-12-13 05:52:11.195936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.243 qpair failed and we were unable to recover it. 00:36:11.243 [2024-12-13 05:52:11.196087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.243 [2024-12-13 05:52:11.196102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.243 qpair failed and we were unable to recover it. 00:36:11.243 [2024-12-13 05:52:11.196192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.243 [2024-12-13 05:52:11.196207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.244 qpair failed and we were unable to recover it. 00:36:11.244 [2024-12-13 05:52:11.196302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.244 [2024-12-13 05:52:11.196317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.244 qpair failed and we were unable to recover it. 00:36:11.244 [2024-12-13 05:52:11.196458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.244 [2024-12-13 05:52:11.196474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.244 qpair failed and we were unable to recover it. 00:36:11.244 [2024-12-13 05:52:11.196617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.244 [2024-12-13 05:52:11.196633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.244 qpair failed and we were unable to recover it. 00:36:11.244 [2024-12-13 05:52:11.196798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.244 [2024-12-13 05:52:11.196813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.244 qpair failed and we were unable to recover it. 00:36:11.244 [2024-12-13 05:52:11.197057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.244 [2024-12-13 05:52:11.197073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.244 qpair failed and we were unable to recover it. 00:36:11.244 [2024-12-13 05:52:11.197210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.244 [2024-12-13 05:52:11.197225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.244 qpair failed and we were unable to recover it. 00:36:11.244 [2024-12-13 05:52:11.197387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.244 [2024-12-13 05:52:11.197402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.244 qpair failed and we were unable to recover it. 
00:36:11.244 [2024-12-13 05:52:11.197485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.244 [2024-12-13 05:52:11.197501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.244 qpair failed and we were unable to recover it. 00:36:11.244 [2024-12-13 05:52:11.197643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.244 [2024-12-13 05:52:11.197658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.244 qpair failed and we were unable to recover it. 00:36:11.244 [2024-12-13 05:52:11.197812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.244 [2024-12-13 05:52:11.197827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.244 qpair failed and we were unable to recover it. 00:36:11.244 [2024-12-13 05:52:11.197922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.244 [2024-12-13 05:52:11.197937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.244 qpair failed and we were unable to recover it. 00:36:11.244 [2024-12-13 05:52:11.198074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.244 [2024-12-13 05:52:11.198090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.244 qpair failed and we were unable to recover it. 00:36:11.244 [2024-12-13 05:52:11.198240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.244 [2024-12-13 05:52:11.198255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.244 qpair failed and we were unable to recover it. 00:36:11.244 [2024-12-13 05:52:11.198404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.244 [2024-12-13 05:52:11.198419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.244 qpair failed and we were unable to recover it. 00:36:11.244 [2024-12-13 05:52:11.198601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.244 [2024-12-13 05:52:11.198620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.244 qpair failed and we were unable to recover it. 00:36:11.244 [2024-12-13 05:52:11.198769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.244 [2024-12-13 05:52:11.198784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.244 qpair failed and we were unable to recover it. 00:36:11.244 [2024-12-13 05:52:11.198878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.244 [2024-12-13 05:52:11.198893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.244 qpair failed and we were unable to recover it. 
00:36:11.244 [2024-12-13 05:52:11.198982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.244 [2024-12-13 05:52:11.198998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.244 qpair failed and we were unable to recover it. 00:36:11.244 [2024-12-13 05:52:11.199243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.244 [2024-12-13 05:52:11.199258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.244 qpair failed and we were unable to recover it. 00:36:11.244 [2024-12-13 05:52:11.199428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.244 [2024-12-13 05:52:11.199443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.244 qpair failed and we were unable to recover it. 00:36:11.244 [2024-12-13 05:52:11.199606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.244 [2024-12-13 05:52:11.199621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.244 qpair failed and we were unable to recover it. 00:36:11.244 [2024-12-13 05:52:11.199713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.244 [2024-12-13 05:52:11.199729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.244 qpair failed and we were unable to recover it. 00:36:11.244 [2024-12-13 05:52:11.199810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.244 [2024-12-13 05:52:11.199826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.244 qpair failed and we were unable to recover it. 00:36:11.244 [2024-12-13 05:52:11.199985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.244 [2024-12-13 05:52:11.200000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.244 qpair failed and we were unable to recover it. 00:36:11.244 [2024-12-13 05:52:11.200158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.244 [2024-12-13 05:52:11.200173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.244 qpair failed and we were unable to recover it. 00:36:11.244 [2024-12-13 05:52:11.200275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.244 [2024-12-13 05:52:11.200290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.244 qpair failed and we were unable to recover it. 00:36:11.244 [2024-12-13 05:52:11.200371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.245 [2024-12-13 05:52:11.200386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.245 qpair failed and we were unable to recover it. 
00:36:11.245 [2024-12-13 05:52:11.200552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.245 [2024-12-13 05:52:11.200568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.245 qpair failed and we were unable to recover it. 00:36:11.245 [2024-12-13 05:52:11.200666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.245 [2024-12-13 05:52:11.200682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.245 qpair failed and we were unable to recover it. 00:36:11.245 [2024-12-13 05:52:11.200850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.245 [2024-12-13 05:52:11.200865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.245 qpair failed and we were unable to recover it. 00:36:11.245 [2024-12-13 05:52:11.201003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.245 [2024-12-13 05:52:11.201018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.245 qpair failed and we were unable to recover it. 00:36:11.245 [2024-12-13 05:52:11.201226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.245 [2024-12-13 05:52:11.201241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.245 qpair failed and we were unable to recover it. 00:36:11.245 [2024-12-13 05:52:11.201308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.245 [2024-12-13 05:52:11.201322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.245 qpair failed and we were unable to recover it. 00:36:11.245 [2024-12-13 05:52:11.201467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.245 [2024-12-13 05:52:11.201484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.245 qpair failed and we were unable to recover it. 00:36:11.245 [2024-12-13 05:52:11.201637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.245 [2024-12-13 05:52:11.201653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.245 qpair failed and we were unable to recover it. 00:36:11.245 [2024-12-13 05:52:11.201723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.245 [2024-12-13 05:52:11.201737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.245 qpair failed and we were unable to recover it. 00:36:11.245 [2024-12-13 05:52:11.201814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.245 [2024-12-13 05:52:11.201830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.245 qpair failed and we were unable to recover it. 
00:36:11.245 [2024-12-13 05:52:11.201978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.245 [2024-12-13 05:52:11.201993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.245 qpair failed and we were unable to recover it. 00:36:11.245 [2024-12-13 05:52:11.202160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.245 [2024-12-13 05:52:11.202175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.245 qpair failed and we were unable to recover it. 00:36:11.245 [2024-12-13 05:52:11.202267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.245 [2024-12-13 05:52:11.202283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.245 qpair failed and we were unable to recover it. 00:36:11.245 [2024-12-13 05:52:11.202363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.245 [2024-12-13 05:52:11.202378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.245 qpair failed and we were unable to recover it. 00:36:11.245 [2024-12-13 05:52:11.202516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.245 [2024-12-13 05:52:11.202532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.245 qpair failed and we were unable to recover it. 00:36:11.245 [2024-12-13 05:52:11.202632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.245 [2024-12-13 05:52:11.202648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.245 qpair failed and we were unable to recover it. 00:36:11.245 [2024-12-13 05:52:11.202750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.245 [2024-12-13 05:52:11.202766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.245 qpair failed and we were unable to recover it. 00:36:11.245 [2024-12-13 05:52:11.202837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.245 [2024-12-13 05:52:11.202851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.245 qpair failed and we were unable to recover it. 00:36:11.245 [2024-12-13 05:52:11.203012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.245 [2024-12-13 05:52:11.203028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.245 qpair failed and we were unable to recover it. 00:36:11.245 [2024-12-13 05:52:11.203247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.245 [2024-12-13 05:52:11.203262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.245 qpair failed and we were unable to recover it. 
00:36:11.245 [2024-12-13 05:52:11.203351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.245 [2024-12-13 05:52:11.203367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.245 qpair failed and we were unable to recover it. 00:36:11.245 [2024-12-13 05:52:11.203459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.245 [2024-12-13 05:52:11.203475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.245 qpair failed and we were unable to recover it. 00:36:11.245 [2024-12-13 05:52:11.203569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.245 [2024-12-13 05:52:11.203585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.245 qpair failed and we were unable to recover it. 00:36:11.245 [2024-12-13 05:52:11.203690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.245 [2024-12-13 05:52:11.203705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.245 qpair failed and we were unable to recover it. 00:36:11.245 [2024-12-13 05:52:11.203794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.245 [2024-12-13 05:52:11.203809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.245 qpair failed and we were unable to recover it. 00:36:11.245 [2024-12-13 05:52:11.203952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.245 [2024-12-13 05:52:11.203967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.245 qpair failed and we were unable to recover it. 00:36:11.245 [2024-12-13 05:52:11.204046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.245 [2024-12-13 05:52:11.204062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.245 qpair failed and we were unable to recover it. 00:36:11.245 [2024-12-13 05:52:11.204198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.245 [2024-12-13 05:52:11.204213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.245 qpair failed and we were unable to recover it. 00:36:11.245 [2024-12-13 05:52:11.204304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.245 [2024-12-13 05:52:11.204320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.245 qpair failed and we were unable to recover it. 00:36:11.245 [2024-12-13 05:52:11.204419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.245 [2024-12-13 05:52:11.204434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.245 qpair failed and we were unable to recover it. 
00:36:11.245 [2024-12-13 05:52:11.204525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.245 [2024-12-13 05:52:11.204543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.245 qpair failed and we were unable to recover it. 00:36:11.245 [2024-12-13 05:52:11.204697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.245 [2024-12-13 05:52:11.204711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.245 qpair failed and we were unable to recover it. 00:36:11.245 [2024-12-13 05:52:11.204921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.245 [2024-12-13 05:52:11.204936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.245 qpair failed and we were unable to recover it. 00:36:11.245 [2024-12-13 05:52:11.205076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.245 [2024-12-13 05:52:11.205091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.245 qpair failed and we were unable to recover it. 00:36:11.245 [2024-12-13 05:52:11.205297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.245 [2024-12-13 05:52:11.205312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.245 qpair failed and we were unable to recover it. 00:36:11.245 [2024-12-13 05:52:11.205376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.245 [2024-12-13 05:52:11.205390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.245 qpair failed and we were unable to recover it. 00:36:11.245 [2024-12-13 05:52:11.205655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.245 [2024-12-13 05:52:11.205671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.245 qpair failed and we were unable to recover it. 00:36:11.245 [2024-12-13 05:52:11.205770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.245 [2024-12-13 05:52:11.205784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.245 qpair failed and we were unable to recover it. 00:36:11.245 [2024-12-13 05:52:11.205933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.245 [2024-12-13 05:52:11.205949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.245 qpair failed and we were unable to recover it. 00:36:11.246 [2024-12-13 05:52:11.206119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.246 [2024-12-13 05:52:11.206135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.246 qpair failed and we were unable to recover it. 
00:36:11.246 [2024-12-13 05:52:11.206283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.246 [2024-12-13 05:52:11.206298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.246 qpair failed and we were unable to recover it. 00:36:11.246 [2024-12-13 05:52:11.206387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.246 [2024-12-13 05:52:11.206402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.246 qpair failed and we were unable to recover it. 00:36:11.246 [2024-12-13 05:52:11.206558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.246 [2024-12-13 05:52:11.206575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.246 qpair failed and we were unable to recover it. 00:36:11.246 [2024-12-13 05:52:11.206743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.246 [2024-12-13 05:52:11.206759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.246 qpair failed and we were unable to recover it. 00:36:11.246 [2024-12-13 05:52:11.206896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.246 [2024-12-13 05:52:11.206911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.246 qpair failed and we were unable to recover it. 00:36:11.246 [2024-12-13 05:52:11.207117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.246 [2024-12-13 05:52:11.207133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.246 qpair failed and we were unable to recover it. 00:36:11.246 [2024-12-13 05:52:11.207357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.246 [2024-12-13 05:52:11.207373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.246 qpair failed and we were unable to recover it. 00:36:11.246 [2024-12-13 05:52:11.207523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.246 [2024-12-13 05:52:11.207539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.246 qpair failed and we were unable to recover it. 00:36:11.246 [2024-12-13 05:52:11.207609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.246 [2024-12-13 05:52:11.207623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.246 qpair failed and we were unable to recover it. 00:36:11.246 [2024-12-13 05:52:11.207771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.246 [2024-12-13 05:52:11.207787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.246 qpair failed and we were unable to recover it. 
00:36:11.246 [2024-12-13 05:52:11.207881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.246 [2024-12-13 05:52:11.207896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.246 qpair failed and we were unable to recover it. 00:36:11.246 [2024-12-13 05:52:11.207971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.246 [2024-12-13 05:52:11.207985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.246 qpair failed and we were unable to recover it. 00:36:11.246 [2024-12-13 05:52:11.208055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.246 [2024-12-13 05:52:11.208070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.246 qpair failed and we were unable to recover it. 00:36:11.246 [2024-12-13 05:52:11.208146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.246 [2024-12-13 05:52:11.208161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.246 qpair failed and we were unable to recover it. 00:36:11.246 [2024-12-13 05:52:11.208347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.246 [2024-12-13 05:52:11.208363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.246 qpair failed and we were unable to recover it. 00:36:11.246 [2024-12-13 05:52:11.208519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.246 [2024-12-13 05:52:11.208538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.246 qpair failed and we were unable to recover it. 00:36:11.246 [2024-12-13 05:52:11.208699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.246 [2024-12-13 05:52:11.208715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.246 qpair failed and we were unable to recover it. 00:36:11.246 [2024-12-13 05:52:11.208869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.246 [2024-12-13 05:52:11.208885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.246 qpair failed and we were unable to recover it. 00:36:11.246 [2024-12-13 05:52:11.209040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.246 [2024-12-13 05:52:11.209056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.246 qpair failed and we were unable to recover it. 00:36:11.246 [2024-12-13 05:52:11.209148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.246 [2024-12-13 05:52:11.209162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.246 qpair failed and we were unable to recover it. 
00:36:11.246 [2024-12-13 05:52:11.209310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.246 [2024-12-13 05:52:11.209326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.246 qpair failed and we were unable to recover it. 00:36:11.246 [2024-12-13 05:52:11.209462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.246 [2024-12-13 05:52:11.209479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.246 qpair failed and we were unable to recover it. 00:36:11.246 [2024-12-13 05:52:11.209570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.246 [2024-12-13 05:52:11.209584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.246 qpair failed and we were unable to recover it. 00:36:11.246 [2024-12-13 05:52:11.209748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.246 [2024-12-13 05:52:11.209764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.246 qpair failed and we were unable to recover it. 00:36:11.246 [2024-12-13 05:52:11.209963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.246 [2024-12-13 05:52:11.209979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.246 qpair failed and we were unable to recover it. 00:36:11.246 [2024-12-13 05:52:11.210186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.246 [2024-12-13 05:52:11.210202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.246 qpair failed and we were unable to recover it. 00:36:11.246 [2024-12-13 05:52:11.210380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.246 [2024-12-13 05:52:11.210396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.246 qpair failed and we were unable to recover it. 00:36:11.246 [2024-12-13 05:52:11.210472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.246 [2024-12-13 05:52:11.210488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.246 qpair failed and we were unable to recover it. 00:36:11.246 [2024-12-13 05:52:11.210579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.246 [2024-12-13 05:52:11.210593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.246 qpair failed and we were unable to recover it. 00:36:11.246 [2024-12-13 05:52:11.210730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.246 [2024-12-13 05:52:11.210746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.246 qpair failed and we were unable to recover it. 
00:36:11.246 [2024-12-13 05:52:11.210837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.246 [2024-12-13 05:52:11.210851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420
00:36:11.246 qpair failed and we were unable to recover it.
[... the same three-message sequence (posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error; qpair failed and we were unable to recover it.) repeats continuously from 05:52:11.210 through 05:52:11.243, almost always for tqpair=0x15c76a0 and briefly, around 05:52:11.230-231, for tqpair=0x7f8618000b90, always targeting addr=10.0.0.2, port=4420 ...]
00:36:11.539 [2024-12-13 05:52:11.243231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.539 [2024-12-13 05:52:11.243246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420
00:36:11.539 qpair failed and we were unable to recover it.
00:36:11.539 [2024-12-13 05:52:11.243455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.539 [2024-12-13 05:52:11.243471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.539 qpair failed and we were unable to recover it. 00:36:11.539 [2024-12-13 05:52:11.243643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.539 [2024-12-13 05:52:11.243658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.539 qpair failed and we were unable to recover it. 00:36:11.539 [2024-12-13 05:52:11.243818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.539 [2024-12-13 05:52:11.243833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.539 qpair failed and we were unable to recover it. 00:36:11.539 [2024-12-13 05:52:11.243919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.539 [2024-12-13 05:52:11.243935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.539 qpair failed and we were unable to recover it. 00:36:11.539 [2024-12-13 05:52:11.244079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.539 [2024-12-13 05:52:11.244094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.539 qpair failed and we were unable to recover it. 00:36:11.539 [2024-12-13 05:52:11.244192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.539 [2024-12-13 05:52:11.244207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.539 qpair failed and we were unable to recover it. 00:36:11.539 [2024-12-13 05:52:11.244420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.539 [2024-12-13 05:52:11.244436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.539 qpair failed and we were unable to recover it. 00:36:11.539 [2024-12-13 05:52:11.244673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.539 [2024-12-13 05:52:11.244689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.539 qpair failed and we were unable to recover it. 00:36:11.539 [2024-12-13 05:52:11.244842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.539 [2024-12-13 05:52:11.244857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.539 qpair failed and we were unable to recover it. 00:36:11.539 [2024-12-13 05:52:11.244995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.540 [2024-12-13 05:52:11.245010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.540 qpair failed and we were unable to recover it. 
00:36:11.540 [2024-12-13 05:52:11.245097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.540 [2024-12-13 05:52:11.245126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.540 qpair failed and we were unable to recover it. 00:36:11.540 [2024-12-13 05:52:11.245249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.540 [2024-12-13 05:52:11.245280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.540 qpair failed and we were unable to recover it. 00:36:11.540 [2024-12-13 05:52:11.245575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.540 [2024-12-13 05:52:11.245609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.540 qpair failed and we were unable to recover it. 00:36:11.540 [2024-12-13 05:52:11.245738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.540 [2024-12-13 05:52:11.245753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.540 qpair failed and we were unable to recover it. 00:36:11.540 [2024-12-13 05:52:11.245848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.540 [2024-12-13 05:52:11.245864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.540 qpair failed and we were unable to recover it. 00:36:11.540 [2024-12-13 05:52:11.245961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.540 [2024-12-13 05:52:11.245976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.540 qpair failed and we were unable to recover it. 00:36:11.540 [2024-12-13 05:52:11.246115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.540 [2024-12-13 05:52:11.246131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.540 qpair failed and we were unable to recover it. 00:36:11.540 [2024-12-13 05:52:11.246286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.540 [2024-12-13 05:52:11.246302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.540 qpair failed and we were unable to recover it. 00:36:11.540 [2024-12-13 05:52:11.246539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.540 [2024-12-13 05:52:11.246572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.540 qpair failed and we were unable to recover it. 00:36:11.540 [2024-12-13 05:52:11.246828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.540 [2024-12-13 05:52:11.246864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.540 qpair failed and we were unable to recover it. 
00:36:11.540 [2024-12-13 05:52:11.246964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.540 [2024-12-13 05:52:11.246996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.540 qpair failed and we were unable to recover it. 00:36:11.540 [2024-12-13 05:52:11.247124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.540 [2024-12-13 05:52:11.247155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.540 qpair failed and we were unable to recover it. 00:36:11.540 [2024-12-13 05:52:11.247338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.540 [2024-12-13 05:52:11.247369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.540 qpair failed and we were unable to recover it. 00:36:11.540 [2024-12-13 05:52:11.247559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.540 [2024-12-13 05:52:11.247592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.540 qpair failed and we were unable to recover it. 00:36:11.540 [2024-12-13 05:52:11.247831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.540 [2024-12-13 05:52:11.247862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.540 qpair failed and we were unable to recover it. 00:36:11.540 [2024-12-13 05:52:11.248100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.540 [2024-12-13 05:52:11.248132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.540 qpair failed and we were unable to recover it. 00:36:11.540 [2024-12-13 05:52:11.248315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.540 [2024-12-13 05:52:11.248346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.540 qpair failed and we were unable to recover it. 00:36:11.540 [2024-12-13 05:52:11.248466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.540 [2024-12-13 05:52:11.248499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.540 qpair failed and we were unable to recover it. 00:36:11.540 [2024-12-13 05:52:11.248616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.540 [2024-12-13 05:52:11.248648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.540 qpair failed and we were unable to recover it. 00:36:11.540 [2024-12-13 05:52:11.248828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.540 [2024-12-13 05:52:11.248859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.540 qpair failed and we were unable to recover it. 
00:36:11.540 [2024-12-13 05:52:11.248967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.540 [2024-12-13 05:52:11.248983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.540 qpair failed and we were unable to recover it. 00:36:11.540 [2024-12-13 05:52:11.249118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.540 [2024-12-13 05:52:11.249133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.540 qpair failed and we were unable to recover it. 00:36:11.540 [2024-12-13 05:52:11.249273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.540 [2024-12-13 05:52:11.249289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.540 qpair failed and we were unable to recover it. 00:36:11.540 [2024-12-13 05:52:11.249425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.540 [2024-12-13 05:52:11.249440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.540 qpair failed and we were unable to recover it. 00:36:11.540 [2024-12-13 05:52:11.249585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.540 [2024-12-13 05:52:11.249601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.540 qpair failed and we were unable to recover it. 00:36:11.540 [2024-12-13 05:52:11.249738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.540 [2024-12-13 05:52:11.249753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.540 qpair failed and we were unable to recover it. 00:36:11.540 [2024-12-13 05:52:11.249840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.540 [2024-12-13 05:52:11.249855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.540 qpair failed and we were unable to recover it. 00:36:11.540 [2024-12-13 05:52:11.250030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.540 [2024-12-13 05:52:11.250046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.540 qpair failed and we were unable to recover it. 00:36:11.540 [2024-12-13 05:52:11.250273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.540 [2024-12-13 05:52:11.250289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.540 qpair failed and we were unable to recover it. 00:36:11.540 [2024-12-13 05:52:11.250370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.540 [2024-12-13 05:52:11.250384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.540 qpair failed and we were unable to recover it. 
00:36:11.540 [2024-12-13 05:52:11.250536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.540 [2024-12-13 05:52:11.250553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.540 qpair failed and we were unable to recover it. 00:36:11.540 [2024-12-13 05:52:11.250638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.540 [2024-12-13 05:52:11.250653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.540 qpair failed and we were unable to recover it. 00:36:11.540 [2024-12-13 05:52:11.250792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.540 [2024-12-13 05:52:11.250808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.540 qpair failed and we were unable to recover it. 00:36:11.540 [2024-12-13 05:52:11.250909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.540 [2024-12-13 05:52:11.250924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.540 qpair failed and we were unable to recover it. 00:36:11.540 [2024-12-13 05:52:11.251094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.540 [2024-12-13 05:52:11.251109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.540 qpair failed and we were unable to recover it. 00:36:11.540 [2024-12-13 05:52:11.251258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.540 [2024-12-13 05:52:11.251273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.540 qpair failed and we were unable to recover it. 00:36:11.540 [2024-12-13 05:52:11.251432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.541 [2024-12-13 05:52:11.251455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.541 qpair failed and we were unable to recover it. 00:36:11.541 [2024-12-13 05:52:11.251556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.541 [2024-12-13 05:52:11.251572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.541 qpair failed and we were unable to recover it. 00:36:11.541 [2024-12-13 05:52:11.251656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.541 [2024-12-13 05:52:11.251670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.541 qpair failed and we were unable to recover it. 00:36:11.541 [2024-12-13 05:52:11.251756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.541 [2024-12-13 05:52:11.251771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.541 qpair failed and we were unable to recover it. 
00:36:11.541 [2024-12-13 05:52:11.251854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.541 [2024-12-13 05:52:11.251869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.541 qpair failed and we were unable to recover it. 00:36:11.541 [2024-12-13 05:52:11.252005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.541 [2024-12-13 05:52:11.252020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.541 qpair failed and we were unable to recover it. 00:36:11.541 [2024-12-13 05:52:11.252182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.541 [2024-12-13 05:52:11.252213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.541 qpair failed and we were unable to recover it. 00:36:11.541 [2024-12-13 05:52:11.252337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.541 [2024-12-13 05:52:11.252369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.541 qpair failed and we were unable to recover it. 00:36:11.541 [2024-12-13 05:52:11.252486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.541 [2024-12-13 05:52:11.252519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.541 qpair failed and we were unable to recover it. 00:36:11.541 [2024-12-13 05:52:11.252701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.541 [2024-12-13 05:52:11.252733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.541 qpair failed and we were unable to recover it. 00:36:11.541 [2024-12-13 05:52:11.252919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.541 [2024-12-13 05:52:11.252935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.541 qpair failed and we were unable to recover it. 00:36:11.541 [2024-12-13 05:52:11.253012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.541 [2024-12-13 05:52:11.253026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.541 qpair failed and we were unable to recover it. 00:36:11.541 [2024-12-13 05:52:11.253179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.541 [2024-12-13 05:52:11.253194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.541 qpair failed and we were unable to recover it. 00:36:11.541 [2024-12-13 05:52:11.253281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.541 [2024-12-13 05:52:11.253297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.541 qpair failed and we were unable to recover it. 
00:36:11.541 [2024-12-13 05:52:11.253452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.541 [2024-12-13 05:52:11.253471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.541 qpair failed and we were unable to recover it. 00:36:11.541 [2024-12-13 05:52:11.253627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.541 [2024-12-13 05:52:11.253643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.541 qpair failed and we were unable to recover it. 00:36:11.541 [2024-12-13 05:52:11.253733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.541 [2024-12-13 05:52:11.253748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.541 qpair failed and we were unable to recover it. 00:36:11.541 [2024-12-13 05:52:11.253905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.541 [2024-12-13 05:52:11.253936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.541 qpair failed and we were unable to recover it. 00:36:11.541 [2024-12-13 05:52:11.254220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.541 [2024-12-13 05:52:11.254252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.541 qpair failed and we were unable to recover it. 00:36:11.541 [2024-12-13 05:52:11.254437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.541 [2024-12-13 05:52:11.254487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.541 qpair failed and we were unable to recover it. 00:36:11.541 [2024-12-13 05:52:11.254690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.541 [2024-12-13 05:52:11.254706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.541 qpair failed and we were unable to recover it. 00:36:11.541 [2024-12-13 05:52:11.254908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.541 [2024-12-13 05:52:11.254924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.541 qpair failed and we were unable to recover it. 00:36:11.541 [2024-12-13 05:52:11.255009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.541 [2024-12-13 05:52:11.255023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.541 qpair failed and we were unable to recover it. 00:36:11.541 [2024-12-13 05:52:11.255228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.541 [2024-12-13 05:52:11.255243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.541 qpair failed and we were unable to recover it. 
00:36:11.541 [2024-12-13 05:52:11.255380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.541 [2024-12-13 05:52:11.255396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.541 qpair failed and we were unable to recover it. 00:36:11.541 [2024-12-13 05:52:11.255548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.541 [2024-12-13 05:52:11.255565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.541 qpair failed and we were unable to recover it. 00:36:11.541 [2024-12-13 05:52:11.255702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.541 [2024-12-13 05:52:11.255717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.541 qpair failed and we were unable to recover it. 00:36:11.541 [2024-12-13 05:52:11.255812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.541 [2024-12-13 05:52:11.255827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.541 qpair failed and we were unable to recover it. 00:36:11.541 [2024-12-13 05:52:11.255976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.541 [2024-12-13 05:52:11.255992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.541 qpair failed and we were unable to recover it. 00:36:11.541 [2024-12-13 05:52:11.256142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.541 [2024-12-13 05:52:11.256157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.541 qpair failed and we were unable to recover it. 00:36:11.541 [2024-12-13 05:52:11.256294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.541 [2024-12-13 05:52:11.256309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.541 qpair failed and we were unable to recover it. 00:36:11.541 [2024-12-13 05:52:11.256402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.541 [2024-12-13 05:52:11.256418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.541 qpair failed and we were unable to recover it. 00:36:11.541 [2024-12-13 05:52:11.256496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.541 [2024-12-13 05:52:11.256511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.541 qpair failed and we were unable to recover it. 00:36:11.541 [2024-12-13 05:52:11.256650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.541 [2024-12-13 05:52:11.256665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.541 qpair failed and we were unable to recover it. 
00:36:11.541 [2024-12-13 05:52:11.256803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.541 [2024-12-13 05:52:11.256819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.541 qpair failed and we were unable to recover it. 00:36:11.541 [2024-12-13 05:52:11.256911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.541 [2024-12-13 05:52:11.256925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.541 qpair failed and we were unable to recover it. 00:36:11.541 [2024-12-13 05:52:11.257066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.541 [2024-12-13 05:52:11.257081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.541 qpair failed and we were unable to recover it. 00:36:11.541 [2024-12-13 05:52:11.257229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.541 [2024-12-13 05:52:11.257243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.541 qpair failed and we were unable to recover it. 00:36:11.541 [2024-12-13 05:52:11.257341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.541 [2024-12-13 05:52:11.257354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.541 qpair failed and we were unable to recover it. 00:36:11.541 [2024-12-13 05:52:11.257439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.541 [2024-12-13 05:52:11.257457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.542 qpair failed and we were unable to recover it. 00:36:11.542 [2024-12-13 05:52:11.257599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.542 [2024-12-13 05:52:11.257613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.542 qpair failed and we were unable to recover it. 00:36:11.542 [2024-12-13 05:52:11.257793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.542 [2024-12-13 05:52:11.257807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.542 qpair failed and we were unable to recover it. 00:36:11.542 [2024-12-13 05:52:11.257953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.542 [2024-12-13 05:52:11.257966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.542 qpair failed and we were unable to recover it. 00:36:11.542 [2024-12-13 05:52:11.258130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.542 [2024-12-13 05:52:11.258144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.542 qpair failed and we were unable to recover it. 
00:36:11.542 [2024-12-13 05:52:11.258304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.542 [2024-12-13 05:52:11.258318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.542 qpair failed and we were unable to recover it. 00:36:11.542 [2024-12-13 05:52:11.258399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.542 [2024-12-13 05:52:11.258413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.542 qpair failed and we were unable to recover it. 00:36:11.542 [2024-12-13 05:52:11.258580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.542 [2024-12-13 05:52:11.258594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.542 qpair failed and we were unable to recover it. 00:36:11.542 [2024-12-13 05:52:11.258774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.542 [2024-12-13 05:52:11.258788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.542 qpair failed and we were unable to recover it. 00:36:11.542 [2024-12-13 05:52:11.258875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.542 [2024-12-13 05:52:11.258889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.542 qpair failed and we were unable to recover it. 00:36:11.542 [2024-12-13 05:52:11.259026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.542 [2024-12-13 05:52:11.259040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.542 qpair failed and we were unable to recover it. 00:36:11.542 [2024-12-13 05:52:11.259189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.542 [2024-12-13 05:52:11.259203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.542 qpair failed and we were unable to recover it. 00:36:11.542 [2024-12-13 05:52:11.259360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.542 [2024-12-13 05:52:11.259375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.542 qpair failed and we were unable to recover it. 00:36:11.542 [2024-12-13 05:52:11.259538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.542 [2024-12-13 05:52:11.259555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.542 qpair failed and we were unable to recover it. 00:36:11.542 [2024-12-13 05:52:11.259717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.542 [2024-12-13 05:52:11.259731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.542 qpair failed and we were unable to recover it. 
00:36:11.542 [2024-12-13 05:52:11.259935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.542 [2024-12-13 05:52:11.259950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.542 qpair failed and we were unable to recover it. 00:36:11.542 [2024-12-13 05:52:11.260049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.542 [2024-12-13 05:52:11.260064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.542 qpair failed and we were unable to recover it. 00:36:11.542 [2024-12-13 05:52:11.260218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.542 [2024-12-13 05:52:11.260232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.542 qpair failed and we were unable to recover it. 00:36:11.542 [2024-12-13 05:52:11.260318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.542 [2024-12-13 05:52:11.260332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.542 qpair failed and we were unable to recover it. 00:36:11.542 [2024-12-13 05:52:11.260479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.542 [2024-12-13 05:52:11.260494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.542 qpair failed and we were unable to recover it. 00:36:11.542 [2024-12-13 05:52:11.260646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.542 [2024-12-13 05:52:11.260661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.542 qpair failed and we were unable to recover it. 00:36:11.542 [2024-12-13 05:52:11.260819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.542 [2024-12-13 05:52:11.260834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.542 qpair failed and we were unable to recover it. 00:36:11.542 [2024-12-13 05:52:11.260970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.542 [2024-12-13 05:52:11.260984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.542 qpair failed and we were unable to recover it. 00:36:11.542 [2024-12-13 05:52:11.261134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.542 [2024-12-13 05:52:11.261149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.542 qpair failed and we were unable to recover it. 00:36:11.542 [2024-12-13 05:52:11.261283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.542 [2024-12-13 05:52:11.261298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.542 qpair failed and we were unable to recover it. 
00:36:11.542 [2024-12-13 05:52:11.261396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.542 [2024-12-13 05:52:11.261409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.542 qpair failed and we were unable to recover it. 00:36:11.542 [2024-12-13 05:52:11.261639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.542 [2024-12-13 05:52:11.261655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.542 qpair failed and we were unable to recover it. 00:36:11.542 [2024-12-13 05:52:11.261725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.542 [2024-12-13 05:52:11.261739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.542 qpair failed and we were unable to recover it. 00:36:11.542 [2024-12-13 05:52:11.261891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.542 [2024-12-13 05:52:11.261906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.542 qpair failed and we were unable to recover it. 00:36:11.542 [2024-12-13 05:52:11.261992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.542 [2024-12-13 05:52:11.262010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.542 qpair failed and we were unable to recover it. 00:36:11.542 [2024-12-13 05:52:11.262149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.542 [2024-12-13 05:52:11.262164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.542 qpair failed and we were unable to recover it. 00:36:11.542 [2024-12-13 05:52:11.262303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.542 [2024-12-13 05:52:11.262317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.542 qpair failed and we were unable to recover it. 00:36:11.542 [2024-12-13 05:52:11.262419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.542 [2024-12-13 05:52:11.262433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.542 qpair failed and we were unable to recover it. 00:36:11.542 [2024-12-13 05:52:11.262584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.542 [2024-12-13 05:52:11.262600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.542 qpair failed and we were unable to recover it. 00:36:11.542 [2024-12-13 05:52:11.262828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.542 [2024-12-13 05:52:11.262842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.542 qpair failed and we were unable to recover it. 
00:36:11.542 [2024-12-13 05:52:11.262997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.542 [2024-12-13 05:52:11.263012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.543 qpair failed and we were unable to recover it. 00:36:11.543 [2024-12-13 05:52:11.263092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.543 [2024-12-13 05:52:11.263107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.543 qpair failed and we were unable to recover it. 00:36:11.543 [2024-12-13 05:52:11.263257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.543 [2024-12-13 05:52:11.263272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.543 qpair failed and we were unable to recover it. 00:36:11.543 [2024-12-13 05:52:11.263354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.543 [2024-12-13 05:52:11.263369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.543 qpair failed and we were unable to recover it. 00:36:11.543 [2024-12-13 05:52:11.263528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.543 [2024-12-13 05:52:11.263544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.543 qpair failed and we were unable to recover it. 00:36:11.543 [2024-12-13 05:52:11.263644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.543 [2024-12-13 05:52:11.263660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.543 qpair failed and we were unable to recover it. 00:36:11.543 [2024-12-13 05:52:11.263878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.543 [2024-12-13 05:52:11.263894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.543 qpair failed and we were unable to recover it. 00:36:11.543 [2024-12-13 05:52:11.263989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.543 [2024-12-13 05:52:11.264004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.543 qpair failed and we were unable to recover it. 00:36:11.543 [2024-12-13 05:52:11.264086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.543 [2024-12-13 05:52:11.264101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.543 qpair failed and we were unable to recover it. 00:36:11.543 [2024-12-13 05:52:11.264239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.543 [2024-12-13 05:52:11.264254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.543 qpair failed and we were unable to recover it. 
00:36:11.543 [2024-12-13 05:52:11.264401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.543 [2024-12-13 05:52:11.264417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420
00:36:11.543 qpair failed and we were unable to recover it.
00:36:11.543 [... the same three-message group (posix_sock_create: connect() failed, errno = 111, i.e. ECONNREFUSED; nvme_tcp_qpair_connect_sock: sock connection error with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously from 05:52:11.264 through 05:52:11.296, almost entirely for tqpair=0x15c76a0; one representative group is kept below for each of the other tqpair values that appear: 0x7f8618000b90, 0x7f861c000b90, and 0x7f8624000b90 ...]
00:36:11.544 [2024-12-13 05:52:11.272073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.544 [2024-12-13 05:52:11.272142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420
00:36:11.544 qpair failed and we were unable to recover it.
00:36:11.545 [2024-12-13 05:52:11.277954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.545 [2024-12-13 05:52:11.278026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420
00:36:11.545 qpair failed and we were unable to recover it.
00:36:11.546 [2024-12-13 05:52:11.285215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.546 [2024-12-13 05:52:11.285300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420
00:36:11.546 qpair failed and we were unable to recover it.
00:36:11.549 [2024-12-13 05:52:11.296601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.549 [2024-12-13 05:52:11.296618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420
00:36:11.549 qpair failed and we were unable to recover it.
00:36:11.549 [2024-12-13 05:52:11.296773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.549 [2024-12-13 05:52:11.296788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.549 qpair failed and we were unable to recover it. 00:36:11.549 [2024-12-13 05:52:11.296856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.549 [2024-12-13 05:52:11.296871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.549 qpair failed and we were unable to recover it. 00:36:11.549 [2024-12-13 05:52:11.296946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.549 [2024-12-13 05:52:11.296961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.549 qpair failed and we were unable to recover it. 00:36:11.549 [2024-12-13 05:52:11.297047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.549 [2024-12-13 05:52:11.297062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.549 qpair failed and we were unable to recover it. 00:36:11.549 [2024-12-13 05:52:11.297197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.549 [2024-12-13 05:52:11.297212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.549 qpair failed and we were unable to recover it. 00:36:11.549 [2024-12-13 05:52:11.297419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.549 [2024-12-13 05:52:11.297435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.549 qpair failed and we were unable to recover it. 00:36:11.549 [2024-12-13 05:52:11.297540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.549 [2024-12-13 05:52:11.297556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.549 qpair failed and we were unable to recover it. 00:36:11.549 [2024-12-13 05:52:11.297698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.549 [2024-12-13 05:52:11.297714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.549 qpair failed and we were unable to recover it. 00:36:11.549 [2024-12-13 05:52:11.297884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.549 [2024-12-13 05:52:11.297899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.549 qpair failed and we were unable to recover it. 00:36:11.549 [2024-12-13 05:52:11.298056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.549 [2024-12-13 05:52:11.298071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.549 qpair failed and we were unable to recover it. 
00:36:11.549 [2024-12-13 05:52:11.298211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.549 [2024-12-13 05:52:11.298226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.549 qpair failed and we were unable to recover it. 00:36:11.549 [2024-12-13 05:52:11.298372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.549 [2024-12-13 05:52:11.298388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.549 qpair failed and we were unable to recover it. 00:36:11.549 [2024-12-13 05:52:11.298566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.549 [2024-12-13 05:52:11.298583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.549 qpair failed and we were unable to recover it. 00:36:11.549 [2024-12-13 05:52:11.298719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.549 [2024-12-13 05:52:11.298735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.549 qpair failed and we were unable to recover it. 00:36:11.549 [2024-12-13 05:52:11.298874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.549 [2024-12-13 05:52:11.298889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.549 qpair failed and we were unable to recover it. 00:36:11.549 [2024-12-13 05:52:11.298973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.549 [2024-12-13 05:52:11.298988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.549 qpair failed and we were unable to recover it. 00:36:11.549 [2024-12-13 05:52:11.299140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.549 [2024-12-13 05:52:11.299156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.549 qpair failed and we were unable to recover it. 00:36:11.549 [2024-12-13 05:52:11.299325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.549 [2024-12-13 05:52:11.299340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.549 qpair failed and we were unable to recover it. 00:36:11.549 [2024-12-13 05:52:11.299497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.549 [2024-12-13 05:52:11.299516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.549 qpair failed and we were unable to recover it. 00:36:11.549 [2024-12-13 05:52:11.299661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.549 [2024-12-13 05:52:11.299677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.549 qpair failed and we were unable to recover it. 
00:36:11.549 [2024-12-13 05:52:11.299773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.549 [2024-12-13 05:52:11.299788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.549 qpair failed and we were unable to recover it. 00:36:11.549 [2024-12-13 05:52:11.300019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.549 [2024-12-13 05:52:11.300034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.549 qpair failed and we were unable to recover it. 00:36:11.549 [2024-12-13 05:52:11.300201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.549 [2024-12-13 05:52:11.300217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.549 qpair failed and we were unable to recover it. 00:36:11.549 [2024-12-13 05:52:11.300377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.549 [2024-12-13 05:52:11.300392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.549 qpair failed and we were unable to recover it. 00:36:11.549 [2024-12-13 05:52:11.300473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.549 [2024-12-13 05:52:11.300488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.549 qpair failed and we were unable to recover it. 00:36:11.549 [2024-12-13 05:52:11.300642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.549 [2024-12-13 05:52:11.300657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.549 qpair failed and we were unable to recover it. 00:36:11.549 [2024-12-13 05:52:11.300745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.549 [2024-12-13 05:52:11.300760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.549 qpair failed and we were unable to recover it. 00:36:11.549 [2024-12-13 05:52:11.300996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.549 [2024-12-13 05:52:11.301013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.549 qpair failed and we were unable to recover it. 00:36:11.549 [2024-12-13 05:52:11.301146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.549 [2024-12-13 05:52:11.301161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.549 qpair failed and we were unable to recover it. 00:36:11.549 [2024-12-13 05:52:11.301300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.549 [2024-12-13 05:52:11.301315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.549 qpair failed and we were unable to recover it. 
00:36:11.549 [2024-12-13 05:52:11.301457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.549 [2024-12-13 05:52:11.301474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.549 qpair failed and we were unable to recover it. 00:36:11.549 [2024-12-13 05:52:11.301568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.549 [2024-12-13 05:52:11.301583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.549 qpair failed and we were unable to recover it. 00:36:11.549 [2024-12-13 05:52:11.301656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.549 [2024-12-13 05:52:11.301670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.549 qpair failed and we were unable to recover it. 00:36:11.549 [2024-12-13 05:52:11.301764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.549 [2024-12-13 05:52:11.301781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.549 qpair failed and we were unable to recover it. 00:36:11.549 [2024-12-13 05:52:11.301912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.549 [2024-12-13 05:52:11.301928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.549 qpair failed and we were unable to recover it. 00:36:11.549 [2024-12-13 05:52:11.302005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.550 [2024-12-13 05:52:11.302021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.550 qpair failed and we were unable to recover it. 00:36:11.550 [2024-12-13 05:52:11.302229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.550 [2024-12-13 05:52:11.302245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.550 qpair failed and we were unable to recover it. 00:36:11.550 [2024-12-13 05:52:11.302473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.550 [2024-12-13 05:52:11.302490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.550 qpair failed and we were unable to recover it. 00:36:11.550 [2024-12-13 05:52:11.302640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.550 [2024-12-13 05:52:11.302656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.550 qpair failed and we were unable to recover it. 00:36:11.550 [2024-12-13 05:52:11.302791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.550 [2024-12-13 05:52:11.302809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.550 qpair failed and we were unable to recover it. 
00:36:11.550 [2024-12-13 05:52:11.302889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.550 [2024-12-13 05:52:11.302903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.550 qpair failed and we were unable to recover it. 00:36:11.550 [2024-12-13 05:52:11.303061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.550 [2024-12-13 05:52:11.303077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.550 qpair failed and we were unable to recover it. 00:36:11.550 [2024-12-13 05:52:11.303229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.550 [2024-12-13 05:52:11.303245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.550 qpair failed and we were unable to recover it. 00:36:11.550 [2024-12-13 05:52:11.303390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.550 [2024-12-13 05:52:11.303406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.550 qpair failed and we were unable to recover it. 00:36:11.550 [2024-12-13 05:52:11.303483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.550 [2024-12-13 05:52:11.303498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.550 qpair failed and we were unable to recover it. 00:36:11.550 [2024-12-13 05:52:11.303637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.550 [2024-12-13 05:52:11.303655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.550 qpair failed and we were unable to recover it. 00:36:11.550 [2024-12-13 05:52:11.303830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.550 [2024-12-13 05:52:11.303846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.550 qpair failed and we were unable to recover it. 00:36:11.550 [2024-12-13 05:52:11.303921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.550 [2024-12-13 05:52:11.303936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.550 qpair failed and we were unable to recover it. 00:36:11.550 [2024-12-13 05:52:11.304022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.550 [2024-12-13 05:52:11.304039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.550 qpair failed and we were unable to recover it. 00:36:11.550 [2024-12-13 05:52:11.304199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.550 [2024-12-13 05:52:11.304214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.550 qpair failed and we were unable to recover it. 
00:36:11.550 [2024-12-13 05:52:11.304361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.550 [2024-12-13 05:52:11.304377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.550 qpair failed and we were unable to recover it. 00:36:11.550 [2024-12-13 05:52:11.304476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.550 [2024-12-13 05:52:11.304492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.550 qpair failed and we were unable to recover it. 00:36:11.550 [2024-12-13 05:52:11.304638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.550 [2024-12-13 05:52:11.304654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.550 qpair failed and we were unable to recover it. 00:36:11.550 [2024-12-13 05:52:11.304742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.550 [2024-12-13 05:52:11.304757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.550 qpair failed and we were unable to recover it. 00:36:11.550 [2024-12-13 05:52:11.304897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.550 [2024-12-13 05:52:11.304913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.550 qpair failed and we were unable to recover it. 00:36:11.550 [2024-12-13 05:52:11.305005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.550 [2024-12-13 05:52:11.305021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.550 qpair failed and we were unable to recover it. 00:36:11.550 [2024-12-13 05:52:11.305159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.550 [2024-12-13 05:52:11.305176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.550 qpair failed and we were unable to recover it. 00:36:11.550 [2024-12-13 05:52:11.305246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.550 [2024-12-13 05:52:11.305260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.550 qpair failed and we were unable to recover it. 00:36:11.550 [2024-12-13 05:52:11.305407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.550 [2024-12-13 05:52:11.305424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.550 qpair failed and we were unable to recover it. 00:36:11.550 [2024-12-13 05:52:11.305528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.550 [2024-12-13 05:52:11.305544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.550 qpair failed and we were unable to recover it. 
00:36:11.550 [2024-12-13 05:52:11.305722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.550 [2024-12-13 05:52:11.305738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.550 qpair failed and we were unable to recover it. 00:36:11.550 [2024-12-13 05:52:11.305914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.550 [2024-12-13 05:52:11.305930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.550 qpair failed and we were unable to recover it. 00:36:11.550 [2024-12-13 05:52:11.306076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.550 [2024-12-13 05:52:11.306093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.550 qpair failed and we were unable to recover it. 00:36:11.550 [2024-12-13 05:52:11.306231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.550 [2024-12-13 05:52:11.306247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.550 qpair failed and we were unable to recover it. 00:36:11.550 [2024-12-13 05:52:11.306329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.550 [2024-12-13 05:52:11.306344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.550 qpair failed and we were unable to recover it. 00:36:11.550 [2024-12-13 05:52:11.306515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.550 [2024-12-13 05:52:11.306531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.550 qpair failed and we were unable to recover it. 00:36:11.550 [2024-12-13 05:52:11.306677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.550 [2024-12-13 05:52:11.306693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.550 qpair failed and we were unable to recover it. 00:36:11.550 [2024-12-13 05:52:11.306784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.550 [2024-12-13 05:52:11.306799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.550 qpair failed and we were unable to recover it. 00:36:11.550 [2024-12-13 05:52:11.306883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.550 [2024-12-13 05:52:11.306899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.550 qpair failed and we were unable to recover it. 00:36:11.550 [2024-12-13 05:52:11.306975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.550 [2024-12-13 05:52:11.306990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.550 qpair failed and we were unable to recover it. 
00:36:11.550 [2024-12-13 05:52:11.307068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.550 [2024-12-13 05:52:11.307083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.550 qpair failed and we were unable to recover it. 00:36:11.550 [2024-12-13 05:52:11.307223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.550 [2024-12-13 05:52:11.307239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.550 qpair failed and we were unable to recover it. 00:36:11.550 [2024-12-13 05:52:11.307314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.550 [2024-12-13 05:52:11.307331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.550 qpair failed and we were unable to recover it. 00:36:11.550 [2024-12-13 05:52:11.307485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.550 [2024-12-13 05:52:11.307501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.550 qpair failed and we were unable to recover it. 00:36:11.550 [2024-12-13 05:52:11.307641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.550 [2024-12-13 05:52:11.307657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.550 qpair failed and we were unable to recover it. 00:36:11.551 [2024-12-13 05:52:11.307744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.551 [2024-12-13 05:52:11.307760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.551 qpair failed and we were unable to recover it. 00:36:11.551 [2024-12-13 05:52:11.307831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.551 [2024-12-13 05:52:11.307846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.551 qpair failed and we were unable to recover it. 00:36:11.551 [2024-12-13 05:52:11.307984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.551 [2024-12-13 05:52:11.308000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.551 qpair failed and we were unable to recover it. 00:36:11.551 [2024-12-13 05:52:11.308183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.551 [2024-12-13 05:52:11.308199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.551 qpair failed and we were unable to recover it. 00:36:11.551 [2024-12-13 05:52:11.308299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.551 [2024-12-13 05:52:11.308314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.551 qpair failed and we were unable to recover it. 
00:36:11.551 [2024-12-13 05:52:11.308542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.551 [2024-12-13 05:52:11.308559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.551 qpair failed and we were unable to recover it. 00:36:11.551 [2024-12-13 05:52:11.308722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.551 [2024-12-13 05:52:11.308738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.551 qpair failed and we were unable to recover it. 00:36:11.551 [2024-12-13 05:52:11.308968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.551 [2024-12-13 05:52:11.308984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.551 qpair failed and we were unable to recover it. 00:36:11.551 [2024-12-13 05:52:11.309208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.551 [2024-12-13 05:52:11.309224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.551 qpair failed and we were unable to recover it. 00:36:11.551 [2024-12-13 05:52:11.309316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.551 [2024-12-13 05:52:11.309332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.551 qpair failed and we were unable to recover it. 00:36:11.551 [2024-12-13 05:52:11.309421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.551 [2024-12-13 05:52:11.309437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.551 qpair failed and we were unable to recover it. 00:36:11.551 [2024-12-13 05:52:11.309616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.551 [2024-12-13 05:52:11.309632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.551 qpair failed and we were unable to recover it. 00:36:11.551 [2024-12-13 05:52:11.309772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.551 [2024-12-13 05:52:11.309788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.551 qpair failed and we were unable to recover it. 00:36:11.551 [2024-12-13 05:52:11.309921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.551 [2024-12-13 05:52:11.309936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.551 qpair failed and we were unable to recover it. 00:36:11.551 [2024-12-13 05:52:11.310138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.551 [2024-12-13 05:52:11.310154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.551 qpair failed and we were unable to recover it. 
00:36:11.551 [2024-12-13 05:52:11.310306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.551 [2024-12-13 05:52:11.310321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.551 qpair failed and we were unable to recover it. 00:36:11.551 [2024-12-13 05:52:11.310470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.551 [2024-12-13 05:52:11.310486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.551 qpair failed and we were unable to recover it. 00:36:11.551 [2024-12-13 05:52:11.310571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.551 [2024-12-13 05:52:11.310587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.551 qpair failed and we were unable to recover it. 00:36:11.551 [2024-12-13 05:52:11.310670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.551 [2024-12-13 05:52:11.310686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.551 qpair failed and we were unable to recover it. 00:36:11.551 [2024-12-13 05:52:11.310768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.551 [2024-12-13 05:52:11.310782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.551 qpair failed and we were unable to recover it. 00:36:11.551 [2024-12-13 05:52:11.310865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.551 [2024-12-13 05:52:11.310881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.551 qpair failed and we were unable to recover it. 00:36:11.551 [2024-12-13 05:52:11.311036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.551 [2024-12-13 05:52:11.311052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.551 qpair failed and we were unable to recover it. 00:36:11.551 [2024-12-13 05:52:11.311255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.551 [2024-12-13 05:52:11.311271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.551 qpair failed and we were unable to recover it. 00:36:11.551 [2024-12-13 05:52:11.311486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.551 [2024-12-13 05:52:11.311503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.551 qpair failed and we were unable to recover it. 00:36:11.551 [2024-12-13 05:52:11.311588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.551 [2024-12-13 05:52:11.311603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.551 qpair failed and we were unable to recover it. 
00:36:11.551 [2024-12-13 05:52:11.311740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.551 [2024-12-13 05:52:11.311755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.551 qpair failed and we were unable to recover it. 00:36:11.551 [2024-12-13 05:52:11.311928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.551 [2024-12-13 05:52:11.311944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.551 qpair failed and we were unable to recover it. 00:36:11.551 [2024-12-13 05:52:11.312036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.551 [2024-12-13 05:52:11.312052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.551 qpair failed and we were unable to recover it. 00:36:11.551 [2024-12-13 05:52:11.312219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.551 [2024-12-13 05:52:11.312235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.551 qpair failed and we were unable to recover it. 00:36:11.551 [2024-12-13 05:52:11.312385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.551 [2024-12-13 05:52:11.312402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.551 qpair failed and we were unable to recover it. 00:36:11.551 [2024-12-13 05:52:11.312491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.551 [2024-12-13 05:52:11.312507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.551 qpair failed and we were unable to recover it. 00:36:11.551 [2024-12-13 05:52:11.312600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.551 [2024-12-13 05:52:11.312615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.551 qpair failed and we were unable to recover it. 00:36:11.551 [2024-12-13 05:52:11.312749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.551 [2024-12-13 05:52:11.312765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.551 qpair failed and we were unable to recover it. 00:36:11.551 [2024-12-13 05:52:11.312849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.551 [2024-12-13 05:52:11.312864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.551 qpair failed and we were unable to recover it. 00:36:11.551 [2024-12-13 05:52:11.313097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.551 [2024-12-13 05:52:11.313112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.551 qpair failed and we were unable to recover it. 
00:36:11.551 [2024-12-13 05:52:11.313265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.551 [2024-12-13 05:52:11.313281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.551 qpair failed and we were unable to recover it. 00:36:11.551 [2024-12-13 05:52:11.313382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.551 [2024-12-13 05:52:11.313397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.551 qpair failed and we were unable to recover it. 00:36:11.552 [2024-12-13 05:52:11.313499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.552 [2024-12-13 05:52:11.313515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.552 qpair failed and we were unable to recover it. 00:36:11.552 [2024-12-13 05:52:11.313666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.552 [2024-12-13 05:52:11.313684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.552 qpair failed and we were unable to recover it. 00:36:11.552 [2024-12-13 05:52:11.313754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.552 [2024-12-13 05:52:11.313768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.552 qpair failed and we were unable to recover it. 00:36:11.552 [2024-12-13 05:52:11.313905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.552 [2024-12-13 05:52:11.313921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.552 qpair failed and we were unable to recover it. 00:36:11.552 [2024-12-13 05:52:11.314130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.552 [2024-12-13 05:52:11.314145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.552 qpair failed and we were unable to recover it. 00:36:11.552 [2024-12-13 05:52:11.314287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.552 [2024-12-13 05:52:11.314303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.552 qpair failed and we were unable to recover it. 00:36:11.552 [2024-12-13 05:52:11.314485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.552 [2024-12-13 05:52:11.314519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.552 qpair failed and we were unable to recover it. 00:36:11.552 [2024-12-13 05:52:11.314758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.552 [2024-12-13 05:52:11.314790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.552 qpair failed and we were unable to recover it. 
00:36:11.552 [2024-12-13 05:52:11.315053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.552 [2024-12-13 05:52:11.315085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.552 qpair failed and we were unable to recover it. 00:36:11.552 [2024-12-13 05:52:11.315255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.552 [2024-12-13 05:52:11.315287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.552 qpair failed and we were unable to recover it. 00:36:11.552 [2024-12-13 05:52:11.315391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.552 [2024-12-13 05:52:11.315422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.552 qpair failed and we were unable to recover it. 00:36:11.552 [2024-12-13 05:52:11.315592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.552 [2024-12-13 05:52:11.315625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.552 qpair failed and we were unable to recover it. 00:36:11.552 [2024-12-13 05:52:11.315813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.552 [2024-12-13 05:52:11.315845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.552 qpair failed and we were unable to recover it. 00:36:11.552 [2024-12-13 05:52:11.316026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.552 [2024-12-13 05:52:11.316057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.552 qpair failed and we were unable to recover it. 00:36:11.552 [2024-12-13 05:52:11.316170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.552 [2024-12-13 05:52:11.316186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.552 qpair failed and we were unable to recover it. 00:36:11.552 [2024-12-13 05:52:11.316281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.552 [2024-12-13 05:52:11.316297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.552 qpair failed and we were unable to recover it. 00:36:11.552 [2024-12-13 05:52:11.316390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.552 [2024-12-13 05:52:11.316405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.552 qpair failed and we were unable to recover it. 00:36:11.552 [2024-12-13 05:52:11.316558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.552 [2024-12-13 05:52:11.316575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.552 qpair failed and we were unable to recover it. 
00:36:11.552 [2024-12-13 05:52:11.316662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.552 [2024-12-13 05:52:11.316677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420
00:36:11.552 qpair failed and we were unable to recover it.
[... the same three-line error sequence repeats, with successive timestamps, for every reconnect attempt against tqpair=0x15c76a0 between 05:52:11.316 and 05:52:11.348 ...]
00:36:11.558 [2024-12-13 05:52:11.348754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.558 [2024-12-13 05:52:11.348771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420
00:36:11.558 qpair failed and we were unable to recover it.
00:36:11.558 [2024-12-13 05:52:11.348855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.558 [2024-12-13 05:52:11.348871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.558 qpair failed and we were unable to recover it. 00:36:11.558 [2024-12-13 05:52:11.349008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.558 [2024-12-13 05:52:11.349025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.558 qpair failed and we were unable to recover it. 00:36:11.558 [2024-12-13 05:52:11.349108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.558 [2024-12-13 05:52:11.349124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.558 qpair failed and we were unable to recover it. 00:36:11.558 [2024-12-13 05:52:11.349201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.558 [2024-12-13 05:52:11.349218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.558 qpair failed and we were unable to recover it. 00:36:11.558 [2024-12-13 05:52:11.349306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.558 [2024-12-13 05:52:11.349323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.558 qpair failed and we were unable to recover it. 00:36:11.558 [2024-12-13 05:52:11.349395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.558 [2024-12-13 05:52:11.349409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.558 qpair failed and we were unable to recover it. 00:36:11.558 [2024-12-13 05:52:11.349557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.558 [2024-12-13 05:52:11.349575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.558 qpair failed and we were unable to recover it. 00:36:11.558 [2024-12-13 05:52:11.349732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.558 [2024-12-13 05:52:11.349749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.558 qpair failed and we were unable to recover it. 00:36:11.558 [2024-12-13 05:52:11.349912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.558 [2024-12-13 05:52:11.349928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.558 qpair failed and we were unable to recover it. 00:36:11.558 [2024-12-13 05:52:11.350013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.558 [2024-12-13 05:52:11.350029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.558 qpair failed and we were unable to recover it. 
00:36:11.558 [2024-12-13 05:52:11.350177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.558 [2024-12-13 05:52:11.350196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.558 qpair failed and we were unable to recover it. 00:36:11.558 [2024-12-13 05:52:11.350367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.558 [2024-12-13 05:52:11.350399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.558 qpair failed and we were unable to recover it. 00:36:11.558 [2024-12-13 05:52:11.350603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.558 [2024-12-13 05:52:11.350637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.558 qpair failed and we were unable to recover it. 00:36:11.558 [2024-12-13 05:52:11.350820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.558 [2024-12-13 05:52:11.350840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.558 qpair failed and we were unable to recover it. 00:36:11.558 [2024-12-13 05:52:11.350923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.558 [2024-12-13 05:52:11.350938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.558 qpair failed and we were unable to recover it. 00:36:11.558 [2024-12-13 05:52:11.351121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.558 [2024-12-13 05:52:11.351138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.558 qpair failed and we were unable to recover it. 00:36:11.558 [2024-12-13 05:52:11.351284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.558 [2024-12-13 05:52:11.351321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.558 qpair failed and we were unable to recover it. 00:36:11.558 [2024-12-13 05:52:11.351563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.558 [2024-12-13 05:52:11.351603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.558 qpair failed and we were unable to recover it. 00:36:11.558 [2024-12-13 05:52:11.351780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.558 [2024-12-13 05:52:11.351812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.558 qpair failed and we were unable to recover it. 00:36:11.558 [2024-12-13 05:52:11.351995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.558 [2024-12-13 05:52:11.352011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.558 qpair failed and we were unable to recover it. 
00:36:11.558 [2024-12-13 05:52:11.352220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.558 [2024-12-13 05:52:11.352251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.558 qpair failed and we were unable to recover it. 00:36:11.558 [2024-12-13 05:52:11.352377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.558 [2024-12-13 05:52:11.352409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.558 qpair failed and we were unable to recover it. 00:36:11.558 [2024-12-13 05:52:11.352601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.558 [2024-12-13 05:52:11.352635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.558 qpair failed and we were unable to recover it. 00:36:11.558 [2024-12-13 05:52:11.352829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.558 [2024-12-13 05:52:11.352847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.558 qpair failed and we were unable to recover it. 00:36:11.558 [2024-12-13 05:52:11.352931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.558 [2024-12-13 05:52:11.352948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.558 qpair failed and we were unable to recover it. 00:36:11.558 [2024-12-13 05:52:11.353321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.558 [2024-12-13 05:52:11.353341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.558 qpair failed and we were unable to recover it. 00:36:11.558 [2024-12-13 05:52:11.353440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.559 [2024-12-13 05:52:11.353466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.559 qpair failed and we were unable to recover it. 00:36:11.559 [2024-12-13 05:52:11.353606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.559 [2024-12-13 05:52:11.353623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.559 qpair failed and we were unable to recover it. 00:36:11.559 [2024-12-13 05:52:11.353696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.559 [2024-12-13 05:52:11.353711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.559 qpair failed and we were unable to recover it. 00:36:11.559 [2024-12-13 05:52:11.353939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.559 [2024-12-13 05:52:11.353956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.559 qpair failed and we were unable to recover it. 
00:36:11.559 [2024-12-13 05:52:11.354024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.559 [2024-12-13 05:52:11.354044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.559 qpair failed and we were unable to recover it. 00:36:11.559 [2024-12-13 05:52:11.354184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.559 [2024-12-13 05:52:11.354201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.559 qpair failed and we were unable to recover it. 00:36:11.559 [2024-12-13 05:52:11.354300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.559 [2024-12-13 05:52:11.354317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.559 qpair failed and we were unable to recover it. 00:36:11.559 [2024-12-13 05:52:11.354561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.559 [2024-12-13 05:52:11.354578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.559 qpair failed and we were unable to recover it. 00:36:11.559 [2024-12-13 05:52:11.354665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.559 [2024-12-13 05:52:11.354680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.559 qpair failed and we were unable to recover it. 00:36:11.559 [2024-12-13 05:52:11.354821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.559 [2024-12-13 05:52:11.354838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.559 qpair failed and we were unable to recover it. 00:36:11.559 [2024-12-13 05:52:11.354936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.559 [2024-12-13 05:52:11.354953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.559 qpair failed and we were unable to recover it. 00:36:11.559 [2024-12-13 05:52:11.355041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.559 [2024-12-13 05:52:11.355055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.559 qpair failed and we were unable to recover it. 00:36:11.559 [2024-12-13 05:52:11.355270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.559 [2024-12-13 05:52:11.355342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.559 qpair failed and we were unable to recover it. 00:36:11.559 [2024-12-13 05:52:11.355505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.559 [2024-12-13 05:52:11.355543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.559 qpair failed and we were unable to recover it. 
00:36:11.559 [2024-12-13 05:52:11.355738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.559 [2024-12-13 05:52:11.355771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.559 qpair failed and we were unable to recover it. 00:36:11.559 [2024-12-13 05:52:11.355902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.559 [2024-12-13 05:52:11.355934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.559 qpair failed and we were unable to recover it. 00:36:11.559 [2024-12-13 05:52:11.356099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.559 [2024-12-13 05:52:11.356118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.559 qpair failed and we were unable to recover it. 00:36:11.559 [2024-12-13 05:52:11.356197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.559 [2024-12-13 05:52:11.356211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.559 qpair failed and we were unable to recover it. 00:36:11.559 [2024-12-13 05:52:11.356359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.559 [2024-12-13 05:52:11.356379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.559 qpair failed and we were unable to recover it. 00:36:11.559 [2024-12-13 05:52:11.356583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.559 [2024-12-13 05:52:11.356600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.559 qpair failed and we were unable to recover it. 00:36:11.559 [2024-12-13 05:52:11.356696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.559 [2024-12-13 05:52:11.356711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.559 qpair failed and we were unable to recover it. 00:36:11.559 [2024-12-13 05:52:11.356785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.559 [2024-12-13 05:52:11.356801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.559 qpair failed and we were unable to recover it. 00:36:11.559 [2024-12-13 05:52:11.356933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.559 [2024-12-13 05:52:11.356949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.559 qpair failed and we were unable to recover it. 00:36:11.559 [2024-12-13 05:52:11.357030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.559 [2024-12-13 05:52:11.357046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.559 qpair failed and we were unable to recover it. 
00:36:11.559 [2024-12-13 05:52:11.357191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.559 [2024-12-13 05:52:11.357207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.559 qpair failed and we were unable to recover it. 00:36:11.559 [2024-12-13 05:52:11.357353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.559 [2024-12-13 05:52:11.357369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.559 qpair failed and we were unable to recover it. 00:36:11.559 [2024-12-13 05:52:11.357446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.559 [2024-12-13 05:52:11.357466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.559 qpair failed and we were unable to recover it. 00:36:11.559 [2024-12-13 05:52:11.357572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.559 [2024-12-13 05:52:11.357588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.559 qpair failed and we were unable to recover it. 00:36:11.559 [2024-12-13 05:52:11.357685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.559 [2024-12-13 05:52:11.357701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.559 qpair failed and we were unable to recover it. 00:36:11.559 [2024-12-13 05:52:11.357787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.559 [2024-12-13 05:52:11.357801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.559 qpair failed and we were unable to recover it. 00:36:11.559 [2024-12-13 05:52:11.357884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.559 [2024-12-13 05:52:11.357900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.559 qpair failed and we were unable to recover it. 00:36:11.559 [2024-12-13 05:52:11.358055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.559 [2024-12-13 05:52:11.358071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.559 qpair failed and we were unable to recover it. 00:36:11.559 [2024-12-13 05:52:11.358147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.559 [2024-12-13 05:52:11.358162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.559 qpair failed and we were unable to recover it. 00:36:11.559 [2024-12-13 05:52:11.358261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.559 [2024-12-13 05:52:11.358277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.559 qpair failed and we were unable to recover it. 
00:36:11.559 [2024-12-13 05:52:11.358369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.559 [2024-12-13 05:52:11.358386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.559 qpair failed and we were unable to recover it. 00:36:11.559 [2024-12-13 05:52:11.358478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.559 [2024-12-13 05:52:11.358502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.559 qpair failed and we were unable to recover it. 00:36:11.559 [2024-12-13 05:52:11.358590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.559 [2024-12-13 05:52:11.358606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.559 qpair failed and we were unable to recover it. 00:36:11.559 [2024-12-13 05:52:11.358682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.559 [2024-12-13 05:52:11.358697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.559 qpair failed and we were unable to recover it. 00:36:11.559 [2024-12-13 05:52:11.358771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.559 [2024-12-13 05:52:11.358785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.559 qpair failed and we were unable to recover it. 00:36:11.559 [2024-12-13 05:52:11.358948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.560 [2024-12-13 05:52:11.358964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.560 qpair failed and we were unable to recover it. 00:36:11.560 [2024-12-13 05:52:11.359047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.560 [2024-12-13 05:52:11.359061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.560 qpair failed and we were unable to recover it. 00:36:11.560 [2024-12-13 05:52:11.359147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.560 [2024-12-13 05:52:11.359163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.560 qpair failed and we were unable to recover it. 00:36:11.560 [2024-12-13 05:52:11.359254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.560 [2024-12-13 05:52:11.359271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.560 qpair failed and we were unable to recover it. 00:36:11.560 [2024-12-13 05:52:11.359365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.560 [2024-12-13 05:52:11.359382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.560 qpair failed and we were unable to recover it. 
00:36:11.560 [2024-12-13 05:52:11.359477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.560 [2024-12-13 05:52:11.359492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.560 qpair failed and we were unable to recover it. 00:36:11.560 [2024-12-13 05:52:11.359576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.560 [2024-12-13 05:52:11.359594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.560 qpair failed and we were unable to recover it. 00:36:11.560 [2024-12-13 05:52:11.359743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.560 [2024-12-13 05:52:11.359760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.560 qpair failed and we were unable to recover it. 00:36:11.560 [2024-12-13 05:52:11.359840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.560 [2024-12-13 05:52:11.359855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.560 qpair failed and we were unable to recover it. 00:36:11.560 [2024-12-13 05:52:11.359947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.560 [2024-12-13 05:52:11.359962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.560 qpair failed and we were unable to recover it. 00:36:11.560 [2024-12-13 05:52:11.360042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.560 [2024-12-13 05:52:11.360058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.560 qpair failed and we were unable to recover it. 00:36:11.560 [2024-12-13 05:52:11.360140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.560 [2024-12-13 05:52:11.360155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.560 qpair failed and we were unable to recover it. 00:36:11.560 [2024-12-13 05:52:11.360242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.560 [2024-12-13 05:52:11.360258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.560 qpair failed and we were unable to recover it. 00:36:11.560 [2024-12-13 05:52:11.360408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.560 [2024-12-13 05:52:11.360424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.560 qpair failed and we were unable to recover it. 00:36:11.560 [2024-12-13 05:52:11.360514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.560 [2024-12-13 05:52:11.360529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.560 qpair failed and we were unable to recover it. 
00:36:11.560 [2024-12-13 05:52:11.360609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.560 [2024-12-13 05:52:11.360624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.560 qpair failed and we were unable to recover it. 00:36:11.560 [2024-12-13 05:52:11.360707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.560 [2024-12-13 05:52:11.360722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.560 qpair failed and we were unable to recover it. 00:36:11.560 [2024-12-13 05:52:11.360874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.560 [2024-12-13 05:52:11.360889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.560 qpair failed and we were unable to recover it. 00:36:11.560 [2024-12-13 05:52:11.360957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.560 [2024-12-13 05:52:11.360972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.560 qpair failed and we were unable to recover it. 00:36:11.560 [2024-12-13 05:52:11.361044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.560 [2024-12-13 05:52:11.361060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.560 qpair failed and we were unable to recover it. 00:36:11.560 [2024-12-13 05:52:11.361134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.560 [2024-12-13 05:52:11.361148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.560 qpair failed and we were unable to recover it. 00:36:11.560 [2024-12-13 05:52:11.361238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.560 [2024-12-13 05:52:11.361254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.560 qpair failed and we were unable to recover it. 00:36:11.560 [2024-12-13 05:52:11.361390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.560 [2024-12-13 05:52:11.361405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.560 qpair failed and we were unable to recover it. 00:36:11.560 [2024-12-13 05:52:11.361481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.560 [2024-12-13 05:52:11.361497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.560 qpair failed and we were unable to recover it. 00:36:11.560 [2024-12-13 05:52:11.361591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.560 [2024-12-13 05:52:11.361606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.560 qpair failed and we were unable to recover it. 
00:36:11.560 [2024-12-13 05:52:11.361696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.560 [2024-12-13 05:52:11.361710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.560 qpair failed and we were unable to recover it. 00:36:11.560 [2024-12-13 05:52:11.361780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.560 [2024-12-13 05:52:11.361794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.560 qpair failed and we were unable to recover it. 00:36:11.560 [2024-12-13 05:52:11.361874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.560 [2024-12-13 05:52:11.361889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.560 qpair failed and we were unable to recover it. 00:36:11.560 [2024-12-13 05:52:11.362023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.560 [2024-12-13 05:52:11.362040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.560 qpair failed and we were unable to recover it. 00:36:11.560 [2024-12-13 05:52:11.362186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.560 [2024-12-13 05:52:11.362202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.560 qpair failed and we were unable to recover it. 00:36:11.560 [2024-12-13 05:52:11.362273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.560 [2024-12-13 05:52:11.362289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.560 qpair failed and we were unable to recover it. 00:36:11.560 [2024-12-13 05:52:11.362378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.560 [2024-12-13 05:52:11.362393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.560 qpair failed and we were unable to recover it. 00:36:11.560 [2024-12-13 05:52:11.362575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.560 [2024-12-13 05:52:11.362593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.560 qpair failed and we were unable to recover it. 00:36:11.560 [2024-12-13 05:52:11.362680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.560 [2024-12-13 05:52:11.362695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.560 qpair failed and we were unable to recover it. 00:36:11.560 [2024-12-13 05:52:11.362846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.560 [2024-12-13 05:52:11.362862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.560 qpair failed and we were unable to recover it. 
00:36:11.560 [2024-12-13 05:52:11.362950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.560 [2024-12-13 05:52:11.362965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.560 qpair failed and we were unable to recover it. 00:36:11.560 [2024-12-13 05:52:11.363054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.560 [2024-12-13 05:52:11.363071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.560 qpair failed and we were unable to recover it. 00:36:11.560 [2024-12-13 05:52:11.363149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.560 [2024-12-13 05:52:11.363164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.560 qpair failed and we were unable to recover it. 00:36:11.561 [2024-12-13 05:52:11.363247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.561 [2024-12-13 05:52:11.363261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.561 qpair failed and we were unable to recover it. 00:36:11.561 [2024-12-13 05:52:11.363346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.561 [2024-12-13 05:52:11.363362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.561 qpair failed and we were unable to recover it. 00:36:11.561 [2024-12-13 05:52:11.363474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.561 [2024-12-13 05:52:11.363490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.561 qpair failed and we were unable to recover it. 00:36:11.561 [2024-12-13 05:52:11.363567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.561 [2024-12-13 05:52:11.363584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.561 qpair failed and we were unable to recover it. 00:36:11.561 [2024-12-13 05:52:11.363717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.561 [2024-12-13 05:52:11.363732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.561 qpair failed and we were unable to recover it. 00:36:11.561 [2024-12-13 05:52:11.363800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.561 [2024-12-13 05:52:11.363816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.561 qpair failed and we were unable to recover it. 00:36:11.561 [2024-12-13 05:52:11.363904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.561 [2024-12-13 05:52:11.363920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.561 qpair failed and we were unable to recover it. 
00:36:11.561 [2024-12-13 05:52:11.364013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.561 [2024-12-13 05:52:11.364029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.561 qpair failed and we were unable to recover it. 00:36:11.561 [2024-12-13 05:52:11.364098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.561 [2024-12-13 05:52:11.364113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.561 qpair failed and we were unable to recover it. 00:36:11.561 [2024-12-13 05:52:11.364188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.561 [2024-12-13 05:52:11.364206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.561 qpair failed and we were unable to recover it. 00:36:11.561 [2024-12-13 05:52:11.364362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.561 [2024-12-13 05:52:11.364378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.561 qpair failed and we were unable to recover it. 00:36:11.561 [2024-12-13 05:52:11.364592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.561 [2024-12-13 05:52:11.364609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.561 qpair failed and we were unable to recover it. 00:36:11.561 [2024-12-13 05:52:11.364687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.561 [2024-12-13 05:52:11.364701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.561 qpair failed and we were unable to recover it. 00:36:11.561 [2024-12-13 05:52:11.364775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.561 [2024-12-13 05:52:11.364792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.561 qpair failed and we were unable to recover it. 00:36:11.561 [2024-12-13 05:52:11.364881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.561 [2024-12-13 05:52:11.364898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.561 qpair failed and we were unable to recover it. 00:36:11.561 [2024-12-13 05:52:11.364993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.561 [2024-12-13 05:52:11.365008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.561 qpair failed and we were unable to recover it. 00:36:11.561 [2024-12-13 05:52:11.365088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.561 [2024-12-13 05:52:11.365104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.561 qpair failed and we were unable to recover it. 
00:36:11.561 [2024-12-13 05:52:11.365203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.561 [2024-12-13 05:52:11.365219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.561 qpair failed and we were unable to recover it. 00:36:11.561 [2024-12-13 05:52:11.365291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.561 [2024-12-13 05:52:11.365307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.561 qpair failed and we were unable to recover it. 00:36:11.561 [2024-12-13 05:52:11.365375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.561 [2024-12-13 05:52:11.365390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.561 qpair failed and we were unable to recover it. 00:36:11.561 [2024-12-13 05:52:11.365538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.561 [2024-12-13 05:52:11.365555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.561 qpair failed and we were unable to recover it. 00:36:11.561 [2024-12-13 05:52:11.365656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.561 [2024-12-13 05:52:11.365672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.561 qpair failed and we were unable to recover it. 00:36:11.561 [2024-12-13 05:52:11.365759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.561 [2024-12-13 05:52:11.365776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.561 qpair failed and we were unable to recover it. 00:36:11.561 [2024-12-13 05:52:11.365866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.561 [2024-12-13 05:52:11.365882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.561 qpair failed and we were unable to recover it. 00:36:11.561 [2024-12-13 05:52:11.365951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.561 [2024-12-13 05:52:11.365966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.561 qpair failed and we were unable to recover it. 00:36:11.561 [2024-12-13 05:52:11.366100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.561 [2024-12-13 05:52:11.366116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.561 qpair failed and we were unable to recover it. 00:36:11.561 [2024-12-13 05:52:11.366192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.561 [2024-12-13 05:52:11.366207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.561 qpair failed and we were unable to recover it. 
00:36:11.561 [2024-12-13 05:52:11.366280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.561 [2024-12-13 05:52:11.366296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.561 qpair failed and we were unable to recover it.
[... the identical three-line failure (posix_sock_create: connect() failed, errno = 111 -> nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 -> "qpair failed and we were unable to recover it.") repeats back-to-back for every reconnect attempt from 05:52:11.366 through 05:52:11.395; only the first and last occurrences are shown ...]
00:36:11.567 [2024-12-13 05:52:11.395699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.567 [2024-12-13 05:52:11.395715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.567 qpair failed and we were unable to recover it.
00:36:11.567 [2024-12-13 05:52:11.395810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.567 [2024-12-13 05:52:11.395824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.567 qpair failed and we were unable to recover it. 00:36:11.567 [2024-12-13 05:52:11.395907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.567 [2024-12-13 05:52:11.395922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.567 qpair failed and we were unable to recover it. 00:36:11.567 [2024-12-13 05:52:11.396051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.567 [2024-12-13 05:52:11.396066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.567 qpair failed and we were unable to recover it. 00:36:11.567 [2024-12-13 05:52:11.396270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.567 [2024-12-13 05:52:11.396287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.567 qpair failed and we were unable to recover it. 00:36:11.567 [2024-12-13 05:52:11.396436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.567 [2024-12-13 05:52:11.396457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.567 qpair failed and we were unable to recover it. 00:36:11.567 [2024-12-13 05:52:11.396538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.567 [2024-12-13 05:52:11.396552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.567 qpair failed and we were unable to recover it. 00:36:11.567 [2024-12-13 05:52:11.396643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.567 [2024-12-13 05:52:11.396658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.567 qpair failed and we were unable to recover it. 00:36:11.567 [2024-12-13 05:52:11.396806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.567 [2024-12-13 05:52:11.396822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.567 qpair failed and we were unable to recover it. 00:36:11.567 [2024-12-13 05:52:11.396891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.567 [2024-12-13 05:52:11.396906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.567 qpair failed and we were unable to recover it. 00:36:11.567 [2024-12-13 05:52:11.396987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.567 [2024-12-13 05:52:11.397001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.567 qpair failed and we were unable to recover it. 
00:36:11.567 [2024-12-13 05:52:11.397088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.567 [2024-12-13 05:52:11.397102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.567 qpair failed and we were unable to recover it. 00:36:11.567 [2024-12-13 05:52:11.397305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.567 [2024-12-13 05:52:11.397345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.567 qpair failed and we were unable to recover it. 00:36:11.567 [2024-12-13 05:52:11.397476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.567 [2024-12-13 05:52:11.397509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.567 qpair failed and we were unable to recover it. 00:36:11.567 [2024-12-13 05:52:11.397691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.567 [2024-12-13 05:52:11.397722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.567 qpair failed and we were unable to recover it. 00:36:11.567 [2024-12-13 05:52:11.397832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.567 [2024-12-13 05:52:11.397863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.567 qpair failed and we were unable to recover it. 00:36:11.567 [2024-12-13 05:52:11.397970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.567 [2024-12-13 05:52:11.398002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.567 qpair failed and we were unable to recover it. 00:36:11.567 [2024-12-13 05:52:11.398100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.567 [2024-12-13 05:52:11.398117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.567 qpair failed and we were unable to recover it. 00:36:11.567 [2024-12-13 05:52:11.398257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.567 [2024-12-13 05:52:11.398273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.567 qpair failed and we were unable to recover it. 00:36:11.567 [2024-12-13 05:52:11.398482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.567 [2024-12-13 05:52:11.398500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.567 qpair failed and we were unable to recover it. 00:36:11.567 [2024-12-13 05:52:11.398671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.567 [2024-12-13 05:52:11.398687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.567 qpair failed and we were unable to recover it. 
00:36:11.567 [2024-12-13 05:52:11.398822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.567 [2024-12-13 05:52:11.398838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.567 qpair failed and we were unable to recover it. 00:36:11.568 [2024-12-13 05:52:11.398915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.568 [2024-12-13 05:52:11.398929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.568 qpair failed and we were unable to recover it. 00:36:11.568 [2024-12-13 05:52:11.399000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.568 [2024-12-13 05:52:11.399015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.568 qpair failed and we were unable to recover it. 00:36:11.568 [2024-12-13 05:52:11.399097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.568 [2024-12-13 05:52:11.399111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.568 qpair failed and we were unable to recover it. 00:36:11.568 [2024-12-13 05:52:11.399187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.568 [2024-12-13 05:52:11.399201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.568 qpair failed and we were unable to recover it. 00:36:11.568 [2024-12-13 05:52:11.399352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.568 [2024-12-13 05:52:11.399367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.568 qpair failed and we were unable to recover it. 00:36:11.568 [2024-12-13 05:52:11.399620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.568 [2024-12-13 05:52:11.399638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.568 qpair failed and we were unable to recover it. 00:36:11.568 [2024-12-13 05:52:11.399723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.568 [2024-12-13 05:52:11.399737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.568 qpair failed and we were unable to recover it. 00:36:11.568 [2024-12-13 05:52:11.399880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.568 [2024-12-13 05:52:11.399895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.568 qpair failed and we were unable to recover it. 00:36:11.568 [2024-12-13 05:52:11.400039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.568 [2024-12-13 05:52:11.400055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.568 qpair failed and we were unable to recover it. 
00:36:11.568 [2024-12-13 05:52:11.400206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.568 [2024-12-13 05:52:11.400222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.568 qpair failed and we were unable to recover it. 00:36:11.568 [2024-12-13 05:52:11.400357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.568 [2024-12-13 05:52:11.400373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.568 qpair failed and we were unable to recover it. 00:36:11.568 [2024-12-13 05:52:11.400560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.568 [2024-12-13 05:52:11.400598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.568 qpair failed and we were unable to recover it. 00:36:11.568 [2024-12-13 05:52:11.400783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.568 [2024-12-13 05:52:11.400815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.568 qpair failed and we were unable to recover it. 00:36:11.568 [2024-12-13 05:52:11.400946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.568 [2024-12-13 05:52:11.400962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.568 qpair failed and we were unable to recover it. 00:36:11.568 [2024-12-13 05:52:11.401144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.568 [2024-12-13 05:52:11.401160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.568 qpair failed and we were unable to recover it. 00:36:11.568 [2024-12-13 05:52:11.401338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.568 [2024-12-13 05:52:11.401354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.568 qpair failed and we were unable to recover it. 00:36:11.568 [2024-12-13 05:52:11.401490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.568 [2024-12-13 05:52:11.401506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.568 qpair failed and we were unable to recover it. 00:36:11.568 [2024-12-13 05:52:11.401668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.568 [2024-12-13 05:52:11.401684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.568 qpair failed and we were unable to recover it. 00:36:11.568 [2024-12-13 05:52:11.401777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.568 [2024-12-13 05:52:11.401792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.568 qpair failed and we were unable to recover it. 
00:36:11.568 [2024-12-13 05:52:11.401894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.568 [2024-12-13 05:52:11.401909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.568 qpair failed and we were unable to recover it. 00:36:11.568 [2024-12-13 05:52:11.401976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.568 [2024-12-13 05:52:11.401990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.568 qpair failed and we were unable to recover it. 00:36:11.568 [2024-12-13 05:52:11.402167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.568 [2024-12-13 05:52:11.402183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.568 qpair failed and we were unable to recover it. 00:36:11.568 [2024-12-13 05:52:11.402271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.568 [2024-12-13 05:52:11.402288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.568 qpair failed and we were unable to recover it. 00:36:11.568 [2024-12-13 05:52:11.402440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.568 [2024-12-13 05:52:11.402461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.568 qpair failed and we were unable to recover it. 00:36:11.568 [2024-12-13 05:52:11.402605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.568 [2024-12-13 05:52:11.402622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.568 qpair failed and we were unable to recover it. 00:36:11.568 [2024-12-13 05:52:11.402773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.568 [2024-12-13 05:52:11.402788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.568 qpair failed and we were unable to recover it. 00:36:11.568 [2024-12-13 05:52:11.402935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.568 [2024-12-13 05:52:11.402951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.568 qpair failed and we were unable to recover it. 00:36:11.568 [2024-12-13 05:52:11.403036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.568 [2024-12-13 05:52:11.403051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.568 qpair failed and we were unable to recover it. 00:36:11.568 [2024-12-13 05:52:11.403205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.568 [2024-12-13 05:52:11.403221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.568 qpair failed and we were unable to recover it. 
00:36:11.568 [2024-12-13 05:52:11.403366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.568 [2024-12-13 05:52:11.403382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.568 qpair failed and we were unable to recover it. 00:36:11.568 [2024-12-13 05:52:11.403470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.568 [2024-12-13 05:52:11.403485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.568 qpair failed and we were unable to recover it. 00:36:11.568 [2024-12-13 05:52:11.403570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.568 [2024-12-13 05:52:11.403584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.568 qpair failed and we were unable to recover it. 00:36:11.568 [2024-12-13 05:52:11.403742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.568 [2024-12-13 05:52:11.403757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.568 qpair failed and we were unable to recover it. 00:36:11.568 [2024-12-13 05:52:11.403886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.568 [2024-12-13 05:52:11.403903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.568 qpair failed and we were unable to recover it. 00:36:11.568 [2024-12-13 05:52:11.404055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.568 [2024-12-13 05:52:11.404070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.568 qpair failed and we were unable to recover it. 00:36:11.568 [2024-12-13 05:52:11.404231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.568 [2024-12-13 05:52:11.404247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.568 qpair failed and we were unable to recover it. 00:36:11.568 [2024-12-13 05:52:11.404378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.568 [2024-12-13 05:52:11.404411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.568 qpair failed and we were unable to recover it. 00:36:11.568 [2024-12-13 05:52:11.404551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.568 [2024-12-13 05:52:11.404583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.568 qpair failed and we were unable to recover it. 00:36:11.568 [2024-12-13 05:52:11.404756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.569 [2024-12-13 05:52:11.404786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.569 qpair failed and we were unable to recover it. 
00:36:11.569 [2024-12-13 05:52:11.404905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.569 [2024-12-13 05:52:11.404948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.569 qpair failed and we were unable to recover it. 00:36:11.569 [2024-12-13 05:52:11.405149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.569 [2024-12-13 05:52:11.405166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.569 qpair failed and we were unable to recover it. 00:36:11.569 [2024-12-13 05:52:11.405253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.569 [2024-12-13 05:52:11.405268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.569 qpair failed and we were unable to recover it. 00:36:11.569 [2024-12-13 05:52:11.405348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.569 [2024-12-13 05:52:11.405363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.569 qpair failed and we were unable to recover it. 00:36:11.569 [2024-12-13 05:52:11.405537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.569 [2024-12-13 05:52:11.405554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.569 qpair failed and we were unable to recover it. 00:36:11.569 [2024-12-13 05:52:11.405689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.569 [2024-12-13 05:52:11.405710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.569 qpair failed and we were unable to recover it. 00:36:11.569 [2024-12-13 05:52:11.405795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.569 [2024-12-13 05:52:11.405811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.569 qpair failed and we were unable to recover it. 00:36:11.569 [2024-12-13 05:52:11.405952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.569 [2024-12-13 05:52:11.405968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.569 qpair failed and we were unable to recover it. 00:36:11.569 [2024-12-13 05:52:11.406122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.569 [2024-12-13 05:52:11.406137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.569 qpair failed and we were unable to recover it. 00:36:11.569 [2024-12-13 05:52:11.406226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.569 [2024-12-13 05:52:11.406243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.569 qpair failed and we were unable to recover it. 
00:36:11.569 [2024-12-13 05:52:11.406335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.569 [2024-12-13 05:52:11.406350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.569 qpair failed and we were unable to recover it. 00:36:11.569 [2024-12-13 05:52:11.406439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.569 [2024-12-13 05:52:11.406460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.569 qpair failed and we were unable to recover it. 00:36:11.569 [2024-12-13 05:52:11.406552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.569 [2024-12-13 05:52:11.406567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.569 qpair failed and we were unable to recover it. 00:36:11.569 [2024-12-13 05:52:11.406653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.569 [2024-12-13 05:52:11.406668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.569 qpair failed and we were unable to recover it. 00:36:11.569 [2024-12-13 05:52:11.406842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.569 [2024-12-13 05:52:11.406859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.569 qpair failed and we were unable to recover it. 00:36:11.569 [2024-12-13 05:52:11.406933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.569 [2024-12-13 05:52:11.406948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.569 qpair failed and we were unable to recover it. 00:36:11.569 [2024-12-13 05:52:11.407028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.569 [2024-12-13 05:52:11.407043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.569 qpair failed and we were unable to recover it. 00:36:11.569 [2024-12-13 05:52:11.407184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.569 [2024-12-13 05:52:11.407200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.569 qpair failed and we were unable to recover it. 00:36:11.569 [2024-12-13 05:52:11.407272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.569 [2024-12-13 05:52:11.407286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.569 qpair failed and we were unable to recover it. 00:36:11.569 [2024-12-13 05:52:11.407453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.569 [2024-12-13 05:52:11.407469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.569 qpair failed and we were unable to recover it. 
00:36:11.569 [2024-12-13 05:52:11.407554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.569 [2024-12-13 05:52:11.407570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.569 qpair failed and we were unable to recover it. 00:36:11.569 [2024-12-13 05:52:11.407653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.569 [2024-12-13 05:52:11.407667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.569 qpair failed and we were unable to recover it. 00:36:11.569 [2024-12-13 05:52:11.407832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.569 [2024-12-13 05:52:11.407846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.569 qpair failed and we were unable to recover it. 00:36:11.569 [2024-12-13 05:52:11.408050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.569 [2024-12-13 05:52:11.408067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.569 qpair failed and we were unable to recover it. 00:36:11.569 [2024-12-13 05:52:11.408160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.569 [2024-12-13 05:52:11.408176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.569 qpair failed and we were unable to recover it. 00:36:11.569 [2024-12-13 05:52:11.408284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.569 [2024-12-13 05:52:11.408300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.569 qpair failed and we were unable to recover it. 00:36:11.569 [2024-12-13 05:52:11.408378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.569 [2024-12-13 05:52:11.408392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.569 qpair failed and we were unable to recover it. 00:36:11.569 [2024-12-13 05:52:11.408463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.569 [2024-12-13 05:52:11.408481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.569 qpair failed and we were unable to recover it. 00:36:11.569 [2024-12-13 05:52:11.408568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.569 [2024-12-13 05:52:11.408583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.569 qpair failed and we were unable to recover it. 00:36:11.569 [2024-12-13 05:52:11.408718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.569 [2024-12-13 05:52:11.408734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.569 qpair failed and we were unable to recover it. 
00:36:11.569 [2024-12-13 05:52:11.408801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.569 [2024-12-13 05:52:11.408816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.569 qpair failed and we were unable to recover it. 00:36:11.569 [2024-12-13 05:52:11.408970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.569 [2024-12-13 05:52:11.408986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.569 qpair failed and we were unable to recover it. 00:36:11.569 [2024-12-13 05:52:11.409072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.569 [2024-12-13 05:52:11.409088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.569 qpair failed and we were unable to recover it. 00:36:11.569 [2024-12-13 05:52:11.409160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.569 [2024-12-13 05:52:11.409175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.569 qpair failed and we were unable to recover it. 00:36:11.569 [2024-12-13 05:52:11.409314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.569 [2024-12-13 05:52:11.409331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.569 qpair failed and we were unable to recover it. 00:36:11.569 [2024-12-13 05:52:11.409470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.569 [2024-12-13 05:52:11.409486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.569 qpair failed and we were unable to recover it. 00:36:11.569 [2024-12-13 05:52:11.409559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.569 [2024-12-13 05:52:11.409574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.569 qpair failed and we were unable to recover it. 00:36:11.569 [2024-12-13 05:52:11.409725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.569 [2024-12-13 05:52:11.409740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.569 qpair failed and we were unable to recover it. 00:36:11.569 [2024-12-13 05:52:11.409829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.570 [2024-12-13 05:52:11.409845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.570 qpair failed and we were unable to recover it. 00:36:11.570 [2024-12-13 05:52:11.409943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.570 [2024-12-13 05:52:11.409958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.570 qpair failed and we were unable to recover it. 
00:36:11.570 [2024-12-13 05:52:11.410094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.570 [2024-12-13 05:52:11.410109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.570 qpair failed and we were unable to recover it. 00:36:11.570 [2024-12-13 05:52:11.410255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.570 [2024-12-13 05:52:11.410270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.570 qpair failed and we were unable to recover it. 00:36:11.570 [2024-12-13 05:52:11.410477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.570 [2024-12-13 05:52:11.410493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.570 qpair failed and we were unable to recover it. 00:36:11.570 [2024-12-13 05:52:11.410584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.570 [2024-12-13 05:52:11.410601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.570 qpair failed and we were unable to recover it. 00:36:11.570 [2024-12-13 05:52:11.410673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.570 [2024-12-13 05:52:11.410688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.570 qpair failed and we were unable to recover it. 00:36:11.570 [2024-12-13 05:52:11.410766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.570 [2024-12-13 05:52:11.410780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.570 qpair failed and we were unable to recover it. 00:36:11.570 [2024-12-13 05:52:11.410959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.570 [2024-12-13 05:52:11.410975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.570 qpair failed and we were unable to recover it. 00:36:11.570 [2024-12-13 05:52:11.411062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.570 [2024-12-13 05:52:11.411078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.570 qpair failed and we were unable to recover it. 00:36:11.570 [2024-12-13 05:52:11.411164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.570 [2024-12-13 05:52:11.411180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.570 qpair failed and we were unable to recover it. 00:36:11.570 [2024-12-13 05:52:11.411328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.570 [2024-12-13 05:52:11.411343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.570 qpair failed and we were unable to recover it. 
00:36:11.570 [2024-12-13 05:52:11.411509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.570 [2024-12-13 05:52:11.411525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.570 qpair failed and we were unable to recover it. 00:36:11.570 [2024-12-13 05:52:11.411600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.570 [2024-12-13 05:52:11.411617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.570 qpair failed and we were unable to recover it. 00:36:11.570 [2024-12-13 05:52:11.411699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.570 [2024-12-13 05:52:11.411714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.570 qpair failed and we were unable to recover it. 00:36:11.570 [2024-12-13 05:52:11.411872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.570 [2024-12-13 05:52:11.411887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.570 qpair failed and we were unable to recover it. 00:36:11.570 [2024-12-13 05:52:11.411964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.570 [2024-12-13 05:52:11.411979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.570 qpair failed and we were unable to recover it. 00:36:11.570 [2024-12-13 05:52:11.412065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.570 [2024-12-13 05:52:11.412080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.570 qpair failed and we were unable to recover it. 00:36:11.570 [2024-12-13 05:52:11.412159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.570 [2024-12-13 05:52:11.412174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.570 qpair failed and we were unable to recover it. 00:36:11.570 [2024-12-13 05:52:11.412312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.570 [2024-12-13 05:52:11.412328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.570 qpair failed and we were unable to recover it. 00:36:11.570 [2024-12-13 05:52:11.412476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.570 [2024-12-13 05:52:11.412494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.570 qpair failed and we were unable to recover it. 00:36:11.570 [2024-12-13 05:52:11.412635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.570 [2024-12-13 05:52:11.412650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.570 qpair failed and we were unable to recover it. 
00:36:11.570 [2024-12-13 05:52:11.412797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.570 [2024-12-13 05:52:11.412813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.570 qpair failed and we were unable to recover it. 00:36:11.570 [2024-12-13 05:52:11.412952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.570 [2024-12-13 05:52:11.412968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.570 qpair failed and we were unable to recover it. 00:36:11.570 [2024-12-13 05:52:11.413207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.570 [2024-12-13 05:52:11.413222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.570 qpair failed and we were unable to recover it. 00:36:11.570 [2024-12-13 05:52:11.413373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.570 [2024-12-13 05:52:11.413388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.570 qpair failed and we were unable to recover it. 00:36:11.570 [2024-12-13 05:52:11.413487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.570 [2024-12-13 05:52:11.413504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.570 qpair failed and we were unable to recover it. 00:36:11.570 [2024-12-13 05:52:11.413602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.570 [2024-12-13 05:52:11.413618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.570 qpair failed and we were unable to recover it. 00:36:11.570 [2024-12-13 05:52:11.413702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.570 [2024-12-13 05:52:11.413717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.570 qpair failed and we were unable to recover it. 00:36:11.570 [2024-12-13 05:52:11.413786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.570 [2024-12-13 05:52:11.413801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.570 qpair failed and we were unable to recover it. 00:36:11.570 [2024-12-13 05:52:11.413901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.570 [2024-12-13 05:52:11.413916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.570 qpair failed and we were unable to recover it. 00:36:11.570 [2024-12-13 05:52:11.414061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.570 [2024-12-13 05:52:11.414077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.570 qpair failed and we were unable to recover it. 
00:36:11.570 [2024-12-13 05:52:11.414147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.570 [2024-12-13 05:52:11.414161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.570 qpair failed and we were unable to recover it. 00:36:11.570 [2024-12-13 05:52:11.414247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.570 [2024-12-13 05:52:11.414261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.570 qpair failed and we were unable to recover it. 00:36:11.570 [2024-12-13 05:52:11.414344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.570 [2024-12-13 05:52:11.414359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.570 qpair failed and we were unable to recover it. 00:36:11.571 [2024-12-13 05:52:11.414513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.571 [2024-12-13 05:52:11.414528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.571 qpair failed and we were unable to recover it. 00:36:11.571 [2024-12-13 05:52:11.414663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.571 [2024-12-13 05:52:11.414680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.571 qpair failed and we were unable to recover it. 00:36:11.571 [2024-12-13 05:52:11.414768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.571 [2024-12-13 05:52:11.414784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.571 qpair failed and we were unable to recover it. 00:36:11.571 [2024-12-13 05:52:11.414914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.571 [2024-12-13 05:52:11.414930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.571 qpair failed and we were unable to recover it. 00:36:11.571 [2024-12-13 05:52:11.415027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.571 [2024-12-13 05:52:11.415042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.571 qpair failed and we were unable to recover it. 00:36:11.571 [2024-12-13 05:52:11.415195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.571 [2024-12-13 05:52:11.415214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.571 qpair failed and we were unable to recover it. 00:36:11.571 [2024-12-13 05:52:11.415415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.571 [2024-12-13 05:52:11.415431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.571 qpair failed and we were unable to recover it. 
00:36:11.571 [2024-12-13 05:52:11.415639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.571 [2024-12-13 05:52:11.415654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.571 qpair failed and we were unable to recover it. 00:36:11.571 [2024-12-13 05:52:11.415862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.571 [2024-12-13 05:52:11.415879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.571 qpair failed and we were unable to recover it. 00:36:11.571 [2024-12-13 05:52:11.416038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.571 [2024-12-13 05:52:11.416054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.571 qpair failed and we were unable to recover it. 00:36:11.571 [2024-12-13 05:52:11.416219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.571 [2024-12-13 05:52:11.416234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.571 qpair failed and we were unable to recover it. 00:36:11.571 [2024-12-13 05:52:11.416386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.571 [2024-12-13 05:52:11.416402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.571 qpair failed and we were unable to recover it. 00:36:11.571 [2024-12-13 05:52:11.416500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.571 [2024-12-13 05:52:11.416517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.571 qpair failed and we were unable to recover it. 00:36:11.571 [2024-12-13 05:52:11.416665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.571 [2024-12-13 05:52:11.416680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.571 qpair failed and we were unable to recover it. 00:36:11.571 [2024-12-13 05:52:11.416905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.571 [2024-12-13 05:52:11.416921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.571 qpair failed and we were unable to recover it. 00:36:11.571 [2024-12-13 05:52:11.417008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.571 [2024-12-13 05:52:11.417024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.571 qpair failed and we were unable to recover it. 00:36:11.571 [2024-12-13 05:52:11.417098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.571 [2024-12-13 05:52:11.417113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.571 qpair failed and we were unable to recover it. 
[... 05:52:11.417281 - 05:52:11.418037: 6 more identical failure groups for tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 ...]
00:36:11.571 [2024-12-13 05:52:11.418214] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15d55e0 is same with the state(6) to be set
00:36:11.571 [2024-12-13 05:52:11.418541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.571 [2024-12-13 05:52:11.418612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420
00:36:11.571 qpair failed and we were unable to recover it.
[... 05:52:11.418818 - 05:52:11.418975: one more identical group for tqpair=0x7f8624000b90, then one for tqpair=0x15c76a0 ...]
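errno = 111 on Linux is ECONNREFUSED: each connect() attempt reached 10.0.0.2, but nothing was accepting on port 4420 (the standard NVMe/TCP port), so the target side was down or not yet listening while these attempts ran. As a reference point, here is a minimal standalone C sketch (not part of this test; the address and port are copied from the log) that produces the same errno when no listener is present:

    /* probe_connect.c - build with: gcc probe_connect.c -o probe_connect
     * One TCP connect attempt, printing the resulting errno the same way
     * posix_sock_create() reports it above. */
    #include <arpa/inet.h>
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        struct sockaddr_in addr = { .sin_family = AF_INET,
                                    .sin_port = htons(4420) };  /* NVMe/TCP port */
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);         /* target from the log */

        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) { perror("socket"); return 1; }

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
            /* With no listener on the port this prints:
             *   connect() failed, errno = 111 (Connection refused) */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));

        close(fd);
        return 0;
    }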
00:36:11.571 [2024-12-13 05:52:11.419117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.571 [2024-12-13 05:52:11.419132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420
00:36:11.571 qpair failed and we were unable to recover it.
[... 05:52:11.419203 - 05:52:11.445234: the same connect() failed (errno = 111) / sock connection error / qpair failed and we were unable to recover it group repeats 179 more times for tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 (node time 00:36:11.571 - 00:36:11.576) ...]
00:36:11.576 [2024-12-13 05:52:11.445330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.576 [2024-12-13 05:52:11.445346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.576 qpair failed and we were unable to recover it. 00:36:11.576 [2024-12-13 05:52:11.445433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.576 [2024-12-13 05:52:11.445454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.576 qpair failed and we were unable to recover it. 00:36:11.576 [2024-12-13 05:52:11.445604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.576 [2024-12-13 05:52:11.445635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.576 qpair failed and we were unable to recover it. 00:36:11.576 [2024-12-13 05:52:11.445784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.576 [2024-12-13 05:52:11.445800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.576 qpair failed and we were unable to recover it. 00:36:11.576 [2024-12-13 05:52:11.445867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.576 [2024-12-13 05:52:11.445882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.576 qpair failed and we were unable to recover it. 00:36:11.576 [2024-12-13 05:52:11.446109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.576 [2024-12-13 05:52:11.446125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.576 qpair failed and we were unable to recover it. 00:36:11.576 [2024-12-13 05:52:11.446328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.576 [2024-12-13 05:52:11.446344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.576 qpair failed and we were unable to recover it. 00:36:11.576 [2024-12-13 05:52:11.446491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.576 [2024-12-13 05:52:11.446509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.576 qpair failed and we were unable to recover it. 00:36:11.576 [2024-12-13 05:52:11.446674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.576 [2024-12-13 05:52:11.446689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.576 qpair failed and we were unable to recover it. 00:36:11.576 [2024-12-13 05:52:11.446771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.576 [2024-12-13 05:52:11.446787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.576 qpair failed and we were unable to recover it. 
00:36:11.576 [2024-12-13 05:52:11.446857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.576 [2024-12-13 05:52:11.446872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.576 qpair failed and we were unable to recover it. 00:36:11.576 [2024-12-13 05:52:11.446941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.576 [2024-12-13 05:52:11.446958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.576 qpair failed and we were unable to recover it. 00:36:11.576 [2024-12-13 05:52:11.447104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.576 [2024-12-13 05:52:11.447120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.576 qpair failed and we were unable to recover it. 00:36:11.576 [2024-12-13 05:52:11.447323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.576 [2024-12-13 05:52:11.447339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.576 qpair failed and we were unable to recover it. 00:36:11.576 [2024-12-13 05:52:11.447411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.576 [2024-12-13 05:52:11.447425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.576 qpair failed and we were unable to recover it. 00:36:11.576 [2024-12-13 05:52:11.447520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.576 [2024-12-13 05:52:11.447536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.576 qpair failed and we were unable to recover it. 00:36:11.576 [2024-12-13 05:52:11.447614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.576 [2024-12-13 05:52:11.447629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.576 qpair failed and we were unable to recover it. 00:36:11.576 [2024-12-13 05:52:11.447716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.576 [2024-12-13 05:52:11.447731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.576 qpair failed and we were unable to recover it. 00:36:11.576 [2024-12-13 05:52:11.447810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.576 [2024-12-13 05:52:11.447826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.576 qpair failed and we were unable to recover it. 00:36:11.576 [2024-12-13 05:52:11.447922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.576 [2024-12-13 05:52:11.447938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.576 qpair failed and we were unable to recover it. 
00:36:11.576 [2024-12-13 05:52:11.448092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.576 [2024-12-13 05:52:11.448108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.576 qpair failed and we were unable to recover it. 00:36:11.576 [2024-12-13 05:52:11.448264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.576 [2024-12-13 05:52:11.448280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.576 qpair failed and we were unable to recover it. 00:36:11.576 [2024-12-13 05:52:11.448351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.577 [2024-12-13 05:52:11.448366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.577 qpair failed and we were unable to recover it. 00:36:11.577 [2024-12-13 05:52:11.448445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.577 [2024-12-13 05:52:11.448466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.577 qpair failed and we were unable to recover it. 00:36:11.577 [2024-12-13 05:52:11.448555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.577 [2024-12-13 05:52:11.448571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.577 qpair failed and we were unable to recover it. 00:36:11.577 [2024-12-13 05:52:11.448716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.577 [2024-12-13 05:52:11.448732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.577 qpair failed and we were unable to recover it. 00:36:11.577 [2024-12-13 05:52:11.448807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.577 [2024-12-13 05:52:11.448822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.577 qpair failed and we were unable to recover it. 00:36:11.577 [2024-12-13 05:52:11.448890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.577 [2024-12-13 05:52:11.448906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.577 qpair failed and we were unable to recover it. 00:36:11.577 [2024-12-13 05:52:11.448990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.577 [2024-12-13 05:52:11.449007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.577 qpair failed and we were unable to recover it. 00:36:11.577 [2024-12-13 05:52:11.449164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.577 [2024-12-13 05:52:11.449185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.577 qpair failed and we were unable to recover it. 
00:36:11.577 [2024-12-13 05:52:11.449277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.577 [2024-12-13 05:52:11.449292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.577 qpair failed and we were unable to recover it. 00:36:11.577 [2024-12-13 05:52:11.449388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.577 [2024-12-13 05:52:11.449404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.577 qpair failed and we were unable to recover it. 00:36:11.577 [2024-12-13 05:52:11.449574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.577 [2024-12-13 05:52:11.449591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.577 qpair failed and we were unable to recover it. 00:36:11.577 [2024-12-13 05:52:11.449690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.577 [2024-12-13 05:52:11.449706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.577 qpair failed and we were unable to recover it. 00:36:11.577 [2024-12-13 05:52:11.449858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.577 [2024-12-13 05:52:11.449874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.577 qpair failed and we were unable to recover it. 00:36:11.577 [2024-12-13 05:52:11.450030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.577 [2024-12-13 05:52:11.450046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.577 qpair failed and we were unable to recover it. 00:36:11.577 [2024-12-13 05:52:11.450247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.577 [2024-12-13 05:52:11.450263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.577 qpair failed and we were unable to recover it. 00:36:11.577 [2024-12-13 05:52:11.450345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.577 [2024-12-13 05:52:11.450361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.577 qpair failed and we were unable to recover it. 00:36:11.577 [2024-12-13 05:52:11.450511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.577 [2024-12-13 05:52:11.450531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.577 qpair failed and we were unable to recover it. 00:36:11.577 [2024-12-13 05:52:11.450627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.577 [2024-12-13 05:52:11.450643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.577 qpair failed and we were unable to recover it. 
00:36:11.577 [2024-12-13 05:52:11.450720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.577 [2024-12-13 05:52:11.450736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.577 qpair failed and we were unable to recover it. 00:36:11.577 [2024-12-13 05:52:11.450891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.577 [2024-12-13 05:52:11.450906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.577 qpair failed and we were unable to recover it. 00:36:11.577 [2024-12-13 05:52:11.451039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.577 [2024-12-13 05:52:11.451055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.577 qpair failed and we were unable to recover it. 00:36:11.577 [2024-12-13 05:52:11.451201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.577 [2024-12-13 05:52:11.451217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.577 qpair failed and we were unable to recover it. 00:36:11.577 [2024-12-13 05:52:11.451374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.577 [2024-12-13 05:52:11.451389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.577 qpair failed and we were unable to recover it. 00:36:11.577 [2024-12-13 05:52:11.451598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.577 [2024-12-13 05:52:11.451616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.577 qpair failed and we were unable to recover it. 00:36:11.577 [2024-12-13 05:52:11.451698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.577 [2024-12-13 05:52:11.451714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.577 qpair failed and we were unable to recover it. 00:36:11.577 [2024-12-13 05:52:11.451794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.577 [2024-12-13 05:52:11.451810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.577 qpair failed and we were unable to recover it. 00:36:11.577 [2024-12-13 05:52:11.451892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.577 [2024-12-13 05:52:11.451907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.577 qpair failed and we were unable to recover it. 00:36:11.577 [2024-12-13 05:52:11.451988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.577 [2024-12-13 05:52:11.452003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.577 qpair failed and we were unable to recover it. 
00:36:11.577 [2024-12-13 05:52:11.452163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.577 [2024-12-13 05:52:11.452179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.577 qpair failed and we were unable to recover it. 00:36:11.577 [2024-12-13 05:52:11.452384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.577 [2024-12-13 05:52:11.452400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.577 qpair failed and we were unable to recover it. 00:36:11.577 [2024-12-13 05:52:11.452547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.577 [2024-12-13 05:52:11.452563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.577 qpair failed and we were unable to recover it. 00:36:11.577 [2024-12-13 05:52:11.452698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.577 [2024-12-13 05:52:11.452714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.577 qpair failed and we were unable to recover it. 00:36:11.577 [2024-12-13 05:52:11.452858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.577 [2024-12-13 05:52:11.452874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.577 qpair failed and we were unable to recover it. 00:36:11.577 [2024-12-13 05:52:11.453136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.577 [2024-12-13 05:52:11.453152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.577 qpair failed and we were unable to recover it. 00:36:11.577 [2024-12-13 05:52:11.453248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.577 [2024-12-13 05:52:11.453264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.577 qpair failed and we were unable to recover it. 00:36:11.577 [2024-12-13 05:52:11.453427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.577 [2024-12-13 05:52:11.453444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.577 qpair failed and we were unable to recover it. 00:36:11.577 [2024-12-13 05:52:11.453529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.577 [2024-12-13 05:52:11.453546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.577 qpair failed and we were unable to recover it. 00:36:11.577 [2024-12-13 05:52:11.453754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.577 [2024-12-13 05:52:11.453769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.577 qpair failed and we were unable to recover it. 
00:36:11.577 [2024-12-13 05:52:11.453972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.577 [2024-12-13 05:52:11.453988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.577 qpair failed and we were unable to recover it. 00:36:11.578 [2024-12-13 05:52:11.454077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.578 [2024-12-13 05:52:11.454093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.578 qpair failed and we were unable to recover it. 00:36:11.578 [2024-12-13 05:52:11.454235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.578 [2024-12-13 05:52:11.454251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.578 qpair failed and we were unable to recover it. 00:36:11.578 [2024-12-13 05:52:11.454336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.578 [2024-12-13 05:52:11.454352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.578 qpair failed and we were unable to recover it. 00:36:11.578 [2024-12-13 05:52:11.454435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.578 [2024-12-13 05:52:11.454455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.578 qpair failed and we were unable to recover it. 00:36:11.578 [2024-12-13 05:52:11.454532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.578 [2024-12-13 05:52:11.454548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.578 qpair failed and we were unable to recover it. 00:36:11.578 [2024-12-13 05:52:11.454766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.578 [2024-12-13 05:52:11.454782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.578 qpair failed and we were unable to recover it. 00:36:11.578 [2024-12-13 05:52:11.454924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.578 [2024-12-13 05:52:11.454939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.578 qpair failed and we were unable to recover it. 00:36:11.578 [2024-12-13 05:52:11.455028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.578 [2024-12-13 05:52:11.455044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.578 qpair failed and we were unable to recover it. 00:36:11.578 [2024-12-13 05:52:11.455183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.578 [2024-12-13 05:52:11.455199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.578 qpair failed and we were unable to recover it. 
00:36:11.578 [2024-12-13 05:52:11.455411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.578 [2024-12-13 05:52:11.455494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:11.578 qpair failed and we were unable to recover it. 00:36:11.578 [2024-12-13 05:52:11.455697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.578 [2024-12-13 05:52:11.455732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:11.578 qpair failed and we were unable to recover it. 00:36:11.578 [2024-12-13 05:52:11.455998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.578 [2024-12-13 05:52:11.456032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:11.578 qpair failed and we were unable to recover it. 00:36:11.578 [2024-12-13 05:52:11.456135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.578 [2024-12-13 05:52:11.456152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.578 qpair failed and we were unable to recover it. 00:36:11.578 [2024-12-13 05:52:11.456251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.578 [2024-12-13 05:52:11.456267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.578 qpair failed and we were unable to recover it. 00:36:11.578 [2024-12-13 05:52:11.456401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.578 [2024-12-13 05:52:11.456417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.578 qpair failed and we were unable to recover it. 00:36:11.578 [2024-12-13 05:52:11.456557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.578 [2024-12-13 05:52:11.456573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.578 qpair failed and we were unable to recover it. 00:36:11.578 [2024-12-13 05:52:11.456645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.578 [2024-12-13 05:52:11.456661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.578 qpair failed and we were unable to recover it. 00:36:11.578 [2024-12-13 05:52:11.456798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.578 [2024-12-13 05:52:11.456814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.578 qpair failed and we were unable to recover it. 00:36:11.578 [2024-12-13 05:52:11.456973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.578 [2024-12-13 05:52:11.456989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.578 qpair failed and we were unable to recover it. 
00:36:11.578 [2024-12-13 05:52:11.457089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.578 [2024-12-13 05:52:11.457104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.578 qpair failed and we were unable to recover it. 00:36:11.578 [2024-12-13 05:52:11.457178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.578 [2024-12-13 05:52:11.457194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.578 qpair failed and we were unable to recover it. 00:36:11.578 [2024-12-13 05:52:11.457279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.578 [2024-12-13 05:52:11.457294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.578 qpair failed and we were unable to recover it. 00:36:11.578 [2024-12-13 05:52:11.457386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.578 [2024-12-13 05:52:11.457402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.578 qpair failed and we were unable to recover it. 00:36:11.578 [2024-12-13 05:52:11.457653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.578 [2024-12-13 05:52:11.457670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.578 qpair failed and we were unable to recover it. 00:36:11.578 [2024-12-13 05:52:11.457894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.578 [2024-12-13 05:52:11.457910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.578 qpair failed and we were unable to recover it. 00:36:11.578 [2024-12-13 05:52:11.457982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.578 [2024-12-13 05:52:11.457999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.578 qpair failed and we were unable to recover it. 00:36:11.578 [2024-12-13 05:52:11.458098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.578 [2024-12-13 05:52:11.458113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.578 qpair failed and we were unable to recover it. 00:36:11.578 [2024-12-13 05:52:11.458198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.578 [2024-12-13 05:52:11.458213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.578 qpair failed and we were unable to recover it. 00:36:11.578 [2024-12-13 05:52:11.458298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.578 [2024-12-13 05:52:11.458314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.578 qpair failed and we were unable to recover it. 
00:36:11.578 [2024-12-13 05:52:11.458389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.578 [2024-12-13 05:52:11.458404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.578 qpair failed and we were unable to recover it. 00:36:11.578 [2024-12-13 05:52:11.458551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.578 [2024-12-13 05:52:11.458568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.578 qpair failed and we were unable to recover it. 00:36:11.578 [2024-12-13 05:52:11.458726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.578 [2024-12-13 05:52:11.458741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.578 qpair failed and we were unable to recover it. 00:36:11.578 [2024-12-13 05:52:11.458898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.578 [2024-12-13 05:52:11.458914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.578 qpair failed and we were unable to recover it. 00:36:11.578 [2024-12-13 05:52:11.459053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.578 [2024-12-13 05:52:11.459069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.578 qpair failed and we were unable to recover it. 00:36:11.578 [2024-12-13 05:52:11.459167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.578 [2024-12-13 05:52:11.459182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.578 qpair failed and we were unable to recover it. 00:36:11.578 [2024-12-13 05:52:11.459326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.578 [2024-12-13 05:52:11.459342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.578 qpair failed and we were unable to recover it. 00:36:11.578 [2024-12-13 05:52:11.459527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.578 [2024-12-13 05:52:11.459543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.578 qpair failed and we were unable to recover it. 00:36:11.578 [2024-12-13 05:52:11.459696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.578 [2024-12-13 05:52:11.459712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.578 qpair failed and we were unable to recover it. 00:36:11.578 [2024-12-13 05:52:11.459864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.578 [2024-12-13 05:52:11.459880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.578 qpair failed and we were unable to recover it. 
00:36:11.579 [2024-12-13 05:52:11.460036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.579 [2024-12-13 05:52:11.460052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.579 qpair failed and we were unable to recover it. 00:36:11.579 [2024-12-13 05:52:11.460277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.579 [2024-12-13 05:52:11.460293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.579 qpair failed and we were unable to recover it. 00:36:11.579 [2024-12-13 05:52:11.460633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.579 [2024-12-13 05:52:11.460664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.579 qpair failed and we were unable to recover it. 00:36:11.579 [2024-12-13 05:52:11.460770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.579 [2024-12-13 05:52:11.460785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.579 qpair failed and we were unable to recover it. 00:36:11.579 [2024-12-13 05:52:11.460888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.579 [2024-12-13 05:52:11.460904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.579 qpair failed and we were unable to recover it. 00:36:11.579 [2024-12-13 05:52:11.461041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.579 [2024-12-13 05:52:11.461055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.579 qpair failed and we were unable to recover it. 00:36:11.579 [2024-12-13 05:52:11.461192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.579 [2024-12-13 05:52:11.461207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.579 qpair failed and we were unable to recover it. 00:36:11.579 [2024-12-13 05:52:11.461354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.579 [2024-12-13 05:52:11.461370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.579 qpair failed and we were unable to recover it. 00:36:11.579 [2024-12-13 05:52:11.461575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.579 [2024-12-13 05:52:11.461592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.579 qpair failed and we were unable to recover it. 00:36:11.579 [2024-12-13 05:52:11.461737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.579 [2024-12-13 05:52:11.461753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.579 qpair failed and we were unable to recover it. 
00:36:11.579 [2024-12-13 05:52:11.461840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.579 [2024-12-13 05:52:11.461854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.579 qpair failed and we were unable to recover it. 00:36:11.579 [2024-12-13 05:52:11.461991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.579 [2024-12-13 05:52:11.462007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.579 qpair failed and we were unable to recover it. 00:36:11.579 [2024-12-13 05:52:11.462148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.579 [2024-12-13 05:52:11.462164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.579 qpair failed and we were unable to recover it. 00:36:11.579 [2024-12-13 05:52:11.462238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.579 [2024-12-13 05:52:11.462253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.579 qpair failed and we were unable to recover it. 00:36:11.579 [2024-12-13 05:52:11.462388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.579 [2024-12-13 05:52:11.462404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.579 qpair failed and we were unable to recover it. 00:36:11.579 [2024-12-13 05:52:11.462553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.579 [2024-12-13 05:52:11.462570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.579 qpair failed and we were unable to recover it. 00:36:11.579 [2024-12-13 05:52:11.462651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.579 [2024-12-13 05:52:11.462666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.579 qpair failed and we were unable to recover it. 00:36:11.579 [2024-12-13 05:52:11.462806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.579 [2024-12-13 05:52:11.462822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.579 qpair failed and we were unable to recover it. 00:36:11.579 [2024-12-13 05:52:11.463057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.579 [2024-12-13 05:52:11.463073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.579 qpair failed and we were unable to recover it. 00:36:11.579 [2024-12-13 05:52:11.463337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.579 [2024-12-13 05:52:11.463353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.579 qpair failed and we were unable to recover it. 
00:36:11.579 [2024-12-13 05:52:11.463445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.579 [2024-12-13 05:52:11.463466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.579 qpair failed and we were unable to recover it. 00:36:11.579 [2024-12-13 05:52:11.463568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.579 [2024-12-13 05:52:11.463583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.579 qpair failed and we were unable to recover it. 00:36:11.579 [2024-12-13 05:52:11.463720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.579 [2024-12-13 05:52:11.463736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.579 qpair failed and we were unable to recover it. 00:36:11.579 [2024-12-13 05:52:11.463888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.579 [2024-12-13 05:52:11.463904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.579 qpair failed and we were unable to recover it. 00:36:11.579 [2024-12-13 05:52:11.464054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.579 [2024-12-13 05:52:11.464073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.579 qpair failed and we were unable to recover it. 00:36:11.579 [2024-12-13 05:52:11.464211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.579 [2024-12-13 05:52:11.464226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.579 qpair failed and we were unable to recover it. 00:36:11.579 [2024-12-13 05:52:11.464358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.579 [2024-12-13 05:52:11.464374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.579 qpair failed and we were unable to recover it. 00:36:11.579 [2024-12-13 05:52:11.464511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.579 [2024-12-13 05:52:11.464527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.579 qpair failed and we were unable to recover it. 00:36:11.579 [2024-12-13 05:52:11.464672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.579 [2024-12-13 05:52:11.464688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.579 qpair failed and we were unable to recover it. 00:36:11.579 [2024-12-13 05:52:11.464777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.579 [2024-12-13 05:52:11.464792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.579 qpair failed and we were unable to recover it. 
00:36:11.579 [2024-12-13 05:52:11.464872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.579 [2024-12-13 05:52:11.464887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420
00:36:11.579 qpair failed and we were unable to recover it.
[... the identical three-message sequence (connect() failed, errno = 111 -> sock connection error -> qpair failed and we were unable to recover it) repeats continuously from 05:52:11.464 through 05:52:11.495; nearly every retry targets tqpair=0x15c76a0, with two retries at 05:52:11.469-11.470 targeting tqpair=0x7f8618000b90, all against addr=10.0.0.2, port=4420 ...]
00:36:11.585 [2024-12-13 05:52:11.495264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.585 [2024-12-13 05:52:11.495279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420
00:36:11.585 qpair failed and we were unable to recover it.
00:36:11.585 [2024-12-13 05:52:11.495378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.585 [2024-12-13 05:52:11.495395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.585 qpair failed and we were unable to recover it. 00:36:11.585 [2024-12-13 05:52:11.495532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.585 [2024-12-13 05:52:11.495551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.585 qpair failed and we were unable to recover it. 00:36:11.585 [2024-12-13 05:52:11.495707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.585 [2024-12-13 05:52:11.495723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.585 qpair failed and we were unable to recover it. 00:36:11.585 [2024-12-13 05:52:11.495863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.585 [2024-12-13 05:52:11.495880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.585 qpair failed and we were unable to recover it. 00:36:11.585 [2024-12-13 05:52:11.496031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.585 [2024-12-13 05:52:11.496046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.585 qpair failed and we were unable to recover it. 00:36:11.585 [2024-12-13 05:52:11.496128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.585 [2024-12-13 05:52:11.496144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.585 qpair failed and we were unable to recover it. 00:36:11.585 [2024-12-13 05:52:11.496221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.585 [2024-12-13 05:52:11.496236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.585 qpair failed and we were unable to recover it. 00:36:11.585 [2024-12-13 05:52:11.496311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.585 [2024-12-13 05:52:11.496326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.585 qpair failed and we were unable to recover it. 00:36:11.585 [2024-12-13 05:52:11.496410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.585 [2024-12-13 05:52:11.496426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.585 qpair failed and we were unable to recover it. 00:36:11.585 [2024-12-13 05:52:11.496525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.585 [2024-12-13 05:52:11.496541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.585 qpair failed and we were unable to recover it. 
00:36:11.585 [2024-12-13 05:52:11.496699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.585 [2024-12-13 05:52:11.496715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.585 qpair failed and we were unable to recover it. 00:36:11.585 [2024-12-13 05:52:11.496789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.585 [2024-12-13 05:52:11.496804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.585 qpair failed and we were unable to recover it. 00:36:11.585 [2024-12-13 05:52:11.496889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.585 [2024-12-13 05:52:11.496905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.585 qpair failed and we were unable to recover it. 00:36:11.585 [2024-12-13 05:52:11.496996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.585 [2024-12-13 05:52:11.497011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.585 qpair failed and we were unable to recover it. 00:36:11.585 [2024-12-13 05:52:11.497096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.585 [2024-12-13 05:52:11.497112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.585 qpair failed and we were unable to recover it. 00:36:11.585 [2024-12-13 05:52:11.497191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.585 [2024-12-13 05:52:11.497207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.585 qpair failed and we were unable to recover it. 00:36:11.585 [2024-12-13 05:52:11.497363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.585 [2024-12-13 05:52:11.497378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.585 qpair failed and we were unable to recover it. 00:36:11.585 [2024-12-13 05:52:11.497459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.585 [2024-12-13 05:52:11.497476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.585 qpair failed and we were unable to recover it. 00:36:11.585 [2024-12-13 05:52:11.497629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.585 [2024-12-13 05:52:11.497644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.585 qpair failed and we were unable to recover it. 00:36:11.585 [2024-12-13 05:52:11.497788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.585 [2024-12-13 05:52:11.497804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.585 qpair failed and we were unable to recover it. 
00:36:11.586 [2024-12-13 05:52:11.497958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.586 [2024-12-13 05:52:11.497974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.586 qpair failed and we were unable to recover it. 00:36:11.586 [2024-12-13 05:52:11.498058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.586 [2024-12-13 05:52:11.498073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.586 qpair failed and we were unable to recover it. 00:36:11.586 [2024-12-13 05:52:11.498149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.586 [2024-12-13 05:52:11.498164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.586 qpair failed and we were unable to recover it. 00:36:11.586 [2024-12-13 05:52:11.498333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.586 [2024-12-13 05:52:11.498349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.586 qpair failed and we were unable to recover it. 00:36:11.586 [2024-12-13 05:52:11.498430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.586 [2024-12-13 05:52:11.498445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.586 qpair failed and we were unable to recover it. 00:36:11.586 [2024-12-13 05:52:11.498525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.586 [2024-12-13 05:52:11.498541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.586 qpair failed and we were unable to recover it. 00:36:11.586 [2024-12-13 05:52:11.498609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.586 [2024-12-13 05:52:11.498624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.586 qpair failed and we were unable to recover it. 00:36:11.586 [2024-12-13 05:52:11.498695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.586 [2024-12-13 05:52:11.498711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.586 qpair failed and we were unable to recover it. 00:36:11.586 [2024-12-13 05:52:11.498788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.586 [2024-12-13 05:52:11.498806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.586 qpair failed and we were unable to recover it. 00:36:11.586 [2024-12-13 05:52:11.498875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.586 [2024-12-13 05:52:11.498889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.586 qpair failed and we were unable to recover it. 
00:36:11.586 [2024-12-13 05:52:11.499042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.586 [2024-12-13 05:52:11.499058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.586 qpair failed and we were unable to recover it. 00:36:11.586 [2024-12-13 05:52:11.499194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.586 [2024-12-13 05:52:11.499210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.586 qpair failed and we were unable to recover it. 00:36:11.586 [2024-12-13 05:52:11.499291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.586 [2024-12-13 05:52:11.499307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.586 qpair failed and we were unable to recover it. 00:36:11.586 [2024-12-13 05:52:11.499392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.586 [2024-12-13 05:52:11.499407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.586 qpair failed and we were unable to recover it. 00:36:11.586 [2024-12-13 05:52:11.499548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.586 [2024-12-13 05:52:11.499564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.586 qpair failed and we were unable to recover it. 00:36:11.586 [2024-12-13 05:52:11.499664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.586 [2024-12-13 05:52:11.499679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.586 qpair failed and we were unable to recover it. 00:36:11.586 [2024-12-13 05:52:11.499777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.586 [2024-12-13 05:52:11.499792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.586 qpair failed and we were unable to recover it. 00:36:11.586 [2024-12-13 05:52:11.499871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.586 [2024-12-13 05:52:11.499886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.586 qpair failed and we were unable to recover it. 00:36:11.586 [2024-12-13 05:52:11.500032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.586 [2024-12-13 05:52:11.500047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.586 qpair failed and we were unable to recover it. 00:36:11.586 [2024-12-13 05:52:11.500120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.586 [2024-12-13 05:52:11.500135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.586 qpair failed and we were unable to recover it. 
00:36:11.586 [2024-12-13 05:52:11.500290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.586 [2024-12-13 05:52:11.500305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.586 qpair failed and we were unable to recover it. 00:36:11.586 [2024-12-13 05:52:11.500395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.586 [2024-12-13 05:52:11.500410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.586 qpair failed and we were unable to recover it. 00:36:11.586 [2024-12-13 05:52:11.500626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.586 [2024-12-13 05:52:11.500643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.586 qpair failed and we were unable to recover it. 00:36:11.586 [2024-12-13 05:52:11.500797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.586 [2024-12-13 05:52:11.500812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.586 qpair failed and we were unable to recover it. 00:36:11.586 [2024-12-13 05:52:11.500896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.586 [2024-12-13 05:52:11.500912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.586 qpair failed and we were unable to recover it. 00:36:11.586 [2024-12-13 05:52:11.500984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.586 [2024-12-13 05:52:11.500999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.586 qpair failed and we were unable to recover it. 00:36:11.586 [2024-12-13 05:52:11.501095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.586 [2024-12-13 05:52:11.501111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.586 qpair failed and we were unable to recover it. 00:36:11.586 [2024-12-13 05:52:11.501201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.586 [2024-12-13 05:52:11.501216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.586 qpair failed and we were unable to recover it. 00:36:11.586 [2024-12-13 05:52:11.501364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.586 [2024-12-13 05:52:11.501380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.586 qpair failed and we were unable to recover it. 00:36:11.587 [2024-12-13 05:52:11.501476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.587 [2024-12-13 05:52:11.501492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.587 qpair failed and we were unable to recover it. 
00:36:11.587 [2024-12-13 05:52:11.501625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.587 [2024-12-13 05:52:11.501641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.587 qpair failed and we were unable to recover it. 00:36:11.587 [2024-12-13 05:52:11.501847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.587 [2024-12-13 05:52:11.501863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.587 qpair failed and we were unable to recover it. 00:36:11.587 [2024-12-13 05:52:11.501951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.587 [2024-12-13 05:52:11.501967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.587 qpair failed and we were unable to recover it. 00:36:11.587 [2024-12-13 05:52:11.502117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.587 [2024-12-13 05:52:11.502133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.587 qpair failed and we were unable to recover it. 00:36:11.587 [2024-12-13 05:52:11.502230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.587 [2024-12-13 05:52:11.502245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.587 qpair failed and we were unable to recover it. 00:36:11.587 [2024-12-13 05:52:11.502457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.587 [2024-12-13 05:52:11.502473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.587 qpair failed and we were unable to recover it. 00:36:11.587 [2024-12-13 05:52:11.502682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.587 [2024-12-13 05:52:11.502698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.587 qpair failed and we were unable to recover it. 00:36:11.587 [2024-12-13 05:52:11.502791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.587 [2024-12-13 05:52:11.502806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.587 qpair failed and we were unable to recover it. 00:36:11.587 [2024-12-13 05:52:11.502873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.587 [2024-12-13 05:52:11.502888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.587 qpair failed and we were unable to recover it. 00:36:11.587 [2024-12-13 05:52:11.502967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.587 [2024-12-13 05:52:11.502982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.587 qpair failed and we were unable to recover it. 
00:36:11.587 [2024-12-13 05:52:11.503133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.587 [2024-12-13 05:52:11.503149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.587 qpair failed and we were unable to recover it. 00:36:11.587 [2024-12-13 05:52:11.503358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.587 [2024-12-13 05:52:11.503374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.587 qpair failed and we were unable to recover it. 00:36:11.587 [2024-12-13 05:52:11.503442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.587 [2024-12-13 05:52:11.503464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.587 qpair failed and we were unable to recover it. 00:36:11.587 [2024-12-13 05:52:11.503619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.587 [2024-12-13 05:52:11.503634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.587 qpair failed and we were unable to recover it. 00:36:11.587 [2024-12-13 05:52:11.503767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.587 [2024-12-13 05:52:11.503782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.587 qpair failed and we were unable to recover it. 00:36:11.587 [2024-12-13 05:52:11.503919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.587 [2024-12-13 05:52:11.503935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.587 qpair failed and we were unable to recover it. 00:36:11.587 [2024-12-13 05:52:11.504103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.587 [2024-12-13 05:52:11.504119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.587 qpair failed and we were unable to recover it. 00:36:11.587 [2024-12-13 05:52:11.504351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.587 [2024-12-13 05:52:11.504367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.587 qpair failed and we were unable to recover it. 00:36:11.587 [2024-12-13 05:52:11.504464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.587 [2024-12-13 05:52:11.504481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.587 qpair failed and we were unable to recover it. 00:36:11.587 [2024-12-13 05:52:11.504630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.587 [2024-12-13 05:52:11.504646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.587 qpair failed and we were unable to recover it. 
00:36:11.587 [2024-12-13 05:52:11.504741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.587 [2024-12-13 05:52:11.504758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.587 qpair failed and we were unable to recover it. 00:36:11.587 [2024-12-13 05:52:11.504891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.587 [2024-12-13 05:52:11.504906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.587 qpair failed and we were unable to recover it. 00:36:11.587 [2024-12-13 05:52:11.504986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.587 [2024-12-13 05:52:11.505002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.587 qpair failed and we were unable to recover it. 00:36:11.587 [2024-12-13 05:52:11.505137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.587 [2024-12-13 05:52:11.505153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.587 qpair failed and we were unable to recover it. 00:36:11.587 [2024-12-13 05:52:11.505295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.587 [2024-12-13 05:52:11.505311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.587 qpair failed and we were unable to recover it. 00:36:11.587 [2024-12-13 05:52:11.505486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.587 [2024-12-13 05:52:11.505503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.587 qpair failed and we were unable to recover it. 00:36:11.587 [2024-12-13 05:52:11.505588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.587 [2024-12-13 05:52:11.505604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.587 qpair failed and we were unable to recover it. 00:36:11.587 [2024-12-13 05:52:11.505751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.587 [2024-12-13 05:52:11.505768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.587 qpair failed and we were unable to recover it. 00:36:11.587 [2024-12-13 05:52:11.505911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.587 [2024-12-13 05:52:11.505926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.587 qpair failed and we were unable to recover it. 00:36:11.587 [2024-12-13 05:52:11.506011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.587 [2024-12-13 05:52:11.506027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.587 qpair failed and we were unable to recover it. 
00:36:11.587 [2024-12-13 05:52:11.506196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.587 [2024-12-13 05:52:11.506211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.587 qpair failed and we were unable to recover it. 00:36:11.587 [2024-12-13 05:52:11.506294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.587 [2024-12-13 05:52:11.506310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.587 qpair failed and we were unable to recover it. 00:36:11.587 [2024-12-13 05:52:11.506385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.587 [2024-12-13 05:52:11.506400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.587 qpair failed and we were unable to recover it. 00:36:11.587 [2024-12-13 05:52:11.506641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.587 [2024-12-13 05:52:11.506658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.587 qpair failed and we were unable to recover it. 00:36:11.587 [2024-12-13 05:52:11.506799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.587 [2024-12-13 05:52:11.506814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.587 qpair failed and we were unable to recover it. 00:36:11.587 [2024-12-13 05:52:11.507046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.587 [2024-12-13 05:52:11.507063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.587 qpair failed and we were unable to recover it. 00:36:11.587 [2024-12-13 05:52:11.507139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.587 [2024-12-13 05:52:11.507154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.587 qpair failed and we were unable to recover it. 00:36:11.587 [2024-12-13 05:52:11.507300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.587 [2024-12-13 05:52:11.507316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.587 qpair failed and we were unable to recover it. 00:36:11.587 [2024-12-13 05:52:11.507474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.588 [2024-12-13 05:52:11.507491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.588 qpair failed and we were unable to recover it. 00:36:11.588 [2024-12-13 05:52:11.507645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.588 [2024-12-13 05:52:11.507661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.588 qpair failed and we were unable to recover it. 
00:36:11.588 [2024-12-13 05:52:11.507732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.588 [2024-12-13 05:52:11.507750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.588 qpair failed and we were unable to recover it. 00:36:11.588 [2024-12-13 05:52:11.507836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.588 [2024-12-13 05:52:11.507851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.588 qpair failed and we were unable to recover it. 00:36:11.588 [2024-12-13 05:52:11.507955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.588 [2024-12-13 05:52:11.507970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.588 qpair failed and we were unable to recover it. 00:36:11.588 [2024-12-13 05:52:11.508042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.588 [2024-12-13 05:52:11.508058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.588 qpair failed and we were unable to recover it. 00:36:11.588 [2024-12-13 05:52:11.508308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.588 [2024-12-13 05:52:11.508324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.588 qpair failed and we were unable to recover it. 00:36:11.588 [2024-12-13 05:52:11.508512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.588 [2024-12-13 05:52:11.508529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.588 qpair failed and we were unable to recover it. 00:36:11.588 [2024-12-13 05:52:11.508603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.588 [2024-12-13 05:52:11.508621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.588 qpair failed and we were unable to recover it. 00:36:11.588 [2024-12-13 05:52:11.508779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.588 [2024-12-13 05:52:11.508795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.588 qpair failed and we were unable to recover it. 00:36:11.588 [2024-12-13 05:52:11.508948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.588 [2024-12-13 05:52:11.508964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.588 qpair failed and we were unable to recover it. 00:36:11.588 [2024-12-13 05:52:11.509064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.588 [2024-12-13 05:52:11.509080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.588 qpair failed and we were unable to recover it. 
00:36:11.588 [2024-12-13 05:52:11.509283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.588 [2024-12-13 05:52:11.509299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.588 qpair failed and we were unable to recover it. 00:36:11.588 [2024-12-13 05:52:11.509508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.588 [2024-12-13 05:52:11.509526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.588 qpair failed and we were unable to recover it. 00:36:11.588 [2024-12-13 05:52:11.509601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.588 [2024-12-13 05:52:11.509616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.588 qpair failed and we were unable to recover it. 00:36:11.588 [2024-12-13 05:52:11.509773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.588 [2024-12-13 05:52:11.509789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.588 qpair failed and we were unable to recover it. 00:36:11.588 [2024-12-13 05:52:11.509969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.588 [2024-12-13 05:52:11.509984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.588 qpair failed and we were unable to recover it. 00:36:11.588 [2024-12-13 05:52:11.510190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.588 [2024-12-13 05:52:11.510206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.588 qpair failed and we were unable to recover it. 00:36:11.588 [2024-12-13 05:52:11.510312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.588 [2024-12-13 05:52:11.510329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.588 qpair failed and we were unable to recover it. 00:36:11.588 [2024-12-13 05:52:11.510498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.588 [2024-12-13 05:52:11.510514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.588 qpair failed and we were unable to recover it. 00:36:11.588 [2024-12-13 05:52:11.510653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.588 [2024-12-13 05:52:11.510669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.588 qpair failed and we were unable to recover it. 00:36:11.588 [2024-12-13 05:52:11.510755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.588 [2024-12-13 05:52:11.510770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.588 qpair failed and we were unable to recover it. 
00:36:11.588 [2024-12-13 05:52:11.510911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.588 [2024-12-13 05:52:11.510927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.588 qpair failed and we were unable to recover it. 00:36:11.588 [2024-12-13 05:52:11.511067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.588 [2024-12-13 05:52:11.511083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.588 qpair failed and we were unable to recover it. 00:36:11.588 [2024-12-13 05:52:11.511221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.588 [2024-12-13 05:52:11.511237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.588 qpair failed and we were unable to recover it. 00:36:11.588 [2024-12-13 05:52:11.511308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.588 [2024-12-13 05:52:11.511324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.588 qpair failed and we were unable to recover it. 00:36:11.588 [2024-12-13 05:52:11.511395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.588 [2024-12-13 05:52:11.511410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.588 qpair failed and we were unable to recover it. 00:36:11.588 [2024-12-13 05:52:11.511483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.588 [2024-12-13 05:52:11.511498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.588 qpair failed and we were unable to recover it. 00:36:11.588 [2024-12-13 05:52:11.511666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.588 [2024-12-13 05:52:11.511683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.588 qpair failed and we were unable to recover it. 00:36:11.588 [2024-12-13 05:52:11.511777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.588 [2024-12-13 05:52:11.511793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.588 qpair failed and we were unable to recover it. 00:36:11.588 [2024-12-13 05:52:11.511887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.588 [2024-12-13 05:52:11.511904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.588 qpair failed and we were unable to recover it. 00:36:11.588 [2024-12-13 05:52:11.512059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.588 [2024-12-13 05:52:11.512074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.588 qpair failed and we were unable to recover it. 
00:36:11.588 [2024-12-13 05:52:11.512227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.588 [2024-12-13 05:52:11.512242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.588 qpair failed and we were unable to recover it. 00:36:11.588 [2024-12-13 05:52:11.512311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.588 [2024-12-13 05:52:11.512327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.588 qpair failed and we were unable to recover it. 00:36:11.588 [2024-12-13 05:52:11.512434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.588 [2024-12-13 05:52:11.512454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.588 qpair failed and we were unable to recover it. 00:36:11.588 [2024-12-13 05:52:11.512540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.588 [2024-12-13 05:52:11.512559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.588 qpair failed and we were unable to recover it. 00:36:11.588 [2024-12-13 05:52:11.512633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.588 [2024-12-13 05:52:11.512649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.588 qpair failed and we were unable to recover it. 00:36:11.588 [2024-12-13 05:52:11.512801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.588 [2024-12-13 05:52:11.512816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.588 qpair failed and we were unable to recover it. 00:36:11.588 [2024-12-13 05:52:11.512968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.588 [2024-12-13 05:52:11.512984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.588 qpair failed and we were unable to recover it. 00:36:11.588 [2024-12-13 05:52:11.513151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.588 [2024-12-13 05:52:11.513166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.589 qpair failed and we were unable to recover it. 00:36:11.589 [2024-12-13 05:52:11.513317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.589 [2024-12-13 05:52:11.513332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.589 qpair failed and we were unable to recover it. 00:36:11.589 [2024-12-13 05:52:11.513467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.589 [2024-12-13 05:52:11.513484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.589 qpair failed and we were unable to recover it. 
00:36:11.589 [2024-12-13 05:52:11.513578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.589 [2024-12-13 05:52:11.513594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420
00:36:11.589 qpair failed and we were unable to recover it.
[The three messages above repeat, with only the timestamps advancing, for every reconnect attempt from 05:52:11.513578 through 05:52:11.545378 (elapsed 00:36:11.589 to 00:36:11.881); the qpair pointer (0x15c76a0), target address (10.0.0.2), and port (4420) are identical in every repetition. Last occurrence:]
00:36:11.881 [2024-12-13 05:52:11.545362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.881 [2024-12-13 05:52:11.545378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420
00:36:11.881 qpair failed and we were unable to recover it.
00:36:11.881 [2024-12-13 05:52:11.545514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.881 [2024-12-13 05:52:11.545533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.881 qpair failed and we were unable to recover it. 00:36:11.881 [2024-12-13 05:52:11.545632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.881 [2024-12-13 05:52:11.545647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.881 qpair failed and we were unable to recover it. 00:36:11.881 [2024-12-13 05:52:11.545798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.881 [2024-12-13 05:52:11.545813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.881 qpair failed and we were unable to recover it. 00:36:11.881 [2024-12-13 05:52:11.545947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.881 [2024-12-13 05:52:11.545962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.881 qpair failed and we were unable to recover it. 00:36:11.881 [2024-12-13 05:52:11.546036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.881 [2024-12-13 05:52:11.546051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.881 qpair failed and we were unable to recover it. 00:36:11.881 [2024-12-13 05:52:11.546212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.881 [2024-12-13 05:52:11.546228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.881 qpair failed and we were unable to recover it. 00:36:11.881 [2024-12-13 05:52:11.546309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.881 [2024-12-13 05:52:11.546324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.881 qpair failed and we were unable to recover it. 00:36:11.881 [2024-12-13 05:52:11.546414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.881 [2024-12-13 05:52:11.546430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.881 qpair failed and we were unable to recover it. 00:36:11.881 [2024-12-13 05:52:11.546600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.881 [2024-12-13 05:52:11.546617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.881 qpair failed and we were unable to recover it. 00:36:11.881 [2024-12-13 05:52:11.546703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.882 [2024-12-13 05:52:11.546718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.882 qpair failed and we were unable to recover it. 
00:36:11.882 [2024-12-13 05:52:11.546857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.882 [2024-12-13 05:52:11.546872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.882 qpair failed and we were unable to recover it. 00:36:11.882 [2024-12-13 05:52:11.547029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.882 [2024-12-13 05:52:11.547045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.882 qpair failed and we were unable to recover it. 00:36:11.882 [2024-12-13 05:52:11.547144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.882 [2024-12-13 05:52:11.547161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.882 qpair failed and we were unable to recover it. 00:36:11.882 [2024-12-13 05:52:11.547247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.882 [2024-12-13 05:52:11.547262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.882 qpair failed and we were unable to recover it. 00:36:11.882 [2024-12-13 05:52:11.547468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.882 [2024-12-13 05:52:11.547484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.882 qpair failed and we were unable to recover it. 00:36:11.882 [2024-12-13 05:52:11.547637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.882 [2024-12-13 05:52:11.547652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.882 qpair failed and we were unable to recover it. 00:36:11.882 [2024-12-13 05:52:11.547854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.882 [2024-12-13 05:52:11.547870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.882 qpair failed and we were unable to recover it. 00:36:11.882 [2024-12-13 05:52:11.547961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.882 [2024-12-13 05:52:11.547977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.882 qpair failed and we were unable to recover it. 00:36:11.882 [2024-12-13 05:52:11.548113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.882 [2024-12-13 05:52:11.548128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.882 qpair failed and we were unable to recover it. 00:36:11.882 [2024-12-13 05:52:11.548282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.882 [2024-12-13 05:52:11.548298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.882 qpair failed and we were unable to recover it. 
00:36:11.882 [2024-12-13 05:52:11.548470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.882 [2024-12-13 05:52:11.548503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.882 qpair failed and we were unable to recover it. 00:36:11.882 [2024-12-13 05:52:11.548688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.882 [2024-12-13 05:52:11.548721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.882 qpair failed and we were unable to recover it. 00:36:11.882 [2024-12-13 05:52:11.548894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.882 [2024-12-13 05:52:11.548925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.882 qpair failed and we were unable to recover it. 00:36:11.882 [2024-12-13 05:52:11.549050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.882 [2024-12-13 05:52:11.549081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.882 qpair failed and we were unable to recover it. 00:36:11.882 [2024-12-13 05:52:11.549251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.882 [2024-12-13 05:52:11.549283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.882 qpair failed and we were unable to recover it. 00:36:11.882 [2024-12-13 05:52:11.549408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.882 [2024-12-13 05:52:11.549438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.882 qpair failed and we were unable to recover it. 00:36:11.882 [2024-12-13 05:52:11.549624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.882 [2024-12-13 05:52:11.549655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.882 qpair failed and we were unable to recover it. 00:36:11.882 [2024-12-13 05:52:11.549762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.882 [2024-12-13 05:52:11.549781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.882 qpair failed and we were unable to recover it. 00:36:11.882 [2024-12-13 05:52:11.550006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.882 [2024-12-13 05:52:11.550022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.882 qpair failed and we were unable to recover it. 00:36:11.882 [2024-12-13 05:52:11.550116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.882 [2024-12-13 05:52:11.550132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.882 qpair failed and we were unable to recover it. 
00:36:11.882 [2024-12-13 05:52:11.550272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.882 [2024-12-13 05:52:11.550288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.882 qpair failed and we were unable to recover it. 00:36:11.882 [2024-12-13 05:52:11.550440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.882 [2024-12-13 05:52:11.550461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.882 qpair failed and we were unable to recover it. 00:36:11.882 [2024-12-13 05:52:11.550541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.882 [2024-12-13 05:52:11.550557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.882 qpair failed and we were unable to recover it. 00:36:11.882 [2024-12-13 05:52:11.550661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.882 [2024-12-13 05:52:11.550676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.882 qpair failed and we were unable to recover it. 00:36:11.882 [2024-12-13 05:52:11.550822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.882 [2024-12-13 05:52:11.550838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.882 qpair failed and we were unable to recover it. 00:36:11.882 [2024-12-13 05:52:11.551042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.882 [2024-12-13 05:52:11.551058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.882 qpair failed and we were unable to recover it. 00:36:11.882 [2024-12-13 05:52:11.551148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.882 [2024-12-13 05:52:11.551163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.882 qpair failed and we were unable to recover it. 00:36:11.882 [2024-12-13 05:52:11.551334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.882 [2024-12-13 05:52:11.551350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.882 qpair failed and we were unable to recover it. 00:36:11.882 [2024-12-13 05:52:11.551428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.882 [2024-12-13 05:52:11.551442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.882 qpair failed and we were unable to recover it. 00:36:11.882 [2024-12-13 05:52:11.551523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.882 [2024-12-13 05:52:11.551538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.882 qpair failed and we were unable to recover it. 
00:36:11.882 [2024-12-13 05:52:11.551741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.882 [2024-12-13 05:52:11.551756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.882 qpair failed and we were unable to recover it. 00:36:11.882 [2024-12-13 05:52:11.551949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.882 [2024-12-13 05:52:11.551965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.882 qpair failed and we were unable to recover it. 00:36:11.882 [2024-12-13 05:52:11.552044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.882 [2024-12-13 05:52:11.552059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.882 qpair failed and we were unable to recover it. 00:36:11.882 [2024-12-13 05:52:11.552202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.882 [2024-12-13 05:52:11.552217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.882 qpair failed and we were unable to recover it. 00:36:11.882 [2024-12-13 05:52:11.552423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.882 [2024-12-13 05:52:11.552439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.882 qpair failed and we were unable to recover it. 00:36:11.882 [2024-12-13 05:52:11.552590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.882 [2024-12-13 05:52:11.552605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.882 qpair failed and we were unable to recover it. 00:36:11.882 [2024-12-13 05:52:11.552673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.882 [2024-12-13 05:52:11.552688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.882 qpair failed and we were unable to recover it. 00:36:11.882 [2024-12-13 05:52:11.552825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.882 [2024-12-13 05:52:11.552897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.882 qpair failed and we were unable to recover it. 00:36:11.882 [2024-12-13 05:52:11.553098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.882 [2024-12-13 05:52:11.553133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.883 qpair failed and we were unable to recover it. 00:36:11.883 [2024-12-13 05:52:11.553319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.883 [2024-12-13 05:52:11.553352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.883 qpair failed and we were unable to recover it. 
00:36:11.883 [2024-12-13 05:52:11.553590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.883 [2024-12-13 05:52:11.553609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.883 qpair failed and we were unable to recover it. 00:36:11.883 [2024-12-13 05:52:11.553752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.883 [2024-12-13 05:52:11.553768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.883 qpair failed and we were unable to recover it. 00:36:11.883 [2024-12-13 05:52:11.553967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.883 [2024-12-13 05:52:11.553998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.883 qpair failed and we were unable to recover it. 00:36:11.883 [2024-12-13 05:52:11.554183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.883 [2024-12-13 05:52:11.554214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.883 qpair failed and we were unable to recover it. 00:36:11.883 [2024-12-13 05:52:11.554426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.883 [2024-12-13 05:52:11.554493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.883 qpair failed and we were unable to recover it. 00:36:11.883 [2024-12-13 05:52:11.554700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.883 [2024-12-13 05:52:11.554717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.883 qpair failed and we were unable to recover it. 00:36:11.883 [2024-12-13 05:52:11.554863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.883 [2024-12-13 05:52:11.554878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.883 qpair failed and we were unable to recover it. 00:36:11.883 [2024-12-13 05:52:11.555031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.883 [2024-12-13 05:52:11.555047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.883 qpair failed and we were unable to recover it. 00:36:11.883 [2024-12-13 05:52:11.555196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.883 [2024-12-13 05:52:11.555212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.883 qpair failed and we were unable to recover it. 00:36:11.883 [2024-12-13 05:52:11.555352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.883 [2024-12-13 05:52:11.555367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.883 qpair failed and we were unable to recover it. 
00:36:11.883 [2024-12-13 05:52:11.555451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.883 [2024-12-13 05:52:11.555466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.883 qpair failed and we were unable to recover it. 00:36:11.883 [2024-12-13 05:52:11.555652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.883 [2024-12-13 05:52:11.555682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.883 qpair failed and we were unable to recover it. 00:36:11.883 [2024-12-13 05:52:11.555917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.883 [2024-12-13 05:52:11.555948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.883 qpair failed and we were unable to recover it. 00:36:11.883 [2024-12-13 05:52:11.556117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.883 [2024-12-13 05:52:11.556148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.883 qpair failed and we were unable to recover it. 00:36:11.883 [2024-12-13 05:52:11.556381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.883 [2024-12-13 05:52:11.556397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.883 qpair failed and we were unable to recover it. 00:36:11.883 [2024-12-13 05:52:11.556548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.883 [2024-12-13 05:52:11.556564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.883 qpair failed and we were unable to recover it. 00:36:11.883 [2024-12-13 05:52:11.556712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.883 [2024-12-13 05:52:11.556744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.883 qpair failed and we were unable to recover it. 00:36:11.883 [2024-12-13 05:52:11.556916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.883 [2024-12-13 05:52:11.556947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.883 qpair failed and we were unable to recover it. 00:36:11.883 [2024-12-13 05:52:11.557223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.883 [2024-12-13 05:52:11.557259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.883 qpair failed and we were unable to recover it. 00:36:11.883 [2024-12-13 05:52:11.557380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.883 [2024-12-13 05:52:11.557412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.883 qpair failed and we were unable to recover it. 
00:36:11.883 [2024-12-13 05:52:11.557547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.883 [2024-12-13 05:52:11.557581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.883 qpair failed and we were unable to recover it. 00:36:11.883 [2024-12-13 05:52:11.557671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.883 [2024-12-13 05:52:11.557688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.883 qpair failed and we were unable to recover it. 00:36:11.883 [2024-12-13 05:52:11.557854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.883 [2024-12-13 05:52:11.557869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.883 qpair failed and we were unable to recover it. 00:36:11.883 [2024-12-13 05:52:11.558037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.883 [2024-12-13 05:52:11.558072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.883 qpair failed and we were unable to recover it. 00:36:11.883 [2024-12-13 05:52:11.558314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.883 [2024-12-13 05:52:11.558346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.883 qpair failed and we were unable to recover it. 00:36:11.883 [2024-12-13 05:52:11.558526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.883 [2024-12-13 05:52:11.558560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.883 qpair failed and we were unable to recover it. 00:36:11.883 [2024-12-13 05:52:11.558734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.883 [2024-12-13 05:52:11.558749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.883 qpair failed and we were unable to recover it. 00:36:11.883 [2024-12-13 05:52:11.559009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.883 [2024-12-13 05:52:11.559042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.883 qpair failed and we were unable to recover it. 00:36:11.883 [2024-12-13 05:52:11.559227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.883 [2024-12-13 05:52:11.559259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.883 qpair failed and we were unable to recover it. 00:36:11.883 [2024-12-13 05:52:11.559371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.883 [2024-12-13 05:52:11.559402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.883 qpair failed and we were unable to recover it. 
00:36:11.883 [2024-12-13 05:52:11.559530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.883 [2024-12-13 05:52:11.559546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.883 qpair failed and we were unable to recover it. 00:36:11.883 [2024-12-13 05:52:11.559770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.883 [2024-12-13 05:52:11.559785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.883 qpair failed and we were unable to recover it. 00:36:11.884 [2024-12-13 05:52:11.559933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.884 [2024-12-13 05:52:11.559951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.884 qpair failed and we were unable to recover it. 00:36:11.884 [2024-12-13 05:52:11.560102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.884 [2024-12-13 05:52:11.560118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.884 qpair failed and we were unable to recover it. 00:36:11.884 [2024-12-13 05:52:11.560329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.884 [2024-12-13 05:52:11.560361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.884 qpair failed and we were unable to recover it. 00:36:11.884 [2024-12-13 05:52:11.560545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.884 [2024-12-13 05:52:11.560579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.884 qpair failed and we were unable to recover it. 00:36:11.884 [2024-12-13 05:52:11.560777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.884 [2024-12-13 05:52:11.560807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.884 qpair failed and we were unable to recover it. 00:36:11.884 [2024-12-13 05:52:11.561008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.884 [2024-12-13 05:52:11.561039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.884 qpair failed and we were unable to recover it. 00:36:11.884 [2024-12-13 05:52:11.561214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.884 [2024-12-13 05:52:11.561245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.884 qpair failed and we were unable to recover it. 00:36:11.884 [2024-12-13 05:52:11.561418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.884 [2024-12-13 05:52:11.561458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.884 qpair failed and we were unable to recover it. 
00:36:11.884 [2024-12-13 05:52:11.561629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.884 [2024-12-13 05:52:11.561662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.884 qpair failed and we were unable to recover it. 00:36:11.884 [2024-12-13 05:52:11.561835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.884 [2024-12-13 05:52:11.561866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.884 qpair failed and we were unable to recover it. 00:36:11.884 [2024-12-13 05:52:11.562128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.884 [2024-12-13 05:52:11.562159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.884 qpair failed and we were unable to recover it. 00:36:11.884 [2024-12-13 05:52:11.562274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.884 [2024-12-13 05:52:11.562289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.884 qpair failed and we were unable to recover it. 00:36:11.884 [2024-12-13 05:52:11.562438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.884 [2024-12-13 05:52:11.562476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.884 qpair failed and we were unable to recover it. 00:36:11.884 [2024-12-13 05:52:11.562625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.884 [2024-12-13 05:52:11.562667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.884 qpair failed and we were unable to recover it. 00:36:11.884 [2024-12-13 05:52:11.562842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.884 [2024-12-13 05:52:11.562873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.884 qpair failed and we were unable to recover it. 00:36:11.884 [2024-12-13 05:52:11.563066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.884 [2024-12-13 05:52:11.563096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.884 qpair failed and we were unable to recover it. 00:36:11.884 [2024-12-13 05:52:11.563276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.884 [2024-12-13 05:52:11.563291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.884 qpair failed and we were unable to recover it. 00:36:11.884 [2024-12-13 05:52:11.563430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.884 [2024-12-13 05:52:11.563475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.884 qpair failed and we were unable to recover it. 
00:36:11.884 [2024-12-13 05:52:11.563610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.884 [2024-12-13 05:52:11.563640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.884 qpair failed and we were unable to recover it. 00:36:11.884 [2024-12-13 05:52:11.563825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.884 [2024-12-13 05:52:11.563856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.884 qpair failed and we were unable to recover it. 00:36:11.884 [2024-12-13 05:52:11.563987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.884 [2024-12-13 05:52:11.564019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.884 qpair failed and we were unable to recover it. 00:36:11.884 [2024-12-13 05:52:11.564200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.884 [2024-12-13 05:52:11.564232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.884 qpair failed and we were unable to recover it. 00:36:11.884 [2024-12-13 05:52:11.564437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.884 [2024-12-13 05:52:11.564482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.884 qpair failed and we were unable to recover it. 00:36:11.884 [2024-12-13 05:52:11.564677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.884 [2024-12-13 05:52:11.564693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.884 qpair failed and we were unable to recover it. 00:36:11.884 [2024-12-13 05:52:11.564835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.884 [2024-12-13 05:52:11.564850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.884 qpair failed and we were unable to recover it. 00:36:11.884 [2024-12-13 05:52:11.564982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.884 [2024-12-13 05:52:11.564998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.884 qpair failed and we were unable to recover it. 00:36:11.884 [2024-12-13 05:52:11.565090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.884 [2024-12-13 05:52:11.565105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.884 qpair failed and we were unable to recover it. 00:36:11.884 [2024-12-13 05:52:11.565195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.884 [2024-12-13 05:52:11.565210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.884 qpair failed and we were unable to recover it. 
00:36:11.884 [2024-12-13 05:52:11.565433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.884 [2024-12-13 05:52:11.565489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.884 qpair failed and we were unable to recover it. 00:36:11.884 [2024-12-13 05:52:11.565700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.884 [2024-12-13 05:52:11.565733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.884 qpair failed and we were unable to recover it. 00:36:11.884 [2024-12-13 05:52:11.565853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.884 [2024-12-13 05:52:11.565884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.884 qpair failed and we were unable to recover it. 00:36:11.884 [2024-12-13 05:52:11.566138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.884 [2024-12-13 05:52:11.566170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.884 qpair failed and we were unable to recover it. 00:36:11.884 [2024-12-13 05:52:11.566284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.884 [2024-12-13 05:52:11.566300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.884 qpair failed and we were unable to recover it. 00:36:11.884 [2024-12-13 05:52:11.566468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.884 [2024-12-13 05:52:11.566485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.884 qpair failed and we were unable to recover it. 00:36:11.884 [2024-12-13 05:52:11.566566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.884 [2024-12-13 05:52:11.566581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.884 qpair failed and we were unable to recover it. 00:36:11.884 [2024-12-13 05:52:11.566755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.884 [2024-12-13 05:52:11.566771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.884 qpair failed and we were unable to recover it. 00:36:11.884 [2024-12-13 05:52:11.566919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.884 [2024-12-13 05:52:11.566952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.884 qpair failed and we were unable to recover it. 00:36:11.884 [2024-12-13 05:52:11.567134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.884 [2024-12-13 05:52:11.567166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.884 qpair failed and we were unable to recover it. 
00:36:11.884 [2024-12-13 05:52:11.567349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.884 [2024-12-13 05:52:11.567381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.884 qpair failed and we were unable to recover it. 00:36:11.884 [2024-12-13 05:52:11.567500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.885 [2024-12-13 05:52:11.567517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.885 qpair failed and we were unable to recover it. 00:36:11.885 [2024-12-13 05:52:11.567694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.885 [2024-12-13 05:52:11.567712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.885 qpair failed and we were unable to recover it. 00:36:11.885 [2024-12-13 05:52:11.567914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.885 [2024-12-13 05:52:11.567930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.885 qpair failed and we were unable to recover it. 00:36:11.885 [2024-12-13 05:52:11.568104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.885 [2024-12-13 05:52:11.568120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.885 qpair failed and we were unable to recover it. 00:36:11.885 [2024-12-13 05:52:11.568196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.885 [2024-12-13 05:52:11.568211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.885 qpair failed and we were unable to recover it. 00:36:11.885 [2024-12-13 05:52:11.568371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.885 [2024-12-13 05:52:11.568386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.885 qpair failed and we were unable to recover it. 00:36:11.885 [2024-12-13 05:52:11.568460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.885 [2024-12-13 05:52:11.568475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.885 qpair failed and we were unable to recover it. 00:36:11.885 [2024-12-13 05:52:11.568718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.885 [2024-12-13 05:52:11.568749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.885 qpair failed and we were unable to recover it. 00:36:11.885 [2024-12-13 05:52:11.568996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.885 [2024-12-13 05:52:11.569028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.885 qpair failed and we were unable to recover it. 
00:36:11.885 [2024-12-13 05:52:11.569269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.885 [2024-12-13 05:52:11.569301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.885 qpair failed and we were unable to recover it.
[... the same error triple repeats continuously from 05:52:11.569472 through 05:52:11.605474: connect() failed, errno = 111, followed by a sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it."; two occurrences, at 05:52:11.595988 and 05:52:11.596212, report the same connection error for tqpair=0x7f8618000b90 instead ...]
00:36:11.890 [2024-12-13 05:52:11.605594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.890 [2024-12-13 05:52:11.605626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.890 qpair failed and we were unable to recover it. 00:36:11.890 [2024-12-13 05:52:11.605809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.890 [2024-12-13 05:52:11.605825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.890 qpair failed and we were unable to recover it. 00:36:11.890 [2024-12-13 05:52:11.605918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.890 [2024-12-13 05:52:11.605933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.890 qpair failed and we were unable to recover it. 00:36:11.890 [2024-12-13 05:52:11.606091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.890 [2024-12-13 05:52:11.606107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.890 qpair failed and we were unable to recover it. 00:36:11.890 [2024-12-13 05:52:11.606265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.890 [2024-12-13 05:52:11.606280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.890 qpair failed and we were unable to recover it. 00:36:11.890 [2024-12-13 05:52:11.606429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.890 [2024-12-13 05:52:11.606444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.890 qpair failed and we were unable to recover it. 00:36:11.890 [2024-12-13 05:52:11.606583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.890 [2024-12-13 05:52:11.606599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.890 qpair failed and we were unable to recover it. 00:36:11.890 [2024-12-13 05:52:11.606705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.890 [2024-12-13 05:52:11.606721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.890 qpair failed and we were unable to recover it. 00:36:11.890 [2024-12-13 05:52:11.606891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.890 [2024-12-13 05:52:11.606907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.890 qpair failed and we were unable to recover it. 00:36:11.890 [2024-12-13 05:52:11.606990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.890 [2024-12-13 05:52:11.607005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.890 qpair failed and we were unable to recover it. 
00:36:11.890 [2024-12-13 05:52:11.607210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.890 [2024-12-13 05:52:11.607226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.890 qpair failed and we were unable to recover it. 00:36:11.890 [2024-12-13 05:52:11.607327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.890 [2024-12-13 05:52:11.607343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.890 qpair failed and we were unable to recover it. 00:36:11.890 [2024-12-13 05:52:11.607481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.890 [2024-12-13 05:52:11.607497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.890 qpair failed and we were unable to recover it. 00:36:11.890 [2024-12-13 05:52:11.607576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.891 [2024-12-13 05:52:11.607591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.891 qpair failed and we were unable to recover it. 00:36:11.891 [2024-12-13 05:52:11.607816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.891 [2024-12-13 05:52:11.607832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.891 qpair failed and we were unable to recover it. 00:36:11.891 [2024-12-13 05:52:11.607978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.891 [2024-12-13 05:52:11.607995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.891 qpair failed and we were unable to recover it. 00:36:11.891 [2024-12-13 05:52:11.608143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.891 [2024-12-13 05:52:11.608158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.891 qpair failed and we were unable to recover it. 00:36:11.891 [2024-12-13 05:52:11.608262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.891 [2024-12-13 05:52:11.608278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.891 qpair failed and we were unable to recover it. 00:36:11.891 [2024-12-13 05:52:11.608374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.891 [2024-12-13 05:52:11.608389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.891 qpair failed and we were unable to recover it. 00:36:11.891 [2024-12-13 05:52:11.608618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.891 [2024-12-13 05:52:11.608635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.891 qpair failed and we were unable to recover it. 
00:36:11.891 [2024-12-13 05:52:11.608733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.891 [2024-12-13 05:52:11.608749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.891 qpair failed and we were unable to recover it. 00:36:11.891 [2024-12-13 05:52:11.608886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.891 [2024-12-13 05:52:11.608902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.891 qpair failed and we were unable to recover it. 00:36:11.891 [2024-12-13 05:52:11.609005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.891 [2024-12-13 05:52:11.609021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.891 qpair failed and we were unable to recover it. 00:36:11.891 [2024-12-13 05:52:11.609127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.891 [2024-12-13 05:52:11.609142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.891 qpair failed and we were unable to recover it. 00:36:11.891 [2024-12-13 05:52:11.609284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.891 [2024-12-13 05:52:11.609300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.891 qpair failed and we were unable to recover it. 00:36:11.891 [2024-12-13 05:52:11.609459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.891 [2024-12-13 05:52:11.609475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.891 qpair failed and we were unable to recover it. 00:36:11.891 [2024-12-13 05:52:11.609645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.891 [2024-12-13 05:52:11.609663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.891 qpair failed and we were unable to recover it. 00:36:11.891 [2024-12-13 05:52:11.609833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.891 [2024-12-13 05:52:11.609849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.891 qpair failed and we were unable to recover it. 00:36:11.891 [2024-12-13 05:52:11.610000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.891 [2024-12-13 05:52:11.610032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.891 qpair failed and we were unable to recover it. 00:36:11.891 [2024-12-13 05:52:11.610155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.891 [2024-12-13 05:52:11.610186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.891 qpair failed and we were unable to recover it. 
00:36:11.891 [2024-12-13 05:52:11.610497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.891 [2024-12-13 05:52:11.610531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.891 qpair failed and we were unable to recover it. 00:36:11.891 [2024-12-13 05:52:11.610660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.891 [2024-12-13 05:52:11.610692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.891 qpair failed and we were unable to recover it. 00:36:11.891 [2024-12-13 05:52:11.610855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.891 [2024-12-13 05:52:11.610871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.891 qpair failed and we were unable to recover it. 00:36:11.891 [2024-12-13 05:52:11.611021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.891 [2024-12-13 05:52:11.611037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.891 qpair failed and we were unable to recover it. 00:36:11.891 [2024-12-13 05:52:11.611176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.891 [2024-12-13 05:52:11.611192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.891 qpair failed and we were unable to recover it. 00:36:11.891 [2024-12-13 05:52:11.611300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.891 [2024-12-13 05:52:11.611316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.891 qpair failed and we were unable to recover it. 00:36:11.891 [2024-12-13 05:52:11.611495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.891 [2024-12-13 05:52:11.611512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.891 qpair failed and we were unable to recover it. 00:36:11.891 [2024-12-13 05:52:11.611658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.891 [2024-12-13 05:52:11.611674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.891 qpair failed and we were unable to recover it. 00:36:11.891 [2024-12-13 05:52:11.611860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.891 [2024-12-13 05:52:11.611891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.891 qpair failed and we were unable to recover it. 00:36:11.891 [2024-12-13 05:52:11.612094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.891 [2024-12-13 05:52:11.612126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.891 qpair failed and we were unable to recover it. 
00:36:11.891 [2024-12-13 05:52:11.612270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.891 [2024-12-13 05:52:11.612302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.891 qpair failed and we were unable to recover it. 00:36:11.891 [2024-12-13 05:52:11.612561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.891 [2024-12-13 05:52:11.612578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.891 qpair failed and we were unable to recover it. 00:36:11.891 [2024-12-13 05:52:11.612737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.891 [2024-12-13 05:52:11.612753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.891 qpair failed and we were unable to recover it. 00:36:11.891 [2024-12-13 05:52:11.612907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.891 [2024-12-13 05:52:11.612923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.891 qpair failed and we were unable to recover it. 00:36:11.891 [2024-12-13 05:52:11.613125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.891 [2024-12-13 05:52:11.613141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.891 qpair failed and we were unable to recover it. 00:36:11.891 [2024-12-13 05:52:11.613303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.891 [2024-12-13 05:52:11.613335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.891 qpair failed and we were unable to recover it. 00:36:11.891 [2024-12-13 05:52:11.613585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.891 [2024-12-13 05:52:11.613618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.891 qpair failed and we were unable to recover it. 00:36:11.891 [2024-12-13 05:52:11.613738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.891 [2024-12-13 05:52:11.613769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.891 qpair failed and we were unable to recover it. 00:36:11.891 [2024-12-13 05:52:11.613960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.891 [2024-12-13 05:52:11.613976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.891 qpair failed and we were unable to recover it. 00:36:11.891 [2024-12-13 05:52:11.614070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.891 [2024-12-13 05:52:11.614086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.891 qpair failed and we were unable to recover it. 
00:36:11.891 [2024-12-13 05:52:11.614265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.891 [2024-12-13 05:52:11.614281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.891 qpair failed and we were unable to recover it. 00:36:11.891 [2024-12-13 05:52:11.614506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.891 [2024-12-13 05:52:11.614522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.891 qpair failed and we were unable to recover it. 00:36:11.891 [2024-12-13 05:52:11.614673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.891 [2024-12-13 05:52:11.614689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.891 qpair failed and we were unable to recover it. 00:36:11.892 [2024-12-13 05:52:11.614779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.892 [2024-12-13 05:52:11.614795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.892 qpair failed and we were unable to recover it. 00:36:11.892 [2024-12-13 05:52:11.615025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.892 [2024-12-13 05:52:11.615042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.892 qpair failed and we were unable to recover it. 00:36:11.892 [2024-12-13 05:52:11.615116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.892 [2024-12-13 05:52:11.615131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.892 qpair failed and we were unable to recover it. 00:36:11.892 [2024-12-13 05:52:11.615215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.892 [2024-12-13 05:52:11.615231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.892 qpair failed and we were unable to recover it. 00:36:11.892 [2024-12-13 05:52:11.615402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.892 [2024-12-13 05:52:11.615418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.892 qpair failed and we were unable to recover it. 00:36:11.892 [2024-12-13 05:52:11.615500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.892 [2024-12-13 05:52:11.615516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.892 qpair failed and we were unable to recover it. 00:36:11.892 [2024-12-13 05:52:11.615604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.892 [2024-12-13 05:52:11.615620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.892 qpair failed and we were unable to recover it. 
00:36:11.892 [2024-12-13 05:52:11.615850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.892 [2024-12-13 05:52:11.615866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.892 qpair failed and we were unable to recover it. 00:36:11.892 [2024-12-13 05:52:11.616036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.892 [2024-12-13 05:52:11.616053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.892 qpair failed and we were unable to recover it. 00:36:11.892 [2024-12-13 05:52:11.616147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.892 [2024-12-13 05:52:11.616162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.892 qpair failed and we were unable to recover it. 00:36:11.892 [2024-12-13 05:52:11.616316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.892 [2024-12-13 05:52:11.616332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.892 qpair failed and we were unable to recover it. 00:36:11.892 [2024-12-13 05:52:11.616560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.892 [2024-12-13 05:52:11.616577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.892 qpair failed and we were unable to recover it. 00:36:11.892 [2024-12-13 05:52:11.616738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.892 [2024-12-13 05:52:11.616754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.892 qpair failed and we were unable to recover it. 00:36:11.892 [2024-12-13 05:52:11.616889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.892 [2024-12-13 05:52:11.616905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.892 qpair failed and we were unable to recover it. 00:36:11.892 [2024-12-13 05:52:11.616984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.892 [2024-12-13 05:52:11.616999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.892 qpair failed and we were unable to recover it. 00:36:11.892 [2024-12-13 05:52:11.617166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.892 [2024-12-13 05:52:11.617182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.892 qpair failed and we were unable to recover it. 00:36:11.892 [2024-12-13 05:52:11.617285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.892 [2024-12-13 05:52:11.617301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.892 qpair failed and we were unable to recover it. 
00:36:11.892 [2024-12-13 05:52:11.617480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.892 [2024-12-13 05:52:11.617497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.892 qpair failed and we were unable to recover it. 00:36:11.892 [2024-12-13 05:52:11.617749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.892 [2024-12-13 05:52:11.617765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.892 qpair failed and we were unable to recover it. 00:36:11.892 [2024-12-13 05:52:11.617850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.892 [2024-12-13 05:52:11.617866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.892 qpair failed and we were unable to recover it. 00:36:11.892 [2024-12-13 05:52:11.617972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.892 [2024-12-13 05:52:11.617987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.892 qpair failed and we were unable to recover it. 00:36:11.892 [2024-12-13 05:52:11.618066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.892 [2024-12-13 05:52:11.618081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.892 qpair failed and we were unable to recover it. 00:36:11.892 [2024-12-13 05:52:11.618240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.892 [2024-12-13 05:52:11.618255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.892 qpair failed and we were unable to recover it. 00:36:11.892 [2024-12-13 05:52:11.618324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.892 [2024-12-13 05:52:11.618339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.892 qpair failed and we were unable to recover it. 00:36:11.892 [2024-12-13 05:52:11.618491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.892 [2024-12-13 05:52:11.618507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.892 qpair failed and we were unable to recover it. 00:36:11.892 [2024-12-13 05:52:11.618605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.892 [2024-12-13 05:52:11.618620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.892 qpair failed and we were unable to recover it. 00:36:11.892 [2024-12-13 05:52:11.618699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.892 [2024-12-13 05:52:11.618713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.892 qpair failed and we were unable to recover it. 
00:36:11.892 [2024-12-13 05:52:11.618793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.892 [2024-12-13 05:52:11.618809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.892 qpair failed and we were unable to recover it. 00:36:11.892 [2024-12-13 05:52:11.618987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.892 [2024-12-13 05:52:11.619002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.892 qpair failed and we were unable to recover it. 00:36:11.892 [2024-12-13 05:52:11.619142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.892 [2024-12-13 05:52:11.619157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.892 qpair failed and we were unable to recover it. 00:36:11.892 [2024-12-13 05:52:11.619304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.892 [2024-12-13 05:52:11.619320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.892 qpair failed and we were unable to recover it. 00:36:11.892 [2024-12-13 05:52:11.619415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.892 [2024-12-13 05:52:11.619432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.892 qpair failed and we were unable to recover it. 00:36:11.892 [2024-12-13 05:52:11.619649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.892 [2024-12-13 05:52:11.619719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.892 qpair failed and we were unable to recover it. 00:36:11.892 [2024-12-13 05:52:11.619845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.892 [2024-12-13 05:52:11.619881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.892 qpair failed and we were unable to recover it. 00:36:11.892 [2024-12-13 05:52:11.619997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.892 [2024-12-13 05:52:11.620030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.892 qpair failed and we were unable to recover it. 00:36:11.893 [2024-12-13 05:52:11.620278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.893 [2024-12-13 05:52:11.620296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.893 qpair failed and we were unable to recover it. 00:36:11.893 [2024-12-13 05:52:11.620403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.893 [2024-12-13 05:52:11.620419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.893 qpair failed and we were unable to recover it. 
00:36:11.893 [2024-12-13 05:52:11.620651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.893 [2024-12-13 05:52:11.620668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.893 qpair failed and we were unable to recover it. 00:36:11.893 [2024-12-13 05:52:11.620768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.893 [2024-12-13 05:52:11.620784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.893 qpair failed and we were unable to recover it. 00:36:11.893 [2024-12-13 05:52:11.620918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.893 [2024-12-13 05:52:11.620934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.893 qpair failed and we were unable to recover it. 00:36:11.893 [2024-12-13 05:52:11.621110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.893 [2024-12-13 05:52:11.621125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.893 qpair failed and we were unable to recover it. 00:36:11.893 [2024-12-13 05:52:11.621267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.893 [2024-12-13 05:52:11.621283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.893 qpair failed and we were unable to recover it. 00:36:11.893 [2024-12-13 05:52:11.621437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.893 [2024-12-13 05:52:11.621459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.893 qpair failed and we were unable to recover it. 00:36:11.893 [2024-12-13 05:52:11.621657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.893 [2024-12-13 05:52:11.621673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.893 qpair failed and we were unable to recover it. 00:36:11.893 [2024-12-13 05:52:11.621808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.893 [2024-12-13 05:52:11.621823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.893 qpair failed and we were unable to recover it. 00:36:11.893 [2024-12-13 05:52:11.621914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.893 [2024-12-13 05:52:11.621930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.893 qpair failed and we were unable to recover it. 00:36:11.893 [2024-12-13 05:52:11.622022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.893 [2024-12-13 05:52:11.622038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.893 qpair failed and we were unable to recover it. 
00:36:11.893 [2024-12-13 05:52:11.622193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.893 [2024-12-13 05:52:11.622209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.893 qpair failed and we were unable to recover it. 00:36:11.893 [2024-12-13 05:52:11.622355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.893 [2024-12-13 05:52:11.622370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.893 qpair failed and we were unable to recover it. 00:36:11.893 [2024-12-13 05:52:11.622514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.893 [2024-12-13 05:52:11.622531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.893 qpair failed and we were unable to recover it. 00:36:11.893 [2024-12-13 05:52:11.622601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.893 [2024-12-13 05:52:11.622615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.893 qpair failed and we were unable to recover it. 00:36:11.893 [2024-12-13 05:52:11.622766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.893 [2024-12-13 05:52:11.622782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.893 qpair failed and we were unable to recover it. 00:36:11.893 [2024-12-13 05:52:11.622919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.893 [2024-12-13 05:52:11.622935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.893 qpair failed and we were unable to recover it. 00:36:11.893 [2024-12-13 05:52:11.623012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.893 [2024-12-13 05:52:11.623026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.893 qpair failed and we were unable to recover it. 00:36:11.893 [2024-12-13 05:52:11.623097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.893 [2024-12-13 05:52:11.623112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.893 qpair failed and we were unable to recover it. 00:36:11.893 [2024-12-13 05:52:11.623289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.893 [2024-12-13 05:52:11.623305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.893 qpair failed and we were unable to recover it. 00:36:11.893 [2024-12-13 05:52:11.623370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.893 [2024-12-13 05:52:11.623385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.893 qpair failed and we were unable to recover it. 
00:36:11.893 [2024-12-13 05:52:11.623464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.893 [2024-12-13 05:52:11.623479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.893 qpair failed and we were unable to recover it. 00:36:11.893 [2024-12-13 05:52:11.623635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.893 [2024-12-13 05:52:11.623652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.893 qpair failed and we were unable to recover it. 00:36:11.893 [2024-12-13 05:52:11.623726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.893 [2024-12-13 05:52:11.623740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.893 qpair failed and we were unable to recover it. 00:36:11.893 [2024-12-13 05:52:11.623892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.893 [2024-12-13 05:52:11.623908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.893 qpair failed and we were unable to recover it. 00:36:11.893 [2024-12-13 05:52:11.623988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.893 [2024-12-13 05:52:11.624005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.893 qpair failed and we were unable to recover it. 00:36:11.893 [2024-12-13 05:52:11.624176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.893 [2024-12-13 05:52:11.624192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.893 qpair failed and we were unable to recover it. 00:36:11.893 [2024-12-13 05:52:11.624337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.893 [2024-12-13 05:52:11.624353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.893 qpair failed and we were unable to recover it. 00:36:11.893 [2024-12-13 05:52:11.624435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.893 [2024-12-13 05:52:11.624455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.893 qpair failed and we were unable to recover it. 00:36:11.893 [2024-12-13 05:52:11.624664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.893 [2024-12-13 05:52:11.624679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.893 qpair failed and we were unable to recover it. 00:36:11.893 [2024-12-13 05:52:11.624754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.893 [2024-12-13 05:52:11.624769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.893 qpair failed and we were unable to recover it. 
00:36:11.893 [2024-12-13 05:52:11.624907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.893 [2024-12-13 05:52:11.624923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.893 qpair failed and we were unable to recover it. 00:36:11.893 [2024-12-13 05:52:11.625073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.893 [2024-12-13 05:52:11.625092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.893 qpair failed and we were unable to recover it. 00:36:11.893 [2024-12-13 05:52:11.625231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.893 [2024-12-13 05:52:11.625247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.893 qpair failed and we were unable to recover it. 00:36:11.893 [2024-12-13 05:52:11.625482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.893 [2024-12-13 05:52:11.625498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.893 qpair failed and we were unable to recover it. 00:36:11.893 [2024-12-13 05:52:11.625570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.893 [2024-12-13 05:52:11.625585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.893 qpair failed and we were unable to recover it. 00:36:11.893 [2024-12-13 05:52:11.625677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.893 [2024-12-13 05:52:11.625692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.893 qpair failed and we were unable to recover it. 00:36:11.893 [2024-12-13 05:52:11.625851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.893 [2024-12-13 05:52:11.625867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.893 qpair failed and we were unable to recover it. 00:36:11.894 [2024-12-13 05:52:11.626013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.894 [2024-12-13 05:52:11.626029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.894 qpair failed and we were unable to recover it. 00:36:11.894 [2024-12-13 05:52:11.626116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.894 [2024-12-13 05:52:11.626132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.894 qpair failed and we were unable to recover it. 00:36:11.894 [2024-12-13 05:52:11.626213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.894 [2024-12-13 05:52:11.626228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.894 qpair failed and we were unable to recover it. 
00:36:11.894 [2024-12-13 05:52:11.626392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.894 [2024-12-13 05:52:11.626407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.894 qpair failed and we were unable to recover it.
[... the same three-message failure sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats verbatim for every reconnect attempt from 05:52:11.626 through 05:52:11.672; only the timestamps differ ...]
00:36:11.899 [2024-12-13 05:52:11.672487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.899 [2024-12-13 05:52:11.672522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.899 qpair failed and we were unable to recover it.
00:36:11.899 [2024-12-13 05:52:11.672704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.899 [2024-12-13 05:52:11.672734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.899 qpair failed and we were unable to recover it. 00:36:11.899 [2024-12-13 05:52:11.672859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.899 [2024-12-13 05:52:11.672889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.899 qpair failed and we were unable to recover it. 00:36:11.899 [2024-12-13 05:52:11.673093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.899 [2024-12-13 05:52:11.673125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.899 qpair failed and we were unable to recover it. 00:36:11.899 [2024-12-13 05:52:11.673292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.899 [2024-12-13 05:52:11.673323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.899 qpair failed and we were unable to recover it. 00:36:11.899 [2024-12-13 05:52:11.673491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.899 [2024-12-13 05:52:11.673522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.899 qpair failed and we were unable to recover it. 00:36:11.899 [2024-12-13 05:52:11.673703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.899 [2024-12-13 05:52:11.673735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.899 qpair failed and we were unable to recover it. 00:36:11.899 [2024-12-13 05:52:11.673943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.899 [2024-12-13 05:52:11.673976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.899 qpair failed and we were unable to recover it. 00:36:11.899 [2024-12-13 05:52:11.674190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.899 [2024-12-13 05:52:11.674222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.899 qpair failed and we were unable to recover it. 00:36:11.899 [2024-12-13 05:52:11.674328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.899 [2024-12-13 05:52:11.674358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.899 qpair failed and we were unable to recover it. 00:36:11.899 [2024-12-13 05:52:11.674476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.899 [2024-12-13 05:52:11.674508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.899 qpair failed and we were unable to recover it. 
00:36:11.899 [2024-12-13 05:52:11.674725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.899 [2024-12-13 05:52:11.674757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.899 qpair failed and we were unable to recover it. 00:36:11.899 [2024-12-13 05:52:11.674945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.899 [2024-12-13 05:52:11.674977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.899 qpair failed and we were unable to recover it. 00:36:11.899 [2024-12-13 05:52:11.675112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.899 [2024-12-13 05:52:11.675145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.899 qpair failed and we were unable to recover it. 00:36:11.899 [2024-12-13 05:52:11.675381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.899 [2024-12-13 05:52:11.675413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.899 qpair failed and we were unable to recover it. 00:36:11.899 [2024-12-13 05:52:11.675658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.899 [2024-12-13 05:52:11.675690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.899 qpair failed and we were unable to recover it. 00:36:11.899 [2024-12-13 05:52:11.675859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.899 [2024-12-13 05:52:11.675890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.899 qpair failed and we were unable to recover it. 00:36:11.899 [2024-12-13 05:52:11.676149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.899 [2024-12-13 05:52:11.676181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.899 qpair failed and we were unable to recover it. 00:36:11.899 [2024-12-13 05:52:11.676383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.899 [2024-12-13 05:52:11.676413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.899 qpair failed and we were unable to recover it. 00:36:11.899 [2024-12-13 05:52:11.676684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.899 [2024-12-13 05:52:11.676718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.899 qpair failed and we were unable to recover it. 00:36:11.899 [2024-12-13 05:52:11.676969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.900 [2024-12-13 05:52:11.677000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.900 qpair failed and we were unable to recover it. 
00:36:11.900 [2024-12-13 05:52:11.677173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.900 [2024-12-13 05:52:11.677203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.900 qpair failed and we were unable to recover it. 00:36:11.900 [2024-12-13 05:52:11.677399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.900 [2024-12-13 05:52:11.677430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.900 qpair failed and we were unable to recover it. 00:36:11.900 [2024-12-13 05:52:11.677624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.900 [2024-12-13 05:52:11.677656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.900 qpair failed and we were unable to recover it. 00:36:11.900 [2024-12-13 05:52:11.677837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.900 [2024-12-13 05:52:11.677869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.900 qpair failed and we were unable to recover it. 00:36:11.900 [2024-12-13 05:52:11.678096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.900 [2024-12-13 05:52:11.678128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.900 qpair failed and we were unable to recover it. 00:36:11.900 [2024-12-13 05:52:11.678312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.900 [2024-12-13 05:52:11.678348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.900 qpair failed and we were unable to recover it. 00:36:11.900 [2024-12-13 05:52:11.678530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.900 [2024-12-13 05:52:11.678564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.900 qpair failed and we were unable to recover it. 00:36:11.900 [2024-12-13 05:52:11.678766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.900 [2024-12-13 05:52:11.678799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.900 qpair failed and we were unable to recover it. 00:36:11.900 [2024-12-13 05:52:11.678919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.900 [2024-12-13 05:52:11.678949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.900 qpair failed and we were unable to recover it. 00:36:11.900 [2024-12-13 05:52:11.679075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.900 [2024-12-13 05:52:11.679106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.900 qpair failed and we were unable to recover it. 
00:36:11.900 [2024-12-13 05:52:11.679296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.900 [2024-12-13 05:52:11.679328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.900 qpair failed and we were unable to recover it. 00:36:11.900 [2024-12-13 05:52:11.679540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.900 [2024-12-13 05:52:11.679574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.900 qpair failed and we were unable to recover it. 00:36:11.900 [2024-12-13 05:52:11.679853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.900 [2024-12-13 05:52:11.679886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.900 qpair failed and we were unable to recover it. 00:36:11.900 [2024-12-13 05:52:11.680170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.900 [2024-12-13 05:52:11.680202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.900 qpair failed and we were unable to recover it. 00:36:11.900 [2024-12-13 05:52:11.680388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.900 [2024-12-13 05:52:11.680419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.900 qpair failed and we were unable to recover it. 00:36:11.900 [2024-12-13 05:52:11.680658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.900 [2024-12-13 05:52:11.680692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.900 qpair failed and we were unable to recover it. 00:36:11.900 [2024-12-13 05:52:11.680863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.900 [2024-12-13 05:52:11.680895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.900 qpair failed and we were unable to recover it. 00:36:11.900 [2024-12-13 05:52:11.681020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.900 [2024-12-13 05:52:11.681052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.900 qpair failed and we were unable to recover it. 00:36:11.900 [2024-12-13 05:52:11.681290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.900 [2024-12-13 05:52:11.681322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.900 qpair failed and we were unable to recover it. 00:36:11.900 [2024-12-13 05:52:11.681509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.900 [2024-12-13 05:52:11.681541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.900 qpair failed and we were unable to recover it. 
00:36:11.900 [2024-12-13 05:52:11.681812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.900 [2024-12-13 05:52:11.681843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.900 qpair failed and we were unable to recover it. 00:36:11.900 [2024-12-13 05:52:11.682089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.900 [2024-12-13 05:52:11.682121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.900 qpair failed and we were unable to recover it. 00:36:11.900 [2024-12-13 05:52:11.682242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.900 [2024-12-13 05:52:11.682273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.900 qpair failed and we were unable to recover it. 00:36:11.900 [2024-12-13 05:52:11.682374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.900 [2024-12-13 05:52:11.682406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.900 qpair failed and we were unable to recover it. 00:36:11.900 [2024-12-13 05:52:11.682596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.900 [2024-12-13 05:52:11.682630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.900 qpair failed and we were unable to recover it. 00:36:11.900 [2024-12-13 05:52:11.682755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.900 [2024-12-13 05:52:11.682787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.900 qpair failed and we were unable to recover it. 00:36:11.900 [2024-12-13 05:52:11.682964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.900 [2024-12-13 05:52:11.682996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.900 qpair failed and we were unable to recover it. 00:36:11.900 [2024-12-13 05:52:11.683114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.900 [2024-12-13 05:52:11.683146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.900 qpair failed and we were unable to recover it. 00:36:11.900 [2024-12-13 05:52:11.683405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.900 [2024-12-13 05:52:11.683437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.900 qpair failed and we were unable to recover it. 00:36:11.900 [2024-12-13 05:52:11.683618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.900 [2024-12-13 05:52:11.683650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.900 qpair failed and we were unable to recover it. 
00:36:11.900 [2024-12-13 05:52:11.683819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.900 [2024-12-13 05:52:11.683849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.900 qpair failed and we were unable to recover it. 00:36:11.900 [2024-12-13 05:52:11.684037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.900 [2024-12-13 05:52:11.684069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.900 qpair failed and we were unable to recover it. 00:36:11.900 [2024-12-13 05:52:11.684183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.900 [2024-12-13 05:52:11.684214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.900 qpair failed and we were unable to recover it. 00:36:11.900 [2024-12-13 05:52:11.684351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.900 [2024-12-13 05:52:11.684382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.900 qpair failed and we were unable to recover it. 00:36:11.900 [2024-12-13 05:52:11.684631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.900 [2024-12-13 05:52:11.684665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.900 qpair failed and we were unable to recover it. 00:36:11.900 [2024-12-13 05:52:11.684924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.901 [2024-12-13 05:52:11.684956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.901 qpair failed and we were unable to recover it. 00:36:11.901 [2024-12-13 05:52:11.685213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.901 [2024-12-13 05:52:11.685246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.901 qpair failed and we were unable to recover it. 00:36:11.901 [2024-12-13 05:52:11.685420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.901 [2024-12-13 05:52:11.685459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.901 qpair failed and we were unable to recover it. 00:36:11.901 [2024-12-13 05:52:11.685697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.901 [2024-12-13 05:52:11.685729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.901 qpair failed and we were unable to recover it. 00:36:11.901 [2024-12-13 05:52:11.685901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.901 [2024-12-13 05:52:11.685933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.901 qpair failed and we were unable to recover it. 
00:36:11.901 [2024-12-13 05:52:11.686152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.901 [2024-12-13 05:52:11.686184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.901 qpair failed and we were unable to recover it. 00:36:11.901 [2024-12-13 05:52:11.686457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.901 [2024-12-13 05:52:11.686489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.901 qpair failed and we were unable to recover it. 00:36:11.901 [2024-12-13 05:52:11.686720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.901 [2024-12-13 05:52:11.686752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.901 qpair failed and we were unable to recover it. 00:36:11.901 [2024-12-13 05:52:11.686940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.901 [2024-12-13 05:52:11.686972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.901 qpair failed and we were unable to recover it. 00:36:11.901 [2024-12-13 05:52:11.687178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.901 [2024-12-13 05:52:11.687210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.901 qpair failed and we were unable to recover it. 00:36:11.901 [2024-12-13 05:52:11.687434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.901 [2024-12-13 05:52:11.687474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.901 qpair failed and we were unable to recover it. 00:36:11.901 [2024-12-13 05:52:11.687651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.901 [2024-12-13 05:52:11.687683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.901 qpair failed and we were unable to recover it. 00:36:11.901 [2024-12-13 05:52:11.687920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.901 [2024-12-13 05:52:11.687952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.901 qpair failed and we were unable to recover it. 00:36:11.901 [2024-12-13 05:52:11.688133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.901 [2024-12-13 05:52:11.688165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.901 qpair failed and we were unable to recover it. 00:36:11.901 [2024-12-13 05:52:11.688417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.901 [2024-12-13 05:52:11.688468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.901 qpair failed and we were unable to recover it. 
00:36:11.901 [2024-12-13 05:52:11.688657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.901 [2024-12-13 05:52:11.688688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.901 qpair failed and we were unable to recover it. 00:36:11.901 [2024-12-13 05:52:11.688810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.901 [2024-12-13 05:52:11.688841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.901 qpair failed and we were unable to recover it. 00:36:11.901 [2024-12-13 05:52:11.688973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.901 [2024-12-13 05:52:11.689006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.901 qpair failed and we were unable to recover it. 00:36:11.901 [2024-12-13 05:52:11.689174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.901 [2024-12-13 05:52:11.689205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.901 qpair failed and we were unable to recover it. 00:36:11.901 [2024-12-13 05:52:11.689402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.901 [2024-12-13 05:52:11.689433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.901 qpair failed and we were unable to recover it. 00:36:11.901 [2024-12-13 05:52:11.689685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.901 [2024-12-13 05:52:11.689716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.901 qpair failed and we were unable to recover it. 00:36:11.901 [2024-12-13 05:52:11.689849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.901 [2024-12-13 05:52:11.689881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.901 qpair failed and we were unable to recover it. 00:36:11.901 [2024-12-13 05:52:11.690164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.901 [2024-12-13 05:52:11.690195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.901 qpair failed and we were unable to recover it. 00:36:11.901 [2024-12-13 05:52:11.690409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.901 [2024-12-13 05:52:11.690441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.901 qpair failed and we were unable to recover it. 00:36:11.901 [2024-12-13 05:52:11.690581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.901 [2024-12-13 05:52:11.690613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.901 qpair failed and we were unable to recover it. 
00:36:11.901 [2024-12-13 05:52:11.690791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.901 [2024-12-13 05:52:11.690823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.901 qpair failed and we were unable to recover it. 00:36:11.901 [2024-12-13 05:52:11.691003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.901 [2024-12-13 05:52:11.691033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.901 qpair failed and we were unable to recover it. 00:36:11.901 [2024-12-13 05:52:11.691203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.901 [2024-12-13 05:52:11.691235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.901 qpair failed and we were unable to recover it. 00:36:11.901 [2024-12-13 05:52:11.691410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.901 [2024-12-13 05:52:11.691440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.901 qpair failed and we were unable to recover it. 00:36:11.901 [2024-12-13 05:52:11.691572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.901 [2024-12-13 05:52:11.691603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.901 qpair failed and we were unable to recover it. 00:36:11.901 [2024-12-13 05:52:11.691839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.901 [2024-12-13 05:52:11.691870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.901 qpair failed and we were unable to recover it. 00:36:11.901 [2024-12-13 05:52:11.691990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.901 [2024-12-13 05:52:11.692023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.901 qpair failed and we were unable to recover it. 00:36:11.901 [2024-12-13 05:52:11.692196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.901 [2024-12-13 05:52:11.692228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.901 qpair failed and we were unable to recover it. 00:36:11.901 [2024-12-13 05:52:11.692412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.901 [2024-12-13 05:52:11.692444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.901 qpair failed and we were unable to recover it. 00:36:11.901 [2024-12-13 05:52:11.692638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.901 [2024-12-13 05:52:11.692670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.901 qpair failed and we were unable to recover it. 
00:36:11.901 [2024-12-13 05:52:11.692906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.901 [2024-12-13 05:52:11.692939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.901 qpair failed and we were unable to recover it. 00:36:11.901 [2024-12-13 05:52:11.693126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.901 [2024-12-13 05:52:11.693157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.901 qpair failed and we were unable to recover it. 00:36:11.901 [2024-12-13 05:52:11.693433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.901 [2024-12-13 05:52:11.693475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.901 qpair failed and we were unable to recover it. 00:36:11.901 [2024-12-13 05:52:11.693679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.901 [2024-12-13 05:52:11.693716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.901 qpair failed and we were unable to recover it. 00:36:11.901 [2024-12-13 05:52:11.693953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.902 [2024-12-13 05:52:11.693985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.902 qpair failed and we were unable to recover it. 00:36:11.902 [2024-12-13 05:52:11.694111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.902 [2024-12-13 05:52:11.694143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.902 qpair failed and we were unable to recover it. 00:36:11.902 [2024-12-13 05:52:11.694328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.902 [2024-12-13 05:52:11.694360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.902 qpair failed and we were unable to recover it. 00:36:11.902 [2024-12-13 05:52:11.694530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.902 [2024-12-13 05:52:11.694562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.902 qpair failed and we were unable to recover it. 00:36:11.902 [2024-12-13 05:52:11.694802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.902 [2024-12-13 05:52:11.694833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.902 qpair failed and we were unable to recover it. 00:36:11.902 [2024-12-13 05:52:11.695069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.902 [2024-12-13 05:52:11.695102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.902 qpair failed and we were unable to recover it. 
00:36:11.902 [2024-12-13 05:52:11.695308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.902 [2024-12-13 05:52:11.695339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.902 qpair failed and we were unable to recover it. 00:36:11.902 [2024-12-13 05:52:11.695476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.902 [2024-12-13 05:52:11.695516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.902 qpair failed and we were unable to recover it. 00:36:11.902 [2024-12-13 05:52:11.695693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.902 [2024-12-13 05:52:11.695726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.902 qpair failed and we were unable to recover it. 00:36:11.902 [2024-12-13 05:52:11.695918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.902 [2024-12-13 05:52:11.695950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.902 qpair failed and we were unable to recover it. 00:36:11.902 [2024-12-13 05:52:11.696118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.902 [2024-12-13 05:52:11.696151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.902 qpair failed and we were unable to recover it. 00:36:11.902 [2024-12-13 05:52:11.696416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.902 [2024-12-13 05:52:11.696454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.902 qpair failed and we were unable to recover it. 00:36:11.902 [2024-12-13 05:52:11.696661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.902 [2024-12-13 05:52:11.696694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.902 qpair failed and we were unable to recover it. 00:36:11.902 [2024-12-13 05:52:11.696835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.902 [2024-12-13 05:52:11.696866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.902 qpair failed and we were unable to recover it. 00:36:11.902 [2024-12-13 05:52:11.697058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.902 [2024-12-13 05:52:11.697089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.902 qpair failed and we were unable to recover it. 00:36:11.902 [2024-12-13 05:52:11.697280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.902 [2024-12-13 05:52:11.697311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.902 qpair failed and we were unable to recover it. 
00:36:11.902 [2024-12-13 05:52:11.697571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.902 [2024-12-13 05:52:11.697604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.902 qpair failed and we were unable to recover it. 00:36:11.902 [2024-12-13 05:52:11.697728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.902 [2024-12-13 05:52:11.697761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.902 qpair failed and we were unable to recover it. 00:36:11.902 [2024-12-13 05:52:11.697943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.902 [2024-12-13 05:52:11.697975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.902 qpair failed and we were unable to recover it. 00:36:11.902 [2024-12-13 05:52:11.698156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.902 [2024-12-13 05:52:11.698186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.902 qpair failed and we were unable to recover it. 00:36:11.902 [2024-12-13 05:52:11.698437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.902 [2024-12-13 05:52:11.698478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.902 qpair failed and we were unable to recover it. 00:36:11.902 [2024-12-13 05:52:11.698617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.902 [2024-12-13 05:52:11.698648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.902 qpair failed and we were unable to recover it. 00:36:11.902 [2024-12-13 05:52:11.698931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.902 [2024-12-13 05:52:11.698961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.902 qpair failed and we were unable to recover it. 00:36:11.902 [2024-12-13 05:52:11.699079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.902 [2024-12-13 05:52:11.699110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.902 qpair failed and we were unable to recover it. 00:36:11.902 [2024-12-13 05:52:11.699282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.902 [2024-12-13 05:52:11.699314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.902 qpair failed and we were unable to recover it. 00:36:11.902 [2024-12-13 05:52:11.699432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.902 [2024-12-13 05:52:11.699480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.902 qpair failed and we were unable to recover it. 
00:36:11.902 [2024-12-13 05:52:11.699732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.902 [2024-12-13 05:52:11.699771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.902 qpair failed and we were unable to recover it. 00:36:11.902 [2024-12-13 05:52:11.699962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.902 [2024-12-13 05:52:11.699994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.902 qpair failed and we were unable to recover it. 00:36:11.902 [2024-12-13 05:52:11.700281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.902 [2024-12-13 05:52:11.700313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.902 qpair failed and we were unable to recover it. 00:36:11.902 [2024-12-13 05:52:11.700496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.902 [2024-12-13 05:52:11.700530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.902 qpair failed and we were unable to recover it. 00:36:11.902 [2024-12-13 05:52:11.700733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.902 [2024-12-13 05:52:11.700764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.902 qpair failed and we were unable to recover it. 00:36:11.902 [2024-12-13 05:52:11.700970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.902 [2024-12-13 05:52:11.701001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.902 qpair failed and we were unable to recover it. 00:36:11.902 [2024-12-13 05:52:11.701222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.902 [2024-12-13 05:52:11.701254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.902 qpair failed and we were unable to recover it. 00:36:11.902 [2024-12-13 05:52:11.701429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.902 [2024-12-13 05:52:11.701469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.902 qpair failed and we were unable to recover it. 00:36:11.902 [2024-12-13 05:52:11.701730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.902 [2024-12-13 05:52:11.701762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.902 qpair failed and we were unable to recover it. 00:36:11.902 [2024-12-13 05:52:11.701944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.902 [2024-12-13 05:52:11.701976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:11.902 qpair failed and we were unable to recover it. 
00:36:11.902 [2024-12-13 05:52:11.702145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.902 [2024-12-13 05:52:11.702176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420
00:36:11.902 qpair failed and we were unable to recover it.
[... the same three-line connect() failed / qpair failed sequence repeats for tqpair=0x15c76a0 from 05:52:11.702 through 05:52:11.708 ...]
00:36:11.903 [2024-12-13 05:52:11.708878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.903 [2024-12-13 05:52:11.708950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420
00:36:11.903 qpair failed and we were unable to recover it.
00:36:11.903 [2024-12-13 05:52:11.709220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.903 [2024-12-13 05:52:11.709290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420
00:36:11.903 qpair failed and we were unable to recover it.
[... the same sequence repeats, mostly for tqpair=0x7f8618000b90 with occasional retries on tqpair=0x7f8624000b90, from 05:52:11.709 through 05:52:11.748 ...]
00:36:11.908 [2024-12-13 05:52:11.748329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.908 [2024-12-13 05:52:11.748360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420
00:36:11.908 qpair failed and we were unable to recover it.
00:36:11.908 [2024-12-13 05:52:11.748628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.908 [2024-12-13 05:52:11.748661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.908 qpair failed and we were unable to recover it. 00:36:11.908 [2024-12-13 05:52:11.748794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.908 [2024-12-13 05:52:11.748826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.908 qpair failed and we were unable to recover it. 00:36:11.908 [2024-12-13 05:52:11.749000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.908 [2024-12-13 05:52:11.749031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.908 qpair failed and we were unable to recover it. 00:36:11.908 [2024-12-13 05:52:11.749303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.908 [2024-12-13 05:52:11.749335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.908 qpair failed and we were unable to recover it. 00:36:11.908 [2024-12-13 05:52:11.749582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.908 [2024-12-13 05:52:11.749615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.908 qpair failed and we were unable to recover it. 00:36:11.908 [2024-12-13 05:52:11.749857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.908 [2024-12-13 05:52:11.749889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.908 qpair failed and we were unable to recover it. 00:36:11.908 [2024-12-13 05:52:11.750100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.908 [2024-12-13 05:52:11.750133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.908 qpair failed and we were unable to recover it. 00:36:11.908 [2024-12-13 05:52:11.750374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.908 [2024-12-13 05:52:11.750406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.908 qpair failed and we were unable to recover it. 00:36:11.908 [2024-12-13 05:52:11.750604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.908 [2024-12-13 05:52:11.750637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.908 qpair failed and we were unable to recover it. 00:36:11.908 [2024-12-13 05:52:11.750762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.908 [2024-12-13 05:52:11.750794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.908 qpair failed and we were unable to recover it. 
00:36:11.908 [2024-12-13 05:52:11.751008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.908 [2024-12-13 05:52:11.751040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.908 qpair failed and we were unable to recover it. 00:36:11.908 [2024-12-13 05:52:11.751274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.908 [2024-12-13 05:52:11.751306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.908 qpair failed and we were unable to recover it. 00:36:11.908 [2024-12-13 05:52:11.751514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.908 [2024-12-13 05:52:11.751547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.908 qpair failed and we were unable to recover it. 00:36:11.908 [2024-12-13 05:52:11.751668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.908 [2024-12-13 05:52:11.751701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.908 qpair failed and we were unable to recover it. 00:36:11.908 [2024-12-13 05:52:11.751882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.908 [2024-12-13 05:52:11.751913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.908 qpair failed and we were unable to recover it. 00:36:11.908 [2024-12-13 05:52:11.752024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.908 [2024-12-13 05:52:11.752056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.908 qpair failed and we were unable to recover it. 00:36:11.908 [2024-12-13 05:52:11.752176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.908 [2024-12-13 05:52:11.752208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.908 qpair failed and we were unable to recover it. 00:36:11.908 [2024-12-13 05:52:11.752385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.908 [2024-12-13 05:52:11.752416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.908 qpair failed and we were unable to recover it. 00:36:11.908 [2024-12-13 05:52:11.752696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.908 [2024-12-13 05:52:11.752728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.908 qpair failed and we were unable to recover it. 00:36:11.908 [2024-12-13 05:52:11.753038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.908 [2024-12-13 05:52:11.753069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.908 qpair failed and we were unable to recover it. 
00:36:11.908 [2024-12-13 05:52:11.753273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.908 [2024-12-13 05:52:11.753304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.908 qpair failed and we were unable to recover it. 00:36:11.908 [2024-12-13 05:52:11.753500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.908 [2024-12-13 05:52:11.753538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.908 qpair failed and we were unable to recover it. 00:36:11.908 [2024-12-13 05:52:11.753742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.908 [2024-12-13 05:52:11.753773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.908 qpair failed and we were unable to recover it. 00:36:11.908 [2024-12-13 05:52:11.753884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.908 [2024-12-13 05:52:11.753915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.909 qpair failed and we were unable to recover it. 00:36:11.909 [2024-12-13 05:52:11.754103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.909 [2024-12-13 05:52:11.754135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.909 qpair failed and we were unable to recover it. 00:36:11.909 [2024-12-13 05:52:11.754304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.909 [2024-12-13 05:52:11.754335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.909 qpair failed and we were unable to recover it. 00:36:11.909 [2024-12-13 05:52:11.754509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.909 [2024-12-13 05:52:11.754542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.909 qpair failed and we were unable to recover it. 00:36:11.909 [2024-12-13 05:52:11.754752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.909 [2024-12-13 05:52:11.754784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.909 qpair failed and we were unable to recover it. 00:36:11.909 [2024-12-13 05:52:11.755027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.909 [2024-12-13 05:52:11.755058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.909 qpair failed and we were unable to recover it. 00:36:11.909 [2024-12-13 05:52:11.755244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.909 [2024-12-13 05:52:11.755275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.909 qpair failed and we were unable to recover it. 
00:36:11.909 [2024-12-13 05:52:11.755479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.909 [2024-12-13 05:52:11.755520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.909 qpair failed and we were unable to recover it. 00:36:11.909 [2024-12-13 05:52:11.755792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.909 [2024-12-13 05:52:11.755823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.909 qpair failed and we were unable to recover it. 00:36:11.909 [2024-12-13 05:52:11.755989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.909 [2024-12-13 05:52:11.756020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.909 qpair failed and we were unable to recover it. 00:36:11.909 [2024-12-13 05:52:11.756205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.909 [2024-12-13 05:52:11.756237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.909 qpair failed and we were unable to recover it. 00:36:11.909 [2024-12-13 05:52:11.756484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.909 [2024-12-13 05:52:11.756532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.909 qpair failed and we were unable to recover it. 00:36:11.909 [2024-12-13 05:52:11.756673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.909 [2024-12-13 05:52:11.756706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.909 qpair failed and we were unable to recover it. 00:36:11.909 [2024-12-13 05:52:11.756879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.909 [2024-12-13 05:52:11.756911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.909 qpair failed and we were unable to recover it. 00:36:11.909 [2024-12-13 05:52:11.757168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.909 [2024-12-13 05:52:11.757199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.909 qpair failed and we were unable to recover it. 00:36:11.909 [2024-12-13 05:52:11.757375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.909 [2024-12-13 05:52:11.757406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.909 qpair failed and we were unable to recover it. 00:36:11.909 [2024-12-13 05:52:11.757589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.909 [2024-12-13 05:52:11.757621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.909 qpair failed and we were unable to recover it. 
00:36:11.909 [2024-12-13 05:52:11.757869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.909 [2024-12-13 05:52:11.757900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.909 qpair failed and we were unable to recover it. 00:36:11.909 [2024-12-13 05:52:11.758143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.909 [2024-12-13 05:52:11.758174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.909 qpair failed and we were unable to recover it. 00:36:11.909 [2024-12-13 05:52:11.758471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.909 [2024-12-13 05:52:11.758504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.909 qpair failed and we were unable to recover it. 00:36:11.909 [2024-12-13 05:52:11.758770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.909 [2024-12-13 05:52:11.758802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.909 qpair failed and we were unable to recover it. 00:36:11.909 [2024-12-13 05:52:11.759036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.909 [2024-12-13 05:52:11.759068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.909 qpair failed and we were unable to recover it. 00:36:11.909 [2024-12-13 05:52:11.759317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.909 [2024-12-13 05:52:11.759349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.909 qpair failed and we were unable to recover it. 00:36:11.909 [2024-12-13 05:52:11.759611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.909 [2024-12-13 05:52:11.759644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.909 qpair failed and we were unable to recover it. 00:36:11.909 [2024-12-13 05:52:11.759897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.909 [2024-12-13 05:52:11.759929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.909 qpair failed and we were unable to recover it. 00:36:11.909 [2024-12-13 05:52:11.760061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.909 [2024-12-13 05:52:11.760092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.909 qpair failed and we were unable to recover it. 00:36:11.909 [2024-12-13 05:52:11.760263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.909 [2024-12-13 05:52:11.760296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.909 qpair failed and we were unable to recover it. 
00:36:11.909 [2024-12-13 05:52:11.760414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.909 [2024-12-13 05:52:11.760445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.909 qpair failed and we were unable to recover it. 00:36:11.909 [2024-12-13 05:52:11.760638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.909 [2024-12-13 05:52:11.760679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.909 qpair failed and we were unable to recover it. 00:36:11.909 [2024-12-13 05:52:11.760953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.909 [2024-12-13 05:52:11.760985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.909 qpair failed and we were unable to recover it. 00:36:11.909 [2024-12-13 05:52:11.761105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.909 [2024-12-13 05:52:11.761137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.909 qpair failed and we were unable to recover it. 00:36:11.909 [2024-12-13 05:52:11.761377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.909 [2024-12-13 05:52:11.761408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.909 qpair failed and we were unable to recover it. 00:36:11.909 [2024-12-13 05:52:11.761590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.909 [2024-12-13 05:52:11.761622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.909 qpair failed and we were unable to recover it. 00:36:11.909 [2024-12-13 05:52:11.761887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.909 [2024-12-13 05:52:11.761919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.909 qpair failed and we were unable to recover it. 00:36:11.909 [2024-12-13 05:52:11.762122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.909 [2024-12-13 05:52:11.762154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.909 qpair failed and we were unable to recover it. 00:36:11.909 [2024-12-13 05:52:11.762342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.910 [2024-12-13 05:52:11.762374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.910 qpair failed and we were unable to recover it. 00:36:11.910 [2024-12-13 05:52:11.762571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.910 [2024-12-13 05:52:11.762604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.910 qpair failed and we were unable to recover it. 
00:36:11.910 [2024-12-13 05:52:11.762726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.910 [2024-12-13 05:52:11.762758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.910 qpair failed and we were unable to recover it. 00:36:11.910 [2024-12-13 05:52:11.762941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.910 [2024-12-13 05:52:11.762978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.910 qpair failed and we were unable to recover it. 00:36:11.910 [2024-12-13 05:52:11.763234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.910 [2024-12-13 05:52:11.763265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.910 qpair failed and we were unable to recover it. 00:36:11.910 [2024-12-13 05:52:11.763447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.910 [2024-12-13 05:52:11.763506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.910 qpair failed and we were unable to recover it. 00:36:11.910 [2024-12-13 05:52:11.763609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.910 [2024-12-13 05:52:11.763641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.910 qpair failed and we were unable to recover it. 00:36:11.910 [2024-12-13 05:52:11.763892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.910 [2024-12-13 05:52:11.763924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.910 qpair failed and we were unable to recover it. 00:36:11.910 [2024-12-13 05:52:11.764119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.910 [2024-12-13 05:52:11.764150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.910 qpair failed and we were unable to recover it. 00:36:11.910 [2024-12-13 05:52:11.764318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.910 [2024-12-13 05:52:11.764349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.910 qpair failed and we were unable to recover it. 00:36:11.910 [2024-12-13 05:52:11.764586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.910 [2024-12-13 05:52:11.764618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.910 qpair failed and we were unable to recover it. 00:36:11.910 [2024-12-13 05:52:11.764799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.910 [2024-12-13 05:52:11.764831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.910 qpair failed and we were unable to recover it. 
00:36:11.910 [2024-12-13 05:52:11.765009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.910 [2024-12-13 05:52:11.765041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.910 qpair failed and we were unable to recover it. 00:36:11.910 [2024-12-13 05:52:11.765243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.910 [2024-12-13 05:52:11.765275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.910 qpair failed and we were unable to recover it. 00:36:11.910 [2024-12-13 05:52:11.765390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.910 [2024-12-13 05:52:11.765421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.910 qpair failed and we were unable to recover it. 00:36:11.910 [2024-12-13 05:52:11.765623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.910 [2024-12-13 05:52:11.765654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.910 qpair failed and we were unable to recover it. 00:36:11.910 [2024-12-13 05:52:11.765774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.910 [2024-12-13 05:52:11.765806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.910 qpair failed and we were unable to recover it. 00:36:11.910 [2024-12-13 05:52:11.765944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.910 [2024-12-13 05:52:11.765976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.910 qpair failed and we were unable to recover it. 00:36:11.910 [2024-12-13 05:52:11.766238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.910 [2024-12-13 05:52:11.766270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.910 qpair failed and we were unable to recover it. 00:36:11.910 [2024-12-13 05:52:11.766467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.910 [2024-12-13 05:52:11.766500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.910 qpair failed and we were unable to recover it. 00:36:11.910 [2024-12-13 05:52:11.766616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.910 [2024-12-13 05:52:11.766648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.910 qpair failed and we were unable to recover it. 00:36:11.910 [2024-12-13 05:52:11.766920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.910 [2024-12-13 05:52:11.766952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.910 qpair failed and we were unable to recover it. 
00:36:11.910 [2024-12-13 05:52:11.767072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.910 [2024-12-13 05:52:11.767105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.910 qpair failed and we were unable to recover it. 00:36:11.910 [2024-12-13 05:52:11.767295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.910 [2024-12-13 05:52:11.767327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.910 qpair failed and we were unable to recover it. 00:36:11.910 [2024-12-13 05:52:11.767446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.910 [2024-12-13 05:52:11.767489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.910 qpair failed and we were unable to recover it. 00:36:11.910 [2024-12-13 05:52:11.767693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.910 [2024-12-13 05:52:11.767725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.910 qpair failed and we were unable to recover it. 00:36:11.910 [2024-12-13 05:52:11.767981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.910 [2024-12-13 05:52:11.768014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.910 qpair failed and we were unable to recover it. 00:36:11.910 [2024-12-13 05:52:11.768198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.910 [2024-12-13 05:52:11.768229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.910 qpair failed and we were unable to recover it. 00:36:11.910 [2024-12-13 05:52:11.768367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.910 [2024-12-13 05:52:11.768398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.910 qpair failed and we were unable to recover it. 00:36:11.910 [2024-12-13 05:52:11.768534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.910 [2024-12-13 05:52:11.768567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.910 qpair failed and we were unable to recover it. 00:36:11.910 [2024-12-13 05:52:11.768702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.910 [2024-12-13 05:52:11.768734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.910 qpair failed and we were unable to recover it. 00:36:11.910 [2024-12-13 05:52:11.768968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.910 [2024-12-13 05:52:11.769000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.910 qpair failed and we were unable to recover it. 
00:36:11.910 [2024-12-13 05:52:11.769115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.910 [2024-12-13 05:52:11.769147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.910 qpair failed and we were unable to recover it. 00:36:11.910 [2024-12-13 05:52:11.769347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.910 [2024-12-13 05:52:11.769379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.910 qpair failed and we were unable to recover it. 00:36:11.910 [2024-12-13 05:52:11.769565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.910 [2024-12-13 05:52:11.769598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.910 qpair failed and we were unable to recover it. 00:36:11.910 [2024-12-13 05:52:11.769719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.910 [2024-12-13 05:52:11.769751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.910 qpair failed and we were unable to recover it. 00:36:11.910 [2024-12-13 05:52:11.769869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.910 [2024-12-13 05:52:11.769901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.910 qpair failed and we were unable to recover it. 00:36:11.910 [2024-12-13 05:52:11.770145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.910 [2024-12-13 05:52:11.770177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.910 qpair failed and we were unable to recover it. 00:36:11.910 [2024-12-13 05:52:11.770363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.910 [2024-12-13 05:52:11.770395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.910 qpair failed and we were unable to recover it. 00:36:11.910 [2024-12-13 05:52:11.770519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.911 [2024-12-13 05:52:11.770553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.911 qpair failed and we were unable to recover it. 00:36:11.911 [2024-12-13 05:52:11.770734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.911 [2024-12-13 05:52:11.770766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.911 qpair failed and we were unable to recover it. 00:36:11.911 [2024-12-13 05:52:11.771006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.911 [2024-12-13 05:52:11.771037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.911 qpair failed and we were unable to recover it. 
00:36:11.911 [2024-12-13 05:52:11.771300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.911 [2024-12-13 05:52:11.771332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.911 qpair failed and we were unable to recover it. 00:36:11.911 [2024-12-13 05:52:11.771472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.911 [2024-12-13 05:52:11.771511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.911 qpair failed and we were unable to recover it. 00:36:11.911 [2024-12-13 05:52:11.771768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.911 [2024-12-13 05:52:11.771800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.911 qpair failed and we were unable to recover it. 00:36:11.911 [2024-12-13 05:52:11.771990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.911 [2024-12-13 05:52:11.772021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.911 qpair failed and we were unable to recover it. 00:36:11.911 [2024-12-13 05:52:11.772194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.911 [2024-12-13 05:52:11.772226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.911 qpair failed and we were unable to recover it. 00:36:11.911 [2024-12-13 05:52:11.772332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.911 [2024-12-13 05:52:11.772364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.911 qpair failed and we were unable to recover it. 00:36:11.911 [2024-12-13 05:52:11.772572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.911 [2024-12-13 05:52:11.772605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.911 qpair failed and we were unable to recover it. 00:36:11.911 [2024-12-13 05:52:11.772849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.911 [2024-12-13 05:52:11.772881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.911 qpair failed and we were unable to recover it. 00:36:11.911 [2024-12-13 05:52:11.772993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.911 [2024-12-13 05:52:11.773024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.911 qpair failed and we were unable to recover it. 00:36:11.911 [2024-12-13 05:52:11.773192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.911 [2024-12-13 05:52:11.773224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.911 qpair failed and we were unable to recover it. 
00:36:11.911 [2024-12-13 05:52:11.773400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.911 [2024-12-13 05:52:11.773432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.911 qpair failed and we were unable to recover it. 00:36:11.911 [2024-12-13 05:52:11.773615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.911 [2024-12-13 05:52:11.773647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.911 qpair failed and we were unable to recover it. 00:36:11.911 [2024-12-13 05:52:11.773842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.911 [2024-12-13 05:52:11.773874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.911 qpair failed and we were unable to recover it. 00:36:11.911 [2024-12-13 05:52:11.774089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.911 [2024-12-13 05:52:11.774120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.911 qpair failed and we were unable to recover it. 00:36:11.911 [2024-12-13 05:52:11.774298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.911 [2024-12-13 05:52:11.774331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.911 qpair failed and we were unable to recover it. 00:36:11.911 [2024-12-13 05:52:11.774469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.911 [2024-12-13 05:52:11.774502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.911 qpair failed and we were unable to recover it. 00:36:11.911 [2024-12-13 05:52:11.774711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.911 [2024-12-13 05:52:11.774744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.911 qpair failed and we were unable to recover it. 00:36:11.911 [2024-12-13 05:52:11.774944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.911 [2024-12-13 05:52:11.774976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.911 qpair failed and we were unable to recover it. 00:36:11.911 [2024-12-13 05:52:11.775147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.911 [2024-12-13 05:52:11.775178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.911 qpair failed and we were unable to recover it. 00:36:11.911 [2024-12-13 05:52:11.775312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.911 [2024-12-13 05:52:11.775344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.911 qpair failed and we were unable to recover it. 
00:36:11.911 [2024-12-13 05:52:11.775541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.911 [2024-12-13 05:52:11.775574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.911 qpair failed and we were unable to recover it. 00:36:11.911 [2024-12-13 05:52:11.775760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.911 [2024-12-13 05:52:11.775791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.911 qpair failed and we were unable to recover it. 00:36:11.911 [2024-12-13 05:52:11.776057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.911 [2024-12-13 05:52:11.776089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.911 qpair failed and we were unable to recover it. 00:36:11.911 [2024-12-13 05:52:11.776293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.911 [2024-12-13 05:52:11.776325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.911 qpair failed and we were unable to recover it. 00:36:11.911 [2024-12-13 05:52:11.776443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.911 [2024-12-13 05:52:11.776482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.911 qpair failed and we were unable to recover it. 00:36:11.911 [2024-12-13 05:52:11.776618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.911 [2024-12-13 05:52:11.776649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.911 qpair failed and we were unable to recover it. 00:36:11.911 [2024-12-13 05:52:11.776831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.911 [2024-12-13 05:52:11.776862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.911 qpair failed and we were unable to recover it. 00:36:11.911 [2024-12-13 05:52:11.776983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.911 [2024-12-13 05:52:11.777014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.911 qpair failed and we were unable to recover it. 00:36:11.911 [2024-12-13 05:52:11.777122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.911 [2024-12-13 05:52:11.777154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.911 qpair failed and we were unable to recover it. 00:36:11.911 [2024-12-13 05:52:11.777345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.911 [2024-12-13 05:52:11.777376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:11.911 qpair failed and we were unable to recover it. 
00:36:11.911 [2024-12-13 05:52:11.777551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.911 [2024-12-13 05:52:11.777584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420
00:36:11.911 qpair failed and we were unable to recover it.
[... the same three-line error sequence repeats roughly 200 more times between 2024-12-13 05:52:11.777690 and 05:52:11.825215 (console time 00:36:11.911-00:36:11.916): every connect() attempt to addr=10.0.0.2, port=4420 fails with errno = 111 (ECONNREFUSED) and the qpair cannot be recovered. The failing handle is tqpair=0x7f8618000b90 through 05:52:11.813546, then tqpair=0x7f861c000b90 from 05:52:11.813853 onward ...]
00:36:11.916 [2024-12-13 05:52:11.825388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.916 [2024-12-13 05:52:11.825420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.916 qpair failed and we were unable to recover it. 00:36:11.916 [2024-12-13 05:52:11.825694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.916 [2024-12-13 05:52:11.825732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.916 qpair failed and we were unable to recover it. 00:36:11.916 [2024-12-13 05:52:11.825922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.916 [2024-12-13 05:52:11.825954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.916 qpair failed and we were unable to recover it. 00:36:11.916 [2024-12-13 05:52:11.826072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.916 [2024-12-13 05:52:11.826103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.916 qpair failed and we were unable to recover it. 00:36:11.916 [2024-12-13 05:52:11.826218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.916 [2024-12-13 05:52:11.826250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.916 qpair failed and we were unable to recover it. 00:36:11.916 [2024-12-13 05:52:11.826375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.916 [2024-12-13 05:52:11.826408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.916 qpair failed and we were unable to recover it. 00:36:11.916 [2024-12-13 05:52:11.826540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.916 [2024-12-13 05:52:11.826573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.916 qpair failed and we were unable to recover it. 00:36:11.916 [2024-12-13 05:52:11.826781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.916 [2024-12-13 05:52:11.826814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.916 qpair failed and we were unable to recover it. 00:36:11.917 [2024-12-13 05:52:11.827079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.917 [2024-12-13 05:52:11.827111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.917 qpair failed and we were unable to recover it. 00:36:11.917 [2024-12-13 05:52:11.827288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.917 [2024-12-13 05:52:11.827320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.917 qpair failed and we were unable to recover it. 
00:36:11.917 [2024-12-13 05:52:11.827588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.917 [2024-12-13 05:52:11.827622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.917 qpair failed and we were unable to recover it. 00:36:11.917 [2024-12-13 05:52:11.827804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.917 [2024-12-13 05:52:11.827836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.917 qpair failed and we were unable to recover it. 00:36:11.917 [2024-12-13 05:52:11.828017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.917 [2024-12-13 05:52:11.828050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.917 qpair failed and we were unable to recover it. 00:36:11.917 [2024-12-13 05:52:11.828217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.917 [2024-12-13 05:52:11.828249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.917 qpair failed and we were unable to recover it. 00:36:11.917 [2024-12-13 05:52:11.828370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.917 [2024-12-13 05:52:11.828403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.917 qpair failed and we were unable to recover it. 00:36:11.917 [2024-12-13 05:52:11.828531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.917 [2024-12-13 05:52:11.828565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.917 qpair failed and we were unable to recover it. 00:36:11.917 [2024-12-13 05:52:11.828749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.917 [2024-12-13 05:52:11.828781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.917 qpair failed and we were unable to recover it. 00:36:11.917 [2024-12-13 05:52:11.829017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.917 [2024-12-13 05:52:11.829050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.917 qpair failed and we were unable to recover it. 00:36:11.917 [2024-12-13 05:52:11.829177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.917 [2024-12-13 05:52:11.829209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.917 qpair failed and we were unable to recover it. 00:36:11.917 [2024-12-13 05:52:11.829461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.917 [2024-12-13 05:52:11.829493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.917 qpair failed and we were unable to recover it. 
00:36:11.917 [2024-12-13 05:52:11.829697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.917 [2024-12-13 05:52:11.829730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.917 qpair failed and we were unable to recover it. 00:36:11.917 [2024-12-13 05:52:11.829963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.917 [2024-12-13 05:52:11.829995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.917 qpair failed and we were unable to recover it. 00:36:11.917 [2024-12-13 05:52:11.830180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.917 [2024-12-13 05:52:11.830212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.917 qpair failed and we were unable to recover it. 00:36:11.917 [2024-12-13 05:52:11.830435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.917 [2024-12-13 05:52:11.830476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.917 qpair failed and we were unable to recover it. 00:36:11.917 [2024-12-13 05:52:11.830734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.917 [2024-12-13 05:52:11.830766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.917 qpair failed and we were unable to recover it. 00:36:11.917 [2024-12-13 05:52:11.830965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.917 [2024-12-13 05:52:11.830997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.917 qpair failed and we were unable to recover it. 00:36:11.917 [2024-12-13 05:52:11.831204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.917 [2024-12-13 05:52:11.831237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.917 qpair failed and we were unable to recover it. 00:36:11.917 [2024-12-13 05:52:11.831446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.917 [2024-12-13 05:52:11.831489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.917 qpair failed and we were unable to recover it. 00:36:11.917 [2024-12-13 05:52:11.831632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.917 [2024-12-13 05:52:11.831664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.917 qpair failed and we were unable to recover it. 00:36:11.917 [2024-12-13 05:52:11.831782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.917 [2024-12-13 05:52:11.831815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.917 qpair failed and we were unable to recover it. 
00:36:11.917 [2024-12-13 05:52:11.832009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.917 [2024-12-13 05:52:11.832042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.917 qpair failed and we were unable to recover it. 00:36:11.917 [2024-12-13 05:52:11.832216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.917 [2024-12-13 05:52:11.832248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.917 qpair failed and we were unable to recover it. 00:36:11.917 [2024-12-13 05:52:11.832377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.917 [2024-12-13 05:52:11.832408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.917 qpair failed and we were unable to recover it. 00:36:11.917 [2024-12-13 05:52:11.832681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.917 [2024-12-13 05:52:11.832714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.917 qpair failed and we were unable to recover it. 00:36:11.917 [2024-12-13 05:52:11.832826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.917 [2024-12-13 05:52:11.832858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.917 qpair failed and we were unable to recover it. 00:36:11.917 [2024-12-13 05:52:11.832956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.917 [2024-12-13 05:52:11.832988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.917 qpair failed and we were unable to recover it. 00:36:11.917 [2024-12-13 05:52:11.833190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.917 [2024-12-13 05:52:11.833222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.917 qpair failed and we were unable to recover it. 00:36:11.917 [2024-12-13 05:52:11.833342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.917 [2024-12-13 05:52:11.833375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.917 qpair failed and we were unable to recover it. 00:36:11.917 [2024-12-13 05:52:11.833626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.917 [2024-12-13 05:52:11.833660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.917 qpair failed and we were unable to recover it. 00:36:11.917 [2024-12-13 05:52:11.833847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.917 [2024-12-13 05:52:11.833879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.917 qpair failed and we were unable to recover it. 
00:36:11.917 [2024-12-13 05:52:11.834091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.917 [2024-12-13 05:52:11.834123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.917 qpair failed and we were unable to recover it. 00:36:11.917 [2024-12-13 05:52:11.834296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.917 [2024-12-13 05:52:11.834334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.917 qpair failed and we were unable to recover it. 00:36:11.917 [2024-12-13 05:52:11.834595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.917 [2024-12-13 05:52:11.834628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.917 qpair failed and we were unable to recover it. 00:36:11.917 [2024-12-13 05:52:11.834744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.917 [2024-12-13 05:52:11.834777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.917 qpair failed and we were unable to recover it. 00:36:11.917 [2024-12-13 05:52:11.834994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.917 [2024-12-13 05:52:11.835026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.917 qpair failed and we were unable to recover it. 00:36:11.917 [2024-12-13 05:52:11.835209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.917 [2024-12-13 05:52:11.835241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.917 qpair failed and we were unable to recover it. 00:36:11.917 [2024-12-13 05:52:11.835356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.917 [2024-12-13 05:52:11.835389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.917 qpair failed and we were unable to recover it. 00:36:11.917 [2024-12-13 05:52:11.835657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.917 [2024-12-13 05:52:11.835691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.917 qpair failed and we were unable to recover it. 00:36:11.917 [2024-12-13 05:52:11.835799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.917 [2024-12-13 05:52:11.835831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.917 qpair failed and we were unable to recover it. 00:36:11.917 [2024-12-13 05:52:11.836041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.917 [2024-12-13 05:52:11.836073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.917 qpair failed and we were unable to recover it. 
00:36:11.917 [2024-12-13 05:52:11.836245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.917 [2024-12-13 05:52:11.836277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.917 qpair failed and we were unable to recover it. 00:36:11.917 [2024-12-13 05:52:11.836488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.918 [2024-12-13 05:52:11.836522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.918 qpair failed and we were unable to recover it. 00:36:11.918 [2024-12-13 05:52:11.836790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.918 [2024-12-13 05:52:11.836822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.918 qpair failed and we were unable to recover it. 00:36:11.918 [2024-12-13 05:52:11.836951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.918 [2024-12-13 05:52:11.836982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.918 qpair failed and we were unable to recover it. 00:36:11.918 [2024-12-13 05:52:11.837175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.918 [2024-12-13 05:52:11.837208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.918 qpair failed and we were unable to recover it. 00:36:11.918 [2024-12-13 05:52:11.837408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.918 [2024-12-13 05:52:11.837441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.918 qpair failed and we were unable to recover it. 00:36:11.918 [2024-12-13 05:52:11.837728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.918 [2024-12-13 05:52:11.837761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.918 qpair failed and we were unable to recover it. 00:36:11.918 [2024-12-13 05:52:11.837947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.918 [2024-12-13 05:52:11.837979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.918 qpair failed and we were unable to recover it. 00:36:11.918 [2024-12-13 05:52:11.838190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.918 [2024-12-13 05:52:11.838221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.918 qpair failed and we were unable to recover it. 00:36:11.918 [2024-12-13 05:52:11.838413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.918 [2024-12-13 05:52:11.838445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.918 qpair failed and we were unable to recover it. 
00:36:11.918 [2024-12-13 05:52:11.838636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.918 [2024-12-13 05:52:11.838669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.918 qpair failed and we were unable to recover it. 00:36:11.918 [2024-12-13 05:52:11.838859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.918 [2024-12-13 05:52:11.838892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.918 qpair failed and we were unable to recover it. 00:36:11.918 [2024-12-13 05:52:11.839061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.918 [2024-12-13 05:52:11.839093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.918 qpair failed and we were unable to recover it. 00:36:11.918 [2024-12-13 05:52:11.839220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.918 [2024-12-13 05:52:11.839252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.918 qpair failed and we were unable to recover it. 00:36:11.918 [2024-12-13 05:52:11.839438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.918 [2024-12-13 05:52:11.839499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.918 qpair failed and we were unable to recover it. 00:36:11.918 [2024-12-13 05:52:11.839762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.918 [2024-12-13 05:52:11.839794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.918 qpair failed and we were unable to recover it. 00:36:11.918 [2024-12-13 05:52:11.839975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.918 [2024-12-13 05:52:11.840007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.918 qpair failed and we were unable to recover it. 00:36:11.918 [2024-12-13 05:52:11.840130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.918 [2024-12-13 05:52:11.840162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.918 qpair failed and we were unable to recover it. 00:36:11.918 [2024-12-13 05:52:11.840358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.918 [2024-12-13 05:52:11.840390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.918 qpair failed and we were unable to recover it. 00:36:11.918 [2024-12-13 05:52:11.840521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.918 [2024-12-13 05:52:11.840555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.918 qpair failed and we were unable to recover it. 
00:36:11.918 [2024-12-13 05:52:11.840729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.918 [2024-12-13 05:52:11.840761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.918 qpair failed and we were unable to recover it. 00:36:11.918 [2024-12-13 05:52:11.840965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.918 [2024-12-13 05:52:11.840997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.918 qpair failed and we were unable to recover it. 00:36:11.918 [2024-12-13 05:52:11.841183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.918 [2024-12-13 05:52:11.841214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.918 qpair failed and we were unable to recover it. 00:36:11.918 [2024-12-13 05:52:11.841403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.918 [2024-12-13 05:52:11.841435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.918 qpair failed and we were unable to recover it. 00:36:11.918 [2024-12-13 05:52:11.841569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.918 [2024-12-13 05:52:11.841602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.918 qpair failed and we were unable to recover it. 00:36:11.918 [2024-12-13 05:52:11.841790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.918 [2024-12-13 05:52:11.841822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.918 qpair failed and we were unable to recover it. 00:36:11.918 [2024-12-13 05:52:11.842006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.918 [2024-12-13 05:52:11.842038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.918 qpair failed and we were unable to recover it. 00:36:11.918 [2024-12-13 05:52:11.842276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.918 [2024-12-13 05:52:11.842309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.918 qpair failed and we were unable to recover it. 00:36:11.918 [2024-12-13 05:52:11.842434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.918 [2024-12-13 05:52:11.842476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.918 qpair failed and we were unable to recover it. 00:36:11.918 [2024-12-13 05:52:11.842652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.918 [2024-12-13 05:52:11.842684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.918 qpair failed and we were unable to recover it. 
00:36:11.918 [2024-12-13 05:52:11.842871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.918 [2024-12-13 05:52:11.842904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.918 qpair failed and we were unable to recover it. 00:36:11.918 [2024-12-13 05:52:11.843135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.918 [2024-12-13 05:52:11.843178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.918 qpair failed and we were unable to recover it. 00:36:11.918 [2024-12-13 05:52:11.843355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.918 [2024-12-13 05:52:11.843387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.918 qpair failed and we were unable to recover it. 00:36:11.918 [2024-12-13 05:52:11.843514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.918 [2024-12-13 05:52:11.843547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.918 qpair failed and we were unable to recover it. 00:36:11.918 [2024-12-13 05:52:11.843808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.918 [2024-12-13 05:52:11.843840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.918 qpair failed and we were unable to recover it. 00:36:11.918 [2024-12-13 05:52:11.844021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.918 [2024-12-13 05:52:11.844053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.918 qpair failed and we were unable to recover it. 00:36:11.918 [2024-12-13 05:52:11.844184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.918 [2024-12-13 05:52:11.844216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.918 qpair failed and we were unable to recover it. 00:36:11.918 [2024-12-13 05:52:11.844384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.918 [2024-12-13 05:52:11.844416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.918 qpair failed and we were unable to recover it. 00:36:11.918 [2024-12-13 05:52:11.844633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.918 [2024-12-13 05:52:11.844665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.918 qpair failed and we were unable to recover it. 00:36:11.918 [2024-12-13 05:52:11.844792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.918 [2024-12-13 05:52:11.844824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.918 qpair failed and we were unable to recover it. 
00:36:11.918 [2024-12-13 05:52:11.845065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.918 [2024-12-13 05:52:11.845097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.918 qpair failed and we were unable to recover it. 00:36:11.918 [2024-12-13 05:52:11.845275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.918 [2024-12-13 05:52:11.845306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.918 qpair failed and we were unable to recover it. 00:36:11.918 [2024-12-13 05:52:11.845432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.918 [2024-12-13 05:52:11.845473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.918 qpair failed and we were unable to recover it. 00:36:11.918 [2024-12-13 05:52:11.845649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.918 [2024-12-13 05:52:11.845681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.918 qpair failed and we were unable to recover it. 00:36:11.918 [2024-12-13 05:52:11.845868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.918 [2024-12-13 05:52:11.845900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.918 qpair failed and we were unable to recover it. 00:36:11.918 [2024-12-13 05:52:11.846074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.918 [2024-12-13 05:52:11.846106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.918 qpair failed and we were unable to recover it. 00:36:11.918 [2024-12-13 05:52:11.846221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.918 [2024-12-13 05:52:11.846253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.918 qpair failed and we were unable to recover it. 00:36:11.918 [2024-12-13 05:52:11.846370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.919 [2024-12-13 05:52:11.846401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.919 qpair failed and we were unable to recover it. 00:36:11.919 [2024-12-13 05:52:11.846531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.919 [2024-12-13 05:52:11.846564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.919 qpair failed and we were unable to recover it. 00:36:11.919 [2024-12-13 05:52:11.846759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.919 [2024-12-13 05:52:11.846792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.919 qpair failed and we were unable to recover it. 
00:36:11.919 [2024-12-13 05:52:11.847028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.919 [2024-12-13 05:52:11.847060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.919 qpair failed and we were unable to recover it. 00:36:11.919 [2024-12-13 05:52:11.847240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.919 [2024-12-13 05:52:11.847272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.919 qpair failed and we were unable to recover it. 00:36:11.919 [2024-12-13 05:52:11.847494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.919 [2024-12-13 05:52:11.847527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.919 qpair failed and we were unable to recover it. 00:36:11.919 [2024-12-13 05:52:11.847648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.919 [2024-12-13 05:52:11.847680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.919 qpair failed and we were unable to recover it. 00:36:11.919 [2024-12-13 05:52:11.847851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.919 [2024-12-13 05:52:11.847883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.919 qpair failed and we were unable to recover it. 00:36:11.919 [2024-12-13 05:52:11.848017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.919 [2024-12-13 05:52:11.848049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.919 qpair failed and we were unable to recover it. 00:36:11.919 [2024-12-13 05:52:11.848287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.919 [2024-12-13 05:52:11.848319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.919 qpair failed and we were unable to recover it. 00:36:11.919 [2024-12-13 05:52:11.848503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.919 [2024-12-13 05:52:11.848537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.919 qpair failed and we were unable to recover it. 00:36:11.919 [2024-12-13 05:52:11.848726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.919 [2024-12-13 05:52:11.848758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.919 qpair failed and we were unable to recover it. 00:36:11.919 [2024-12-13 05:52:11.848955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.919 [2024-12-13 05:52:11.848987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.919 qpair failed and we were unable to recover it. 
00:36:11.919 [2024-12-13 05:52:11.849250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.919 [2024-12-13 05:52:11.849282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.919 qpair failed and we were unable to recover it. 00:36:11.919 [2024-12-13 05:52:11.849497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.919 [2024-12-13 05:52:11.849530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.919 qpair failed and we were unable to recover it. 00:36:11.919 [2024-12-13 05:52:11.849701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.919 [2024-12-13 05:52:11.849733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.919 qpair failed and we were unable to recover it. 00:36:11.919 [2024-12-13 05:52:11.849914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.919 [2024-12-13 05:52:11.849946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.919 qpair failed and we were unable to recover it. 00:36:11.919 [2024-12-13 05:52:11.850194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.919 [2024-12-13 05:52:11.850226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.919 qpair failed and we were unable to recover it. 00:36:11.919 [2024-12-13 05:52:11.850349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.919 [2024-12-13 05:52:11.850381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.919 qpair failed and we were unable to recover it. 00:36:11.919 [2024-12-13 05:52:11.850558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.919 [2024-12-13 05:52:11.850591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.919 qpair failed and we were unable to recover it. 00:36:11.919 [2024-12-13 05:52:11.850834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.919 [2024-12-13 05:52:11.850867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.919 qpair failed and we were unable to recover it. 00:36:11.919 [2024-12-13 05:52:11.851078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.919 [2024-12-13 05:52:11.851110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.919 qpair failed and we were unable to recover it. 00:36:11.919 [2024-12-13 05:52:11.851323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.919 [2024-12-13 05:52:11.851355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.919 qpair failed and we were unable to recover it. 
00:36:11.919 [2024-12-13 05:52:11.851591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.919 [2024-12-13 05:52:11.851625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.919 qpair failed and we were unable to recover it. 00:36:11.919 [2024-12-13 05:52:11.851885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.919 [2024-12-13 05:52:11.851924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.919 qpair failed and we were unable to recover it. 00:36:11.919 [2024-12-13 05:52:11.852104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.919 [2024-12-13 05:52:11.852136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.919 qpair failed and we were unable to recover it. 00:36:11.919 [2024-12-13 05:52:11.852320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.919 [2024-12-13 05:52:11.852353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.919 qpair failed and we were unable to recover it. 00:36:11.919 [2024-12-13 05:52:11.852555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.919 [2024-12-13 05:52:11.852589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.919 qpair failed and we were unable to recover it. 00:36:11.919 [2024-12-13 05:52:11.852720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.919 [2024-12-13 05:52:11.852752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.919 qpair failed and we were unable to recover it. 00:36:11.919 [2024-12-13 05:52:11.852868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.919 [2024-12-13 05:52:11.852900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.919 qpair failed and we were unable to recover it. 00:36:11.919 [2024-12-13 05:52:11.853079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.919 [2024-12-13 05:52:11.853111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.919 qpair failed and we were unable to recover it. 00:36:11.919 [2024-12-13 05:52:11.853373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.919 [2024-12-13 05:52:11.853405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.919 qpair failed and we were unable to recover it. 00:36:11.919 [2024-12-13 05:52:11.853652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.919 [2024-12-13 05:52:11.853692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.919 qpair failed and we were unable to recover it. 
00:36:11.919 [2024-12-13 05:52:11.853880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.919 [2024-12-13 05:52:11.853911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.919 qpair failed and we were unable to recover it. 00:36:11.919 [2024-12-13 05:52:11.854149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.919 [2024-12-13 05:52:11.854181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.919 qpair failed and we were unable to recover it. 00:36:11.919 [2024-12-13 05:52:11.854296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.919 [2024-12-13 05:52:11.854327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.919 qpair failed and we were unable to recover it. 00:36:11.919 [2024-12-13 05:52:11.854591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.919 [2024-12-13 05:52:11.854623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.919 qpair failed and we were unable to recover it. 00:36:11.919 [2024-12-13 05:52:11.854729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.919 [2024-12-13 05:52:11.854761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.919 qpair failed and we were unable to recover it. 00:36:11.919 [2024-12-13 05:52:11.855010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.919 [2024-12-13 05:52:11.855042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.919 qpair failed and we were unable to recover it. 00:36:11.919 [2024-12-13 05:52:11.855283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.919 [2024-12-13 05:52:11.855315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.919 qpair failed and we were unable to recover it. 00:36:11.919 [2024-12-13 05:52:11.855576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.919 [2024-12-13 05:52:11.855609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.919 qpair failed and we were unable to recover it. 00:36:11.919 [2024-12-13 05:52:11.855797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.919 [2024-12-13 05:52:11.855829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.919 qpair failed and we were unable to recover it. 00:36:11.919 [2024-12-13 05:52:11.856085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.919 [2024-12-13 05:52:11.856118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:11.919 qpair failed and we were unable to recover it. 
00:36:12.211 [2024-12-13 05:52:11.898822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.211 [2024-12-13 05:52:11.898854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.211 qpair failed and we were unable to recover it. 00:36:12.211 [2024-12-13 05:52:11.899040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.211 [2024-12-13 05:52:11.899072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.211 qpair failed and we were unable to recover it. 00:36:12.211 [2024-12-13 05:52:11.899253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.211 [2024-12-13 05:52:11.899285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.211 qpair failed and we were unable to recover it. 00:36:12.211 [2024-12-13 05:52:11.899417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.211 [2024-12-13 05:52:11.899457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.211 qpair failed and we were unable to recover it. 00:36:12.211 [2024-12-13 05:52:11.899576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.211 [2024-12-13 05:52:11.899609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.211 qpair failed and we were unable to recover it. 00:36:12.211 [2024-12-13 05:52:11.899792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.211 [2024-12-13 05:52:11.899824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.211 qpair failed and we were unable to recover it. 00:36:12.211 [2024-12-13 05:52:11.900081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.211 [2024-12-13 05:52:11.900114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.211 qpair failed and we were unable to recover it. 00:36:12.211 [2024-12-13 05:52:11.900383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.211 [2024-12-13 05:52:11.900416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.211 qpair failed and we were unable to recover it. 00:36:12.211 [2024-12-13 05:52:11.900537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.211 [2024-12-13 05:52:11.900569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.211 qpair failed and we were unable to recover it. 00:36:12.211 [2024-12-13 05:52:11.900809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.211 [2024-12-13 05:52:11.900841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.211 qpair failed and we were unable to recover it. 
00:36:12.211 [2024-12-13 05:52:11.901042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.211 [2024-12-13 05:52:11.901074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.211 qpair failed and we were unable to recover it. 00:36:12.211 [2024-12-13 05:52:11.901323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.211 [2024-12-13 05:52:11.901355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.211 qpair failed and we were unable to recover it. 00:36:12.211 [2024-12-13 05:52:11.901496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.211 [2024-12-13 05:52:11.901530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.211 qpair failed and we were unable to recover it. 00:36:12.211 [2024-12-13 05:52:11.901780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.211 [2024-12-13 05:52:11.901812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.211 qpair failed and we were unable to recover it. 00:36:12.211 [2024-12-13 05:52:11.901998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.211 [2024-12-13 05:52:11.902030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.211 qpair failed and we were unable to recover it. 00:36:12.211 [2024-12-13 05:52:11.902218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.211 [2024-12-13 05:52:11.902249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.211 qpair failed and we were unable to recover it. 00:36:12.211 [2024-12-13 05:52:11.902438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.211 [2024-12-13 05:52:11.902479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.211 qpair failed and we were unable to recover it. 00:36:12.211 [2024-12-13 05:52:11.902660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.211 [2024-12-13 05:52:11.902692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.211 qpair failed and we were unable to recover it. 00:36:12.211 [2024-12-13 05:52:11.902933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.211 [2024-12-13 05:52:11.902970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.211 qpair failed and we were unable to recover it. 00:36:12.211 [2024-12-13 05:52:11.903245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.211 [2024-12-13 05:52:11.903277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.211 qpair failed and we were unable to recover it. 
00:36:12.211 [2024-12-13 05:52:11.903406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.211 [2024-12-13 05:52:11.903438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.211 qpair failed and we were unable to recover it. 00:36:12.211 [2024-12-13 05:52:11.903648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.211 [2024-12-13 05:52:11.903681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.211 qpair failed and we were unable to recover it. 00:36:12.211 [2024-12-13 05:52:11.903922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.211 [2024-12-13 05:52:11.903953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.211 qpair failed and we were unable to recover it. 00:36:12.211 [2024-12-13 05:52:11.904209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.211 [2024-12-13 05:52:11.904241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.211 qpair failed and we were unable to recover it. 00:36:12.211 [2024-12-13 05:52:11.904478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.211 [2024-12-13 05:52:11.904512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.211 qpair failed and we were unable to recover it. 00:36:12.211 [2024-12-13 05:52:11.904768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.211 [2024-12-13 05:52:11.904800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.211 qpair failed and we were unable to recover it. 00:36:12.211 [2024-12-13 05:52:11.904970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.211 [2024-12-13 05:52:11.905002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.211 qpair failed and we were unable to recover it. 00:36:12.212 [2024-12-13 05:52:11.905127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.212 [2024-12-13 05:52:11.905159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.212 qpair failed and we were unable to recover it. 00:36:12.212 [2024-12-13 05:52:11.905343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.212 [2024-12-13 05:52:11.905375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.212 qpair failed and we were unable to recover it. 00:36:12.212 [2024-12-13 05:52:11.905542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.212 [2024-12-13 05:52:11.905575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.212 qpair failed and we were unable to recover it. 
00:36:12.212 [2024-12-13 05:52:11.905837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.212 [2024-12-13 05:52:11.905869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.212 qpair failed and we were unable to recover it. 00:36:12.212 [2024-12-13 05:52:11.906048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.212 [2024-12-13 05:52:11.906080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.212 qpair failed and we were unable to recover it. 00:36:12.212 [2024-12-13 05:52:11.906274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.212 [2024-12-13 05:52:11.906307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.212 qpair failed and we were unable to recover it. 00:36:12.212 [2024-12-13 05:52:11.906411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.212 [2024-12-13 05:52:11.906443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.212 qpair failed and we were unable to recover it. 00:36:12.212 [2024-12-13 05:52:11.906694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.212 [2024-12-13 05:52:11.906726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.212 qpair failed and we were unable to recover it. 00:36:12.212 [2024-12-13 05:52:11.906909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.212 [2024-12-13 05:52:11.906941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.212 qpair failed and we were unable to recover it. 00:36:12.212 [2024-12-13 05:52:11.907137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.212 [2024-12-13 05:52:11.907169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.212 qpair failed and we were unable to recover it. 00:36:12.212 [2024-12-13 05:52:11.907361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.212 [2024-12-13 05:52:11.907394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.212 qpair failed and we were unable to recover it. 00:36:12.212 [2024-12-13 05:52:11.907652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.212 [2024-12-13 05:52:11.907685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.212 qpair failed and we were unable to recover it. 00:36:12.212 [2024-12-13 05:52:11.907937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.212 [2024-12-13 05:52:11.907969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.212 qpair failed and we were unable to recover it. 
00:36:12.212 [2024-12-13 05:52:11.908088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.212 [2024-12-13 05:52:11.908120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.212 qpair failed and we were unable to recover it. 00:36:12.212 [2024-12-13 05:52:11.908377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.212 [2024-12-13 05:52:11.908410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.212 qpair failed and we were unable to recover it. 00:36:12.212 [2024-12-13 05:52:11.908529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.212 [2024-12-13 05:52:11.908562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.212 qpair failed and we were unable to recover it. 00:36:12.212 [2024-12-13 05:52:11.908685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.212 [2024-12-13 05:52:11.908718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.212 qpair failed and we were unable to recover it. 00:36:12.212 [2024-12-13 05:52:11.908887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.212 [2024-12-13 05:52:11.908918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.212 qpair failed and we were unable to recover it. 00:36:12.212 [2024-12-13 05:52:11.909165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.212 [2024-12-13 05:52:11.909197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.212 qpair failed and we were unable to recover it. 00:36:12.212 [2024-12-13 05:52:11.909464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.212 [2024-12-13 05:52:11.909497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.212 qpair failed and we were unable to recover it. 00:36:12.212 [2024-12-13 05:52:11.909687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.212 [2024-12-13 05:52:11.909718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.212 qpair failed and we were unable to recover it. 00:36:12.212 [2024-12-13 05:52:11.909838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.212 [2024-12-13 05:52:11.909870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.212 qpair failed and we were unable to recover it. 00:36:12.212 [2024-12-13 05:52:11.910041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.212 [2024-12-13 05:52:11.910073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.212 qpair failed and we were unable to recover it. 
00:36:12.212 [2024-12-13 05:52:11.910193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.212 [2024-12-13 05:52:11.910225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.212 qpair failed and we were unable to recover it. 00:36:12.212 [2024-12-13 05:52:11.910418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.212 [2024-12-13 05:52:11.910459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.212 qpair failed and we were unable to recover it. 00:36:12.212 [2024-12-13 05:52:11.910588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.212 [2024-12-13 05:52:11.910620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.212 qpair failed and we were unable to recover it. 00:36:12.212 [2024-12-13 05:52:11.910746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.212 [2024-12-13 05:52:11.910779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.212 qpair failed and we were unable to recover it. 00:36:12.212 [2024-12-13 05:52:11.911039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.212 [2024-12-13 05:52:11.911071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.212 qpair failed and we were unable to recover it. 00:36:12.212 [2024-12-13 05:52:11.911246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.212 [2024-12-13 05:52:11.911278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.212 qpair failed and we were unable to recover it. 00:36:12.212 [2024-12-13 05:52:11.911461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.212 [2024-12-13 05:52:11.911497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.212 qpair failed and we were unable to recover it. 00:36:12.212 [2024-12-13 05:52:11.911684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.212 [2024-12-13 05:52:11.911716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.212 qpair failed and we were unable to recover it. 00:36:12.212 [2024-12-13 05:52:11.911839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.212 [2024-12-13 05:52:11.911877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.212 qpair failed and we were unable to recover it. 00:36:12.212 [2024-12-13 05:52:11.912124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.212 [2024-12-13 05:52:11.912156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.212 qpair failed and we were unable to recover it. 
00:36:12.212 [2024-12-13 05:52:11.912335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.212 [2024-12-13 05:52:11.912367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.212 qpair failed and we were unable to recover it. 00:36:12.212 [2024-12-13 05:52:11.912483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.212 [2024-12-13 05:52:11.912516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.212 qpair failed and we were unable to recover it. 00:36:12.212 [2024-12-13 05:52:11.912691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.212 [2024-12-13 05:52:11.912723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.212 qpair failed and we were unable to recover it. 00:36:12.212 [2024-12-13 05:52:11.912929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.212 [2024-12-13 05:52:11.912961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.212 qpair failed and we were unable to recover it. 00:36:12.212 [2024-12-13 05:52:11.913217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.212 [2024-12-13 05:52:11.913249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.212 qpair failed and we were unable to recover it. 00:36:12.212 [2024-12-13 05:52:11.913429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.212 [2024-12-13 05:52:11.913472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.213 qpair failed and we were unable to recover it. 00:36:12.213 [2024-12-13 05:52:11.913665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.213 [2024-12-13 05:52:11.913698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.213 qpair failed and we were unable to recover it. 00:36:12.213 [2024-12-13 05:52:11.913874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.213 [2024-12-13 05:52:11.913906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.213 qpair failed and we were unable to recover it. 00:36:12.213 [2024-12-13 05:52:11.914140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.213 [2024-12-13 05:52:11.914171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.213 qpair failed and we were unable to recover it. 00:36:12.213 [2024-12-13 05:52:11.914461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.213 [2024-12-13 05:52:11.914494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.213 qpair failed and we were unable to recover it. 
00:36:12.213 [2024-12-13 05:52:11.914700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.213 [2024-12-13 05:52:11.914731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.213 qpair failed and we were unable to recover it. 00:36:12.213 [2024-12-13 05:52:11.914990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.213 [2024-12-13 05:52:11.915023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.213 qpair failed and we were unable to recover it. 00:36:12.213 [2024-12-13 05:52:11.915215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.213 [2024-12-13 05:52:11.915247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.213 qpair failed and we were unable to recover it. 00:36:12.213 [2024-12-13 05:52:11.915359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.213 [2024-12-13 05:52:11.915392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.213 qpair failed and we were unable to recover it. 00:36:12.213 [2024-12-13 05:52:11.915532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.213 [2024-12-13 05:52:11.915565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.213 qpair failed and we were unable to recover it. 00:36:12.213 [2024-12-13 05:52:11.915734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.213 [2024-12-13 05:52:11.915765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.213 qpair failed and we were unable to recover it. 00:36:12.213 [2024-12-13 05:52:11.915965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.213 [2024-12-13 05:52:11.915996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.213 qpair failed and we were unable to recover it. 00:36:12.213 [2024-12-13 05:52:11.916123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.213 [2024-12-13 05:52:11.916155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.213 qpair failed and we were unable to recover it. 00:36:12.213 [2024-12-13 05:52:11.916358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.213 [2024-12-13 05:52:11.916390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.213 qpair failed and we were unable to recover it. 00:36:12.213 [2024-12-13 05:52:11.916528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.213 [2024-12-13 05:52:11.916561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.213 qpair failed and we were unable to recover it. 
00:36:12.213 [2024-12-13 05:52:11.916801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.213 [2024-12-13 05:52:11.916833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.213 qpair failed and we were unable to recover it. 00:36:12.213 [2024-12-13 05:52:11.916938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.213 [2024-12-13 05:52:11.916970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.213 qpair failed and we were unable to recover it. 00:36:12.213 [2024-12-13 05:52:11.917247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.213 [2024-12-13 05:52:11.917279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.213 qpair failed and we were unable to recover it. 00:36:12.213 [2024-12-13 05:52:11.917469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.213 [2024-12-13 05:52:11.917502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.213 qpair failed and we were unable to recover it. 00:36:12.213 [2024-12-13 05:52:11.917687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.213 [2024-12-13 05:52:11.917719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.213 qpair failed and we were unable to recover it. 00:36:12.213 [2024-12-13 05:52:11.917899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.213 [2024-12-13 05:52:11.917931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.213 qpair failed and we were unable to recover it. 00:36:12.213 [2024-12-13 05:52:11.918061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.213 [2024-12-13 05:52:11.918094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.213 qpair failed and we were unable to recover it. 00:36:12.213 [2024-12-13 05:52:11.918261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.213 [2024-12-13 05:52:11.918293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.213 qpair failed and we were unable to recover it. 00:36:12.213 [2024-12-13 05:52:11.918498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.213 [2024-12-13 05:52:11.918532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.213 qpair failed and we were unable to recover it. 00:36:12.213 [2024-12-13 05:52:11.918648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.213 [2024-12-13 05:52:11.918681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.213 qpair failed and we were unable to recover it. 
00:36:12.213 [2024-12-13 05:52:11.918795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.213 [2024-12-13 05:52:11.918826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.213 qpair failed and we were unable to recover it. 00:36:12.213 [2024-12-13 05:52:11.918955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.213 [2024-12-13 05:52:11.918987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.213 qpair failed and we were unable to recover it. 00:36:12.213 [2024-12-13 05:52:11.919173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.213 [2024-12-13 05:52:11.919205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.213 qpair failed and we were unable to recover it. 00:36:12.213 [2024-12-13 05:52:11.919472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.213 [2024-12-13 05:52:11.919506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.213 qpair failed and we were unable to recover it. 00:36:12.213 [2024-12-13 05:52:11.919679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.213 [2024-12-13 05:52:11.919711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.213 qpair failed and we were unable to recover it. 00:36:12.213 [2024-12-13 05:52:11.919889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.213 [2024-12-13 05:52:11.919921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.213 qpair failed and we were unable to recover it. 00:36:12.213 [2024-12-13 05:52:11.920111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.213 [2024-12-13 05:52:11.920143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.213 qpair failed and we were unable to recover it. 00:36:12.213 [2024-12-13 05:52:11.920325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.213 [2024-12-13 05:52:11.920357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.213 qpair failed and we were unable to recover it. 00:36:12.213 [2024-12-13 05:52:11.920529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.213 [2024-12-13 05:52:11.920568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.213 qpair failed and we were unable to recover it. 00:36:12.213 [2024-12-13 05:52:11.920815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.213 [2024-12-13 05:52:11.920847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.213 qpair failed and we were unable to recover it. 
00:36:12.213 [2024-12-13 05:52:11.921020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.213 [2024-12-13 05:52:11.921052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.213 qpair failed and we were unable to recover it. 00:36:12.213 [2024-12-13 05:52:11.921311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.213 [2024-12-13 05:52:11.921343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.213 qpair failed and we were unable to recover it. 00:36:12.213 [2024-12-13 05:52:11.921467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.213 [2024-12-13 05:52:11.921503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.213 qpair failed and we were unable to recover it. 00:36:12.213 [2024-12-13 05:52:11.921685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.213 [2024-12-13 05:52:11.921717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.213 qpair failed and we were unable to recover it. 00:36:12.213 [2024-12-13 05:52:11.921897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.213 [2024-12-13 05:52:11.921929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.213 qpair failed and we were unable to recover it. 00:36:12.214 [2024-12-13 05:52:11.922115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.214 [2024-12-13 05:52:11.922148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.214 qpair failed and we were unable to recover it. 00:36:12.214 [2024-12-13 05:52:11.922327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.214 [2024-12-13 05:52:11.922358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.214 qpair failed and we were unable to recover it. 00:36:12.214 [2024-12-13 05:52:11.922568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.214 [2024-12-13 05:52:11.922602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.214 qpair failed and we were unable to recover it. 00:36:12.214 [2024-12-13 05:52:11.922790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.214 [2024-12-13 05:52:11.922822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.214 qpair failed and we were unable to recover it. 00:36:12.214 [2024-12-13 05:52:11.922995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.214 [2024-12-13 05:52:11.923028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.214 qpair failed and we were unable to recover it. 
00:36:12.214 [2024-12-13 05:52:11.923137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.214 [2024-12-13 05:52:11.923169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.214 qpair failed and we were unable to recover it. 00:36:12.214 [2024-12-13 05:52:11.923425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.214 [2024-12-13 05:52:11.923464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.214 qpair failed and we were unable to recover it. 00:36:12.214 [2024-12-13 05:52:11.923638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.214 [2024-12-13 05:52:11.923671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.214 qpair failed and we were unable to recover it. 00:36:12.214 [2024-12-13 05:52:11.923852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.214 [2024-12-13 05:52:11.923885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.214 qpair failed and we were unable to recover it. 00:36:12.214 [2024-12-13 05:52:11.924119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.214 [2024-12-13 05:52:11.924151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.214 qpair failed and we were unable to recover it. 00:36:12.214 [2024-12-13 05:52:11.924273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.214 [2024-12-13 05:52:11.924305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.214 qpair failed and we were unable to recover it. 00:36:12.214 [2024-12-13 05:52:11.924406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.214 [2024-12-13 05:52:11.924438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.214 qpair failed and we were unable to recover it. 00:36:12.214 [2024-12-13 05:52:11.924674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.214 [2024-12-13 05:52:11.924706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.214 qpair failed and we were unable to recover it. 00:36:12.214 [2024-12-13 05:52:11.924908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.214 [2024-12-13 05:52:11.924940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.214 qpair failed and we were unable to recover it. 00:36:12.214 [2024-12-13 05:52:11.925189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.214 [2024-12-13 05:52:11.925221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.214 qpair failed and we were unable to recover it. 
00:36:12.214 [2024-12-13 05:52:11.925404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.214 [2024-12-13 05:52:11.925436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.214 qpair failed and we were unable to recover it. 00:36:12.214 [2024-12-13 05:52:11.925623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.214 [2024-12-13 05:52:11.925655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.214 qpair failed and we were unable to recover it. 00:36:12.214 [2024-12-13 05:52:11.925918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.214 [2024-12-13 05:52:11.925950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.214 qpair failed and we were unable to recover it. 00:36:12.214 [2024-12-13 05:52:11.926141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.214 [2024-12-13 05:52:11.926173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.214 qpair failed and we were unable to recover it. 00:36:12.214 [2024-12-13 05:52:11.926299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.214 [2024-12-13 05:52:11.926331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.214 qpair failed and we were unable to recover it. 00:36:12.214 [2024-12-13 05:52:11.926523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.214 [2024-12-13 05:52:11.926558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.214 qpair failed and we were unable to recover it. 00:36:12.214 [2024-12-13 05:52:11.926744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.214 [2024-12-13 05:52:11.926776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.214 qpair failed and we were unable to recover it. 00:36:12.214 [2024-12-13 05:52:11.926948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.214 [2024-12-13 05:52:11.926980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.214 qpair failed and we were unable to recover it. 00:36:12.214 [2024-12-13 05:52:11.927219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.214 [2024-12-13 05:52:11.927251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.214 qpair failed and we were unable to recover it. 00:36:12.214 [2024-12-13 05:52:11.927433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.214 [2024-12-13 05:52:11.927473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.214 qpair failed and we were unable to recover it. 
00:36:12.214 [2024-12-13 05:52:11.927709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.214 [2024-12-13 05:52:11.927741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.214 qpair failed and we were unable to recover it. 00:36:12.214 [2024-12-13 05:52:11.927993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.214 [2024-12-13 05:52:11.928026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.214 qpair failed and we were unable to recover it. 00:36:12.214 [2024-12-13 05:52:11.928221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.214 [2024-12-13 05:52:11.928252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.214 qpair failed and we were unable to recover it. 00:36:12.214 [2024-12-13 05:52:11.928423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.214 [2024-12-13 05:52:11.928463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.214 qpair failed and we were unable to recover it. 00:36:12.214 [2024-12-13 05:52:11.928650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.214 [2024-12-13 05:52:11.928682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.214 qpair failed and we were unable to recover it. 00:36:12.214 [2024-12-13 05:52:11.928864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.214 [2024-12-13 05:52:11.928896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.214 qpair failed and we were unable to recover it. 00:36:12.214 [2024-12-13 05:52:11.929139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.214 [2024-12-13 05:52:11.929171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.214 qpair failed and we were unable to recover it. 00:36:12.214 [2024-12-13 05:52:11.929430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.214 [2024-12-13 05:52:11.929474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.214 qpair failed and we were unable to recover it. 00:36:12.214 [2024-12-13 05:52:11.929609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.214 [2024-12-13 05:52:11.929646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.214 qpair failed and we were unable to recover it. 00:36:12.214 [2024-12-13 05:52:11.929900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.214 [2024-12-13 05:52:11.929932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.214 qpair failed and we were unable to recover it. 
00:36:12.214 [2024-12-13 05:52:11.930107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.214 [2024-12-13 05:52:11.930140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.214 qpair failed and we were unable to recover it.
00:36:12.214 [... the same three-entry sequence (posix_sock_create: connect() failed, errno = 111 -> nvme_tcp_qpair_connect_sock: sock connection error -> qpair failed and we were unable to recover it.) repeats ~100 more times for tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420, timestamps 05:52:11.930 through 05:52:11.952 ...]
00:36:12.217 [2024-12-13 05:52:11.952872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.217 [2024-12-13 05:52:11.952942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.217 qpair failed and we were unable to recover it.
00:36:12.217 [... the same sequence repeats ~100 more times for tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420, timestamps 05:52:11.952 through 05:52:11.976 ...]
00:36:12.220 [2024-12-13 05:52:11.976338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.220 [2024-12-13 05:52:11.976370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.220 qpair failed and we were unable to recover it. 00:36:12.220 [2024-12-13 05:52:11.976611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.220 [2024-12-13 05:52:11.976644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.220 qpair failed and we were unable to recover it. 00:36:12.220 [2024-12-13 05:52:11.976854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.220 [2024-12-13 05:52:11.976886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.220 qpair failed and we were unable to recover it. 00:36:12.220 [2024-12-13 05:52:11.977150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.220 [2024-12-13 05:52:11.977182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.220 qpair failed and we were unable to recover it. 00:36:12.220 [2024-12-13 05:52:11.977363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.220 [2024-12-13 05:52:11.977395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.220 qpair failed and we were unable to recover it. 00:36:12.220 [2024-12-13 05:52:11.977686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.220 [2024-12-13 05:52:11.977719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.220 qpair failed and we were unable to recover it. 00:36:12.220 [2024-12-13 05:52:11.977839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.220 [2024-12-13 05:52:11.977870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.220 qpair failed and we were unable to recover it. 00:36:12.220 [2024-12-13 05:52:11.978006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.220 [2024-12-13 05:52:11.978037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.220 qpair failed and we were unable to recover it. 00:36:12.220 [2024-12-13 05:52:11.978274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.220 [2024-12-13 05:52:11.978306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.220 qpair failed and we were unable to recover it. 00:36:12.220 [2024-12-13 05:52:11.978508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.220 [2024-12-13 05:52:11.978548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.220 qpair failed and we were unable to recover it. 
00:36:12.220 [2024-12-13 05:52:11.978728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.220 [2024-12-13 05:52:11.978760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.220 qpair failed and we were unable to recover it. 00:36:12.220 [2024-12-13 05:52:11.978943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.220 [2024-12-13 05:52:11.978975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.220 qpair failed and we were unable to recover it. 00:36:12.220 [2024-12-13 05:52:11.979158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.220 [2024-12-13 05:52:11.979191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.220 qpair failed and we were unable to recover it. 00:36:12.220 [2024-12-13 05:52:11.979444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.220 [2024-12-13 05:52:11.979486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.220 qpair failed and we were unable to recover it. 00:36:12.220 [2024-12-13 05:52:11.979660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.220 [2024-12-13 05:52:11.979692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.220 qpair failed and we were unable to recover it. 00:36:12.220 [2024-12-13 05:52:11.979950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.220 [2024-12-13 05:52:11.979988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.220 qpair failed and we were unable to recover it. 00:36:12.220 [2024-12-13 05:52:11.980165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.220 [2024-12-13 05:52:11.980196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.220 qpair failed and we were unable to recover it. 00:36:12.220 [2024-12-13 05:52:11.980408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.220 [2024-12-13 05:52:11.980440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.220 qpair failed and we were unable to recover it. 00:36:12.220 [2024-12-13 05:52:11.980635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.220 [2024-12-13 05:52:11.980667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.220 qpair failed and we were unable to recover it. 00:36:12.220 [2024-12-13 05:52:11.980855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.221 [2024-12-13 05:52:11.980887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.221 qpair failed and we were unable to recover it. 
00:36:12.221 [2024-12-13 05:52:11.981125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.221 [2024-12-13 05:52:11.981156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.221 qpair failed and we were unable to recover it. 00:36:12.221 [2024-12-13 05:52:11.981261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.221 [2024-12-13 05:52:11.981293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.221 qpair failed and we were unable to recover it. 00:36:12.221 [2024-12-13 05:52:11.981409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.221 [2024-12-13 05:52:11.981442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.221 qpair failed and we were unable to recover it. 00:36:12.221 [2024-12-13 05:52:11.981748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.221 [2024-12-13 05:52:11.981781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.221 qpair failed and we were unable to recover it. 00:36:12.221 [2024-12-13 05:52:11.981985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.221 [2024-12-13 05:52:11.982017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.221 qpair failed and we were unable to recover it. 00:36:12.221 [2024-12-13 05:52:11.982219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.221 [2024-12-13 05:52:11.982251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.221 qpair failed and we were unable to recover it. 00:36:12.221 [2024-12-13 05:52:11.982518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.221 [2024-12-13 05:52:11.982551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.221 qpair failed and we were unable to recover it. 00:36:12.221 [2024-12-13 05:52:11.982740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.221 [2024-12-13 05:52:11.982772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.221 qpair failed and we were unable to recover it. 00:36:12.221 [2024-12-13 05:52:11.983027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.221 [2024-12-13 05:52:11.983059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.221 qpair failed and we were unable to recover it. 00:36:12.221 [2024-12-13 05:52:11.983302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.221 [2024-12-13 05:52:11.983335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.221 qpair failed and we were unable to recover it. 
00:36:12.221 [2024-12-13 05:52:11.983598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.221 [2024-12-13 05:52:11.983631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.221 qpair failed and we were unable to recover it. 00:36:12.221 [2024-12-13 05:52:11.983816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.221 [2024-12-13 05:52:11.983848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.221 qpair failed and we were unable to recover it. 00:36:12.221 [2024-12-13 05:52:11.983965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.221 [2024-12-13 05:52:11.983997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.221 qpair failed and we were unable to recover it. 00:36:12.221 [2024-12-13 05:52:11.984128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.221 [2024-12-13 05:52:11.984160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.221 qpair failed and we were unable to recover it. 00:36:12.221 [2024-12-13 05:52:11.984341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.221 [2024-12-13 05:52:11.984372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.221 qpair failed and we were unable to recover it. 00:36:12.221 [2024-12-13 05:52:11.984499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.221 [2024-12-13 05:52:11.984532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.221 qpair failed and we were unable to recover it. 00:36:12.221 [2024-12-13 05:52:11.984734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.221 [2024-12-13 05:52:11.984765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.221 qpair failed and we were unable to recover it. 00:36:12.221 [2024-12-13 05:52:11.984957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.221 [2024-12-13 05:52:11.984989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.221 qpair failed and we were unable to recover it. 00:36:12.221 [2024-12-13 05:52:11.985271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.221 [2024-12-13 05:52:11.985304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.221 qpair failed and we were unable to recover it. 00:36:12.221 [2024-12-13 05:52:11.985485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.221 [2024-12-13 05:52:11.985518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.221 qpair failed and we were unable to recover it. 
00:36:12.221 [2024-12-13 05:52:11.985638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.221 [2024-12-13 05:52:11.985670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.221 qpair failed and we were unable to recover it. 00:36:12.221 [2024-12-13 05:52:11.985908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.221 [2024-12-13 05:52:11.985940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.221 qpair failed and we were unable to recover it. 00:36:12.221 [2024-12-13 05:52:11.986072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.221 [2024-12-13 05:52:11.986105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.221 qpair failed and we were unable to recover it. 00:36:12.221 [2024-12-13 05:52:11.986298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.221 [2024-12-13 05:52:11.986330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.221 qpair failed and we were unable to recover it. 00:36:12.221 [2024-12-13 05:52:11.986445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.221 [2024-12-13 05:52:11.986485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.221 qpair failed and we were unable to recover it. 00:36:12.221 [2024-12-13 05:52:11.986597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.221 [2024-12-13 05:52:11.986629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.221 qpair failed and we were unable to recover it. 00:36:12.221 [2024-12-13 05:52:11.986748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.221 [2024-12-13 05:52:11.986780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.221 qpair failed and we were unable to recover it. 00:36:12.221 [2024-12-13 05:52:11.986907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.221 [2024-12-13 05:52:11.986938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.221 qpair failed and we were unable to recover it. 00:36:12.221 [2024-12-13 05:52:11.987152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.221 [2024-12-13 05:52:11.987183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.221 qpair failed and we were unable to recover it. 00:36:12.221 [2024-12-13 05:52:11.987356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.221 [2024-12-13 05:52:11.987387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.221 qpair failed and we were unable to recover it. 
00:36:12.221 [2024-12-13 05:52:11.987646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.221 [2024-12-13 05:52:11.987679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.221 qpair failed and we were unable to recover it. 00:36:12.221 [2024-12-13 05:52:11.987895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.221 [2024-12-13 05:52:11.987926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.221 qpair failed and we were unable to recover it. 00:36:12.221 [2024-12-13 05:52:11.988144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.221 [2024-12-13 05:52:11.988177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.221 qpair failed and we were unable to recover it. 00:36:12.221 [2024-12-13 05:52:11.988346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.221 [2024-12-13 05:52:11.988378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.221 qpair failed and we were unable to recover it. 00:36:12.221 [2024-12-13 05:52:11.988549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.221 [2024-12-13 05:52:11.988582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.221 qpair failed and we were unable to recover it. 00:36:12.221 [2024-12-13 05:52:11.988715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.221 [2024-12-13 05:52:11.988752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.221 qpair failed and we were unable to recover it. 00:36:12.221 [2024-12-13 05:52:11.988963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.221 [2024-12-13 05:52:11.988994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.221 qpair failed and we were unable to recover it. 00:36:12.221 [2024-12-13 05:52:11.989125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.221 [2024-12-13 05:52:11.989156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.221 qpair failed and we were unable to recover it. 00:36:12.221 [2024-12-13 05:52:11.989341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.222 [2024-12-13 05:52:11.989372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.222 qpair failed and we were unable to recover it. 00:36:12.222 [2024-12-13 05:52:11.989555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.222 [2024-12-13 05:52:11.989589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.222 qpair failed and we were unable to recover it. 
00:36:12.222 [2024-12-13 05:52:11.989832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.222 [2024-12-13 05:52:11.989864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.222 qpair failed and we were unable to recover it. 00:36:12.222 [2024-12-13 05:52:11.989982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.222 [2024-12-13 05:52:11.990014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.222 qpair failed and we were unable to recover it. 00:36:12.222 [2024-12-13 05:52:11.990196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.222 [2024-12-13 05:52:11.990228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.222 qpair failed and we were unable to recover it. 00:36:12.222 [2024-12-13 05:52:11.990422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.222 [2024-12-13 05:52:11.990464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.222 qpair failed and we were unable to recover it. 00:36:12.222 [2024-12-13 05:52:11.990599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.222 [2024-12-13 05:52:11.990630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.222 qpair failed and we were unable to recover it. 00:36:12.222 [2024-12-13 05:52:11.990809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.222 [2024-12-13 05:52:11.990841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.222 qpair failed and we were unable to recover it. 00:36:12.222 [2024-12-13 05:52:11.991013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.222 [2024-12-13 05:52:11.991044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.222 qpair failed and we were unable to recover it. 00:36:12.222 [2024-12-13 05:52:11.991247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.222 [2024-12-13 05:52:11.991278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.222 qpair failed and we were unable to recover it. 00:36:12.222 [2024-12-13 05:52:11.991389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.222 [2024-12-13 05:52:11.991421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.222 qpair failed and we were unable to recover it. 00:36:12.222 [2024-12-13 05:52:11.991615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.222 [2024-12-13 05:52:11.991648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.222 qpair failed and we were unable to recover it. 
00:36:12.222 [2024-12-13 05:52:11.991832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.222 [2024-12-13 05:52:11.991863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.222 qpair failed and we were unable to recover it. 00:36:12.222 [2024-12-13 05:52:11.992047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.222 [2024-12-13 05:52:11.992079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.222 qpair failed and we were unable to recover it. 00:36:12.222 [2024-12-13 05:52:11.992267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.222 [2024-12-13 05:52:11.992300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.222 qpair failed and we were unable to recover it. 00:36:12.222 [2024-12-13 05:52:11.992473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.222 [2024-12-13 05:52:11.992506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.222 qpair failed and we were unable to recover it. 00:36:12.222 [2024-12-13 05:52:11.992694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.222 [2024-12-13 05:52:11.992726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.222 qpair failed and we were unable to recover it. 00:36:12.222 [2024-12-13 05:52:11.992847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.222 [2024-12-13 05:52:11.992880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.222 qpair failed and we were unable to recover it. 00:36:12.222 [2024-12-13 05:52:11.993068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.222 [2024-12-13 05:52:11.993099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.222 qpair failed and we were unable to recover it. 00:36:12.222 [2024-12-13 05:52:11.993338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.222 [2024-12-13 05:52:11.993370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.222 qpair failed and we were unable to recover it. 00:36:12.222 [2024-12-13 05:52:11.993505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.222 [2024-12-13 05:52:11.993538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.222 qpair failed and we were unable to recover it. 00:36:12.222 [2024-12-13 05:52:11.993715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.222 [2024-12-13 05:52:11.993747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.222 qpair failed and we were unable to recover it. 
00:36:12.222 [2024-12-13 05:52:11.993959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.222 [2024-12-13 05:52:11.993991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.222 qpair failed and we were unable to recover it. 00:36:12.222 [2024-12-13 05:52:11.994095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.222 [2024-12-13 05:52:11.994126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.222 qpair failed and we were unable to recover it. 00:36:12.222 [2024-12-13 05:52:11.994416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.222 [2024-12-13 05:52:11.994455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.222 qpair failed and we were unable to recover it. 00:36:12.222 [2024-12-13 05:52:11.994719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.222 [2024-12-13 05:52:11.994751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.222 qpair failed and we were unable to recover it. 00:36:12.222 [2024-12-13 05:52:11.994865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.222 [2024-12-13 05:52:11.994896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.222 qpair failed and we were unable to recover it. 00:36:12.222 [2024-12-13 05:52:11.995079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.222 [2024-12-13 05:52:11.995111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.222 qpair failed and we were unable to recover it. 00:36:12.222 [2024-12-13 05:52:11.995229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.222 [2024-12-13 05:52:11.995261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.222 qpair failed and we were unable to recover it. 00:36:12.222 [2024-12-13 05:52:11.995484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.222 [2024-12-13 05:52:11.995517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.222 qpair failed and we were unable to recover it. 00:36:12.222 [2024-12-13 05:52:11.995774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.222 [2024-12-13 05:52:11.995806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.222 qpair failed and we were unable to recover it. 00:36:12.222 [2024-12-13 05:52:11.995919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.222 [2024-12-13 05:52:11.995951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.222 qpair failed and we were unable to recover it. 
00:36:12.222 [2024-12-13 05:52:11.996143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.222 [2024-12-13 05:52:11.996174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.222 qpair failed and we were unable to recover it. 00:36:12.222 [2024-12-13 05:52:11.996409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.222 [2024-12-13 05:52:11.996441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.222 qpair failed and we were unable to recover it. 00:36:12.222 [2024-12-13 05:52:11.996642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.222 [2024-12-13 05:52:11.996675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.222 qpair failed and we were unable to recover it. 00:36:12.222 [2024-12-13 05:52:11.996911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.222 [2024-12-13 05:52:11.996943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.222 qpair failed and we were unable to recover it. 00:36:12.222 [2024-12-13 05:52:11.997072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.222 [2024-12-13 05:52:11.997104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.222 qpair failed and we were unable to recover it. 00:36:12.222 [2024-12-13 05:52:11.997222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.222 [2024-12-13 05:52:11.997265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.222 qpair failed and we were unable to recover it. 00:36:12.222 [2024-12-13 05:52:11.997474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.222 [2024-12-13 05:52:11.997508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.222 qpair failed and we were unable to recover it. 00:36:12.222 [2024-12-13 05:52:11.997694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.223 [2024-12-13 05:52:11.997726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.223 qpair failed and we were unable to recover it. 00:36:12.223 [2024-12-13 05:52:11.997850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.223 [2024-12-13 05:52:11.997881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.223 qpair failed and we were unable to recover it. 00:36:12.223 [2024-12-13 05:52:11.998006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.223 [2024-12-13 05:52:11.998038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.223 qpair failed and we were unable to recover it. 
00:36:12.223 [2024-12-13 05:52:11.998299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.223 [2024-12-13 05:52:11.998330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.223 qpair failed and we were unable to recover it. 00:36:12.223 [2024-12-13 05:52:11.998546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.223 [2024-12-13 05:52:11.998579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.223 qpair failed and we were unable to recover it. 00:36:12.223 [2024-12-13 05:52:11.998694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.223 [2024-12-13 05:52:11.998726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.223 qpair failed and we were unable to recover it. 00:36:12.223 [2024-12-13 05:52:11.998857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.223 [2024-12-13 05:52:11.998888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.223 qpair failed and we were unable to recover it. 00:36:12.223 [2024-12-13 05:52:11.999015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.223 [2024-12-13 05:52:11.999048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.223 qpair failed and we were unable to recover it. 00:36:12.223 [2024-12-13 05:52:11.999262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.223 [2024-12-13 05:52:11.999294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.223 qpair failed and we were unable to recover it. 00:36:12.223 [2024-12-13 05:52:11.999536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.223 [2024-12-13 05:52:11.999569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.223 qpair failed and we were unable to recover it. 00:36:12.223 [2024-12-13 05:52:11.999737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.223 [2024-12-13 05:52:11.999769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.223 qpair failed and we were unable to recover it. 00:36:12.223 [2024-12-13 05:52:11.999890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.223 [2024-12-13 05:52:11.999922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.223 qpair failed and we were unable to recover it. 00:36:12.223 [2024-12-13 05:52:12.000201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.223 [2024-12-13 05:52:12.000233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.223 qpair failed and we were unable to recover it. 
00:36:12.223 [2024-12-13 05:52:12.000418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.223 [2024-12-13 05:52:12.000457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.223 qpair failed and we were unable to recover it. 00:36:12.223 [2024-12-13 05:52:12.000697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.223 [2024-12-13 05:52:12.000729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.223 qpair failed and we were unable to recover it. 00:36:12.223 [2024-12-13 05:52:12.000917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.223 [2024-12-13 05:52:12.000949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.223 qpair failed and we were unable to recover it. 00:36:12.223 [2024-12-13 05:52:12.001144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.223 [2024-12-13 05:52:12.001176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.223 qpair failed and we were unable to recover it. 00:36:12.223 [2024-12-13 05:52:12.001291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.223 [2024-12-13 05:52:12.001323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.223 qpair failed and we were unable to recover it. 00:36:12.223 [2024-12-13 05:52:12.001500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.223 [2024-12-13 05:52:12.001534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.223 qpair failed and we were unable to recover it. 00:36:12.223 [2024-12-13 05:52:12.001744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.223 [2024-12-13 05:52:12.001776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.223 qpair failed and we were unable to recover it. 00:36:12.223 [2024-12-13 05:52:12.001963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.223 [2024-12-13 05:52:12.001995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.223 qpair failed and we were unable to recover it. 00:36:12.223 [2024-12-13 05:52:12.002191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.223 [2024-12-13 05:52:12.002222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.223 qpair failed and we were unable to recover it. 00:36:12.223 [2024-12-13 05:52:12.002475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.223 [2024-12-13 05:52:12.002508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.223 qpair failed and we were unable to recover it. 
00:36:12.223 [2024-12-13 05:52:12.002616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.223 [2024-12-13 05:52:12.002647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.223 qpair failed and we were unable to recover it. 00:36:12.223 [2024-12-13 05:52:12.002826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.223 [2024-12-13 05:52:12.002858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.223 qpair failed and we were unable to recover it. 00:36:12.223 [2024-12-13 05:52:12.003127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.223 [2024-12-13 05:52:12.003159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.223 qpair failed and we were unable to recover it. 00:36:12.223 [2024-12-13 05:52:12.003325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.223 [2024-12-13 05:52:12.003357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.223 qpair failed and we were unable to recover it. 00:36:12.223 [2024-12-13 05:52:12.003487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.223 [2024-12-13 05:52:12.003521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.223 qpair failed and we were unable to recover it. 00:36:12.223 [2024-12-13 05:52:12.003756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.223 [2024-12-13 05:52:12.003787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.223 qpair failed and we were unable to recover it. 00:36:12.223 [2024-12-13 05:52:12.003912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.223 [2024-12-13 05:52:12.003944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.223 qpair failed and we were unable to recover it. 00:36:12.223 [2024-12-13 05:52:12.004059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.223 [2024-12-13 05:52:12.004091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.223 qpair failed and we were unable to recover it. 00:36:12.223 [2024-12-13 05:52:12.004279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.223 [2024-12-13 05:52:12.004311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.223 qpair failed and we were unable to recover it. 00:36:12.223 [2024-12-13 05:52:12.004439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.223 [2024-12-13 05:52:12.004481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.223 qpair failed and we were unable to recover it. 
00:36:12.223 [2024-12-13 05:52:12.004665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.223 [2024-12-13 05:52:12.004697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420
00:36:12.223 qpair failed and we were unable to recover it.
00:36:12.229 [... the same three-line error (posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats back-to-back roughly 200 more times between 05:52:12.004 and 05:52:12.049, every attempt against the same tqpair, address, and port ...]
00:36:12.229 [2024-12-13 05:52:12.049757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.229 [2024-12-13 05:52:12.049789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.229 qpair failed and we were unable to recover it. 00:36:12.229 [2024-12-13 05:52:12.050050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.229 [2024-12-13 05:52:12.050082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.229 qpair failed and we were unable to recover it. 00:36:12.229 [2024-12-13 05:52:12.050267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.229 [2024-12-13 05:52:12.050299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.229 qpair failed and we were unable to recover it. 00:36:12.229 [2024-12-13 05:52:12.050482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.229 [2024-12-13 05:52:12.050516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.229 qpair failed and we were unable to recover it. 00:36:12.229 [2024-12-13 05:52:12.050653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.229 [2024-12-13 05:52:12.050685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.229 qpair failed and we were unable to recover it. 00:36:12.229 [2024-12-13 05:52:12.050791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.229 [2024-12-13 05:52:12.050823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.229 qpair failed and we were unable to recover it. 00:36:12.229 [2024-12-13 05:52:12.050996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.229 [2024-12-13 05:52:12.051028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.229 qpair failed and we were unable to recover it. 00:36:12.229 [2024-12-13 05:52:12.051294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.229 [2024-12-13 05:52:12.051327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.229 qpair failed and we were unable to recover it. 00:36:12.229 [2024-12-13 05:52:12.051457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.229 [2024-12-13 05:52:12.051490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.229 qpair failed and we were unable to recover it. 00:36:12.229 [2024-12-13 05:52:12.051632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.229 [2024-12-13 05:52:12.051665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.229 qpair failed and we were unable to recover it. 
00:36:12.229 [2024-12-13 05:52:12.051846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.229 [2024-12-13 05:52:12.051878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.229 qpair failed and we were unable to recover it. 00:36:12.229 [2024-12-13 05:52:12.052070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.229 [2024-12-13 05:52:12.052102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.229 qpair failed and we were unable to recover it. 00:36:12.229 [2024-12-13 05:52:12.052270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.229 [2024-12-13 05:52:12.052302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.229 qpair failed and we were unable to recover it. 00:36:12.229 [2024-12-13 05:52:12.052485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.229 [2024-12-13 05:52:12.052518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.229 qpair failed and we were unable to recover it. 00:36:12.229 [2024-12-13 05:52:12.052723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.229 [2024-12-13 05:52:12.052755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.229 qpair failed and we were unable to recover it. 00:36:12.229 [2024-12-13 05:52:12.052887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.229 [2024-12-13 05:52:12.052919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.229 qpair failed and we were unable to recover it. 00:36:12.229 [2024-12-13 05:52:12.053039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.229 [2024-12-13 05:52:12.053071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.229 qpair failed and we were unable to recover it. 00:36:12.229 [2024-12-13 05:52:12.053240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.229 [2024-12-13 05:52:12.053272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.229 qpair failed and we were unable to recover it. 00:36:12.229 [2024-12-13 05:52:12.053393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.229 [2024-12-13 05:52:12.053425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.229 qpair failed and we were unable to recover it. 00:36:12.229 [2024-12-13 05:52:12.053633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.229 [2024-12-13 05:52:12.053665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.229 qpair failed and we were unable to recover it. 
00:36:12.229 [2024-12-13 05:52:12.053839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.229 [2024-12-13 05:52:12.053871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.229 qpair failed and we were unable to recover it. 00:36:12.229 [2024-12-13 05:52:12.054063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.229 [2024-12-13 05:52:12.054094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.229 qpair failed and we were unable to recover it. 00:36:12.229 [2024-12-13 05:52:12.054281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.230 [2024-12-13 05:52:12.054313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.230 qpair failed and we were unable to recover it. 00:36:12.230 [2024-12-13 05:52:12.054506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.230 [2024-12-13 05:52:12.054539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.230 qpair failed and we were unable to recover it. 00:36:12.230 [2024-12-13 05:52:12.054777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.230 [2024-12-13 05:52:12.054808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.230 qpair failed and we were unable to recover it. 00:36:12.230 [2024-12-13 05:52:12.054995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.230 [2024-12-13 05:52:12.055027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.230 qpair failed and we were unable to recover it. 00:36:12.230 [2024-12-13 05:52:12.055217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.230 [2024-12-13 05:52:12.055248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.230 qpair failed and we were unable to recover it. 00:36:12.230 [2024-12-13 05:52:12.055385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.230 [2024-12-13 05:52:12.055417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.230 qpair failed and we were unable to recover it. 00:36:12.230 [2024-12-13 05:52:12.055634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.230 [2024-12-13 05:52:12.055667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.230 qpair failed and we were unable to recover it. 00:36:12.230 [2024-12-13 05:52:12.055903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.230 [2024-12-13 05:52:12.055935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.230 qpair failed and we were unable to recover it. 
00:36:12.230 [2024-12-13 05:52:12.056119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.230 [2024-12-13 05:52:12.056151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.230 qpair failed and we were unable to recover it. 00:36:12.230 [2024-12-13 05:52:12.056323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.230 [2024-12-13 05:52:12.056354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.230 qpair failed and we were unable to recover it. 00:36:12.230 [2024-12-13 05:52:12.056488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.230 [2024-12-13 05:52:12.056521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.230 qpair failed and we were unable to recover it. 00:36:12.230 [2024-12-13 05:52:12.056703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.230 [2024-12-13 05:52:12.056735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.230 qpair failed and we were unable to recover it. 00:36:12.230 [2024-12-13 05:52:12.056915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.230 [2024-12-13 05:52:12.056947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.230 qpair failed and we were unable to recover it. 00:36:12.230 [2024-12-13 05:52:12.057207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.230 [2024-12-13 05:52:12.057244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.230 qpair failed and we were unable to recover it. 00:36:12.230 [2024-12-13 05:52:12.057428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.230 [2024-12-13 05:52:12.057477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.230 qpair failed and we were unable to recover it. 00:36:12.230 [2024-12-13 05:52:12.057715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.230 [2024-12-13 05:52:12.057747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.230 qpair failed and we were unable to recover it. 00:36:12.230 [2024-12-13 05:52:12.057879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.230 [2024-12-13 05:52:12.057911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.230 qpair failed and we were unable to recover it. 00:36:12.230 [2024-12-13 05:52:12.058085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.230 [2024-12-13 05:52:12.058117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.230 qpair failed and we were unable to recover it. 
00:36:12.230 [2024-12-13 05:52:12.058255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.230 [2024-12-13 05:52:12.058287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.230 qpair failed and we were unable to recover it. 00:36:12.230 [2024-12-13 05:52:12.058418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.230 [2024-12-13 05:52:12.058460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.230 qpair failed and we were unable to recover it. 00:36:12.230 [2024-12-13 05:52:12.058583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.230 [2024-12-13 05:52:12.058616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.230 qpair failed and we were unable to recover it. 00:36:12.230 [2024-12-13 05:52:12.058737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.230 [2024-12-13 05:52:12.058768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.230 qpair failed and we were unable to recover it. 00:36:12.230 [2024-12-13 05:52:12.059026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.230 [2024-12-13 05:52:12.059058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.230 qpair failed and we were unable to recover it. 00:36:12.230 [2024-12-13 05:52:12.059248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.230 [2024-12-13 05:52:12.059280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.230 qpair failed and we were unable to recover it. 00:36:12.230 [2024-12-13 05:52:12.059479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.230 [2024-12-13 05:52:12.059512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.230 qpair failed and we were unable to recover it. 00:36:12.230 [2024-12-13 05:52:12.059680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.230 [2024-12-13 05:52:12.059712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.230 qpair failed and we were unable to recover it. 00:36:12.230 [2024-12-13 05:52:12.059849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.230 [2024-12-13 05:52:12.059881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.230 qpair failed and we were unable to recover it. 00:36:12.230 [2024-12-13 05:52:12.060077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.230 [2024-12-13 05:52:12.060109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.230 qpair failed and we were unable to recover it. 
00:36:12.230 [2024-12-13 05:52:12.060297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.230 [2024-12-13 05:52:12.060329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.230 qpair failed and we were unable to recover it. 00:36:12.230 [2024-12-13 05:52:12.060444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.230 [2024-12-13 05:52:12.060486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.230 qpair failed and we were unable to recover it. 00:36:12.230 [2024-12-13 05:52:12.060703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.230 [2024-12-13 05:52:12.060735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.230 qpair failed and we were unable to recover it. 00:36:12.230 [2024-12-13 05:52:12.060979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.230 [2024-12-13 05:52:12.061010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.230 qpair failed and we were unable to recover it. 00:36:12.230 [2024-12-13 05:52:12.061226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.230 [2024-12-13 05:52:12.061258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.230 qpair failed and we were unable to recover it. 00:36:12.230 [2024-12-13 05:52:12.061387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.230 [2024-12-13 05:52:12.061420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.230 qpair failed and we were unable to recover it. 00:36:12.230 [2024-12-13 05:52:12.061656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.230 [2024-12-13 05:52:12.061689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.230 qpair failed and we were unable to recover it. 00:36:12.230 [2024-12-13 05:52:12.061931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.230 [2024-12-13 05:52:12.061964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.230 qpair failed and we were unable to recover it. 00:36:12.230 [2024-12-13 05:52:12.062090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.230 [2024-12-13 05:52:12.062122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.230 qpair failed and we were unable to recover it. 00:36:12.230 [2024-12-13 05:52:12.062239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.230 [2024-12-13 05:52:12.062271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.230 qpair failed and we were unable to recover it. 
00:36:12.230 [2024-12-13 05:52:12.062398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.230 [2024-12-13 05:52:12.062430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.230 qpair failed and we were unable to recover it. 00:36:12.230 [2024-12-13 05:52:12.062608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.231 [2024-12-13 05:52:12.062640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.231 qpair failed and we were unable to recover it. 00:36:12.231 [2024-12-13 05:52:12.062823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.231 [2024-12-13 05:52:12.062855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.231 qpair failed and we were unable to recover it. 00:36:12.231 [2024-12-13 05:52:12.063045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.231 [2024-12-13 05:52:12.063076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.231 qpair failed and we were unable to recover it. 00:36:12.231 [2024-12-13 05:52:12.063279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.231 [2024-12-13 05:52:12.063311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.231 qpair failed and we were unable to recover it. 00:36:12.231 [2024-12-13 05:52:12.063579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.231 [2024-12-13 05:52:12.063612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.231 qpair failed and we were unable to recover it. 00:36:12.231 [2024-12-13 05:52:12.063821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.231 [2024-12-13 05:52:12.063852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.231 qpair failed and we were unable to recover it. 00:36:12.231 [2024-12-13 05:52:12.064116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.231 [2024-12-13 05:52:12.064148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.231 qpair failed and we were unable to recover it. 00:36:12.231 [2024-12-13 05:52:12.064282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.231 [2024-12-13 05:52:12.064314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.231 qpair failed and we were unable to recover it. 00:36:12.231 [2024-12-13 05:52:12.064447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.231 [2024-12-13 05:52:12.064486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.231 qpair failed and we were unable to recover it. 
00:36:12.231 [2024-12-13 05:52:12.064749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.231 [2024-12-13 05:52:12.064781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.231 qpair failed and we were unable to recover it. 00:36:12.231 [2024-12-13 05:52:12.064970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.231 [2024-12-13 05:52:12.065002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.231 qpair failed and we were unable to recover it. 00:36:12.231 [2024-12-13 05:52:12.065173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.231 [2024-12-13 05:52:12.065204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.231 qpair failed and we were unable to recover it. 00:36:12.231 [2024-12-13 05:52:12.065391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.231 [2024-12-13 05:52:12.065423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.231 qpair failed and we were unable to recover it. 00:36:12.231 [2024-12-13 05:52:12.065621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.231 [2024-12-13 05:52:12.065654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.231 qpair failed and we were unable to recover it. 00:36:12.231 [2024-12-13 05:52:12.065871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.231 [2024-12-13 05:52:12.065908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.231 qpair failed and we were unable to recover it. 00:36:12.231 [2024-12-13 05:52:12.066099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.231 [2024-12-13 05:52:12.066131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.231 qpair failed and we were unable to recover it. 00:36:12.231 [2024-12-13 05:52:12.066308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.231 [2024-12-13 05:52:12.066340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.231 qpair failed and we were unable to recover it. 00:36:12.231 [2024-12-13 05:52:12.066528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.231 [2024-12-13 05:52:12.066561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.231 qpair failed and we were unable to recover it. 00:36:12.231 [2024-12-13 05:52:12.066768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.231 [2024-12-13 05:52:12.066800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.231 qpair failed and we were unable to recover it. 
00:36:12.231 [2024-12-13 05:52:12.066992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.231 [2024-12-13 05:52:12.067024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.231 qpair failed and we were unable to recover it. 00:36:12.231 [2024-12-13 05:52:12.067196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.231 [2024-12-13 05:52:12.067228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.231 qpair failed and we were unable to recover it. 00:36:12.231 [2024-12-13 05:52:12.067349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.231 [2024-12-13 05:52:12.067380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.231 qpair failed and we were unable to recover it. 00:36:12.231 [2024-12-13 05:52:12.067480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.231 [2024-12-13 05:52:12.067512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.231 qpair failed and we were unable to recover it. 00:36:12.231 [2024-12-13 05:52:12.067702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.231 [2024-12-13 05:52:12.067735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.231 qpair failed and we were unable to recover it. 00:36:12.231 [2024-12-13 05:52:12.067955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.231 [2024-12-13 05:52:12.067986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.231 qpair failed and we were unable to recover it. 00:36:12.231 [2024-12-13 05:52:12.068118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.231 [2024-12-13 05:52:12.068149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.231 qpair failed and we were unable to recover it. 00:36:12.231 [2024-12-13 05:52:12.068410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.231 [2024-12-13 05:52:12.068442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.231 qpair failed and we were unable to recover it. 00:36:12.231 [2024-12-13 05:52:12.068589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.231 [2024-12-13 05:52:12.068621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.231 qpair failed and we were unable to recover it. 00:36:12.231 [2024-12-13 05:52:12.068809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.231 [2024-12-13 05:52:12.068842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.231 qpair failed and we were unable to recover it. 
00:36:12.231 [2024-12-13 05:52:12.068945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.231 [2024-12-13 05:52:12.068978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.231 qpair failed and we were unable to recover it. 00:36:12.231 [2024-12-13 05:52:12.069214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.231 [2024-12-13 05:52:12.069246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.231 qpair failed and we were unable to recover it. 00:36:12.231 [2024-12-13 05:52:12.069380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.231 [2024-12-13 05:52:12.069412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.231 qpair failed and we were unable to recover it. 00:36:12.231 [2024-12-13 05:52:12.069644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.231 [2024-12-13 05:52:12.069677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.231 qpair failed and we were unable to recover it. 00:36:12.231 [2024-12-13 05:52:12.069788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.231 [2024-12-13 05:52:12.069820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.231 qpair failed and we were unable to recover it. 00:36:12.231 [2024-12-13 05:52:12.070060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.231 [2024-12-13 05:52:12.070093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.231 qpair failed and we were unable to recover it. 00:36:12.231 [2024-12-13 05:52:12.070269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.231 [2024-12-13 05:52:12.070301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.231 qpair failed and we were unable to recover it. 00:36:12.231 [2024-12-13 05:52:12.070501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.231 [2024-12-13 05:52:12.070533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.231 qpair failed and we were unable to recover it. 00:36:12.231 [2024-12-13 05:52:12.070711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.231 [2024-12-13 05:52:12.070743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.231 qpair failed and we were unable to recover it. 00:36:12.231 [2024-12-13 05:52:12.070914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.231 [2024-12-13 05:52:12.070946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.231 qpair failed and we were unable to recover it. 
00:36:12.231 [2024-12-13 05:52:12.071121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.231 [2024-12-13 05:52:12.071154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.232 qpair failed and we were unable to recover it. 00:36:12.232 [2024-12-13 05:52:12.071413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.232 [2024-12-13 05:52:12.071444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.232 qpair failed and we were unable to recover it. 00:36:12.232 [2024-12-13 05:52:12.071647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.232 [2024-12-13 05:52:12.071680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.232 qpair failed and we were unable to recover it. 00:36:12.232 [2024-12-13 05:52:12.071868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.232 [2024-12-13 05:52:12.071900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.232 qpair failed and we were unable to recover it. 00:36:12.232 [2024-12-13 05:52:12.072099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.232 [2024-12-13 05:52:12.072129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.232 qpair failed and we were unable to recover it. 00:36:12.232 [2024-12-13 05:52:12.072370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.232 [2024-12-13 05:52:12.072401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.232 qpair failed and we were unable to recover it. 00:36:12.232 [2024-12-13 05:52:12.072639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.232 [2024-12-13 05:52:12.072673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.232 qpair failed and we were unable to recover it. 00:36:12.232 [2024-12-13 05:52:12.072880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.232 [2024-12-13 05:52:12.072912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.232 qpair failed and we were unable to recover it. 00:36:12.232 [2024-12-13 05:52:12.073052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.232 [2024-12-13 05:52:12.073083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.232 qpair failed and we were unable to recover it. 00:36:12.232 [2024-12-13 05:52:12.073318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.232 [2024-12-13 05:52:12.073350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.232 qpair failed and we were unable to recover it. 
00:36:12.232 [2024-12-13 05:52:12.073551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.232 [2024-12-13 05:52:12.073583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.232 qpair failed and we were unable to recover it. 00:36:12.232 [2024-12-13 05:52:12.073777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.232 [2024-12-13 05:52:12.073808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.232 qpair failed and we were unable to recover it. 00:36:12.232 [2024-12-13 05:52:12.074046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.232 [2024-12-13 05:52:12.074078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.232 qpair failed and we were unable to recover it. 00:36:12.232 [2024-12-13 05:52:12.074252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.232 [2024-12-13 05:52:12.074284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.232 qpair failed and we were unable to recover it. 00:36:12.232 [2024-12-13 05:52:12.074474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.232 [2024-12-13 05:52:12.074506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.232 qpair failed and we were unable to recover it. 00:36:12.232 [2024-12-13 05:52:12.074680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.232 [2024-12-13 05:52:12.074718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.232 qpair failed and we were unable to recover it. 00:36:12.232 [2024-12-13 05:52:12.074833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.232 [2024-12-13 05:52:12.074865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.232 qpair failed and we were unable to recover it. 00:36:12.232 [2024-12-13 05:52:12.075065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.232 [2024-12-13 05:52:12.075097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.232 qpair failed and we were unable to recover it. 00:36:12.232 [2024-12-13 05:52:12.075284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.232 [2024-12-13 05:52:12.075315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.232 qpair failed and we were unable to recover it. 00:36:12.232 [2024-12-13 05:52:12.075550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.232 [2024-12-13 05:52:12.075583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.232 qpair failed and we were unable to recover it. 
00:36:12.232 [2024-12-13 05:52:12.075845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.232 [2024-12-13 05:52:12.075877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.232 qpair failed and we were unable to recover it. 00:36:12.232 [2024-12-13 05:52:12.075998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.232 [2024-12-13 05:52:12.076030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.232 qpair failed and we were unable to recover it. 00:36:12.232 [2024-12-13 05:52:12.076268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.232 [2024-12-13 05:52:12.076300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.232 qpair failed and we were unable to recover it. 00:36:12.232 [2024-12-13 05:52:12.076483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.232 [2024-12-13 05:52:12.076516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.232 qpair failed and we were unable to recover it. 00:36:12.232 [2024-12-13 05:52:12.076633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.232 [2024-12-13 05:52:12.076665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.232 qpair failed and we were unable to recover it. 00:36:12.232 [2024-12-13 05:52:12.076855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.232 [2024-12-13 05:52:12.076887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.232 qpair failed and we were unable to recover it. 00:36:12.232 [2024-12-13 05:52:12.077063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.232 [2024-12-13 05:52:12.077094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.232 qpair failed and we were unable to recover it. 00:36:12.232 [2024-12-13 05:52:12.077211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.232 [2024-12-13 05:52:12.077243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.232 qpair failed and we were unable to recover it. 00:36:12.232 [2024-12-13 05:52:12.077510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.232 [2024-12-13 05:52:12.077542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.232 qpair failed and we were unable to recover it. 00:36:12.232 [2024-12-13 05:52:12.077811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.232 [2024-12-13 05:52:12.077843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.232 qpair failed and we were unable to recover it. 
00:36:12.232 [2024-12-13 05:52:12.078104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.232 [2024-12-13 05:52:12.078137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.232 qpair failed and we were unable to recover it.
00:36:12.238 [the triplet above repeats ~210 times between 05:52:12.078 and 05:52:12.121, one triplet per connection attempt; every attempt targets the same tqpair=0x7f8618000b90, addr=10.0.0.2, port=4420, fails with errno = 111, and the qpair is not recovered]
00:36:12.238 [2024-12-13 05:52:12.121113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.238 [2024-12-13 05:52:12.121145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.238 qpair failed and we were unable to recover it. 00:36:12.238 [2024-12-13 05:52:12.121330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.238 [2024-12-13 05:52:12.121363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.238 qpair failed and we were unable to recover it. 00:36:12.238 [2024-12-13 05:52:12.121478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.238 [2024-12-13 05:52:12.121517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.238 qpair failed and we were unable to recover it. 00:36:12.238 [2024-12-13 05:52:12.121724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.238 [2024-12-13 05:52:12.121755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.238 qpair failed and we were unable to recover it. 00:36:12.238 [2024-12-13 05:52:12.121933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.238 [2024-12-13 05:52:12.121965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.238 qpair failed and we were unable to recover it. 00:36:12.238 [2024-12-13 05:52:12.122076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.238 [2024-12-13 05:52:12.122108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.238 qpair failed and we were unable to recover it. 00:36:12.238 [2024-12-13 05:52:12.122280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.238 [2024-12-13 05:52:12.122311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.238 qpair failed and we were unable to recover it. 00:36:12.238 [2024-12-13 05:52:12.122415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.238 [2024-12-13 05:52:12.122446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.238 qpair failed and we were unable to recover it. 00:36:12.238 [2024-12-13 05:52:12.122639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.238 [2024-12-13 05:52:12.122672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.238 qpair failed and we were unable to recover it. 00:36:12.238 [2024-12-13 05:52:12.122850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.238 [2024-12-13 05:52:12.122881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.238 qpair failed and we were unable to recover it. 
00:36:12.238 [2024-12-13 05:52:12.123017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.238 [2024-12-13 05:52:12.123050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.238 qpair failed and we were unable to recover it. 00:36:12.238 [2024-12-13 05:52:12.123174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.238 [2024-12-13 05:52:12.123206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.238 qpair failed and we were unable to recover it. 00:36:12.238 [2024-12-13 05:52:12.123308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.238 [2024-12-13 05:52:12.123340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.238 qpair failed and we were unable to recover it. 00:36:12.238 [2024-12-13 05:52:12.123439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.238 [2024-12-13 05:52:12.123481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.238 qpair failed and we were unable to recover it. 00:36:12.238 [2024-12-13 05:52:12.123673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.238 [2024-12-13 05:52:12.123704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.238 qpair failed and we were unable to recover it. 00:36:12.238 [2024-12-13 05:52:12.123808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.238 [2024-12-13 05:52:12.123839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.238 qpair failed and we were unable to recover it. 00:36:12.238 [2024-12-13 05:52:12.123964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.238 [2024-12-13 05:52:12.123997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.238 qpair failed and we were unable to recover it. 00:36:12.238 [2024-12-13 05:52:12.124125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.238 [2024-12-13 05:52:12.124156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.238 qpair failed and we were unable to recover it. 00:36:12.238 [2024-12-13 05:52:12.124276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.238 [2024-12-13 05:52:12.124309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.238 qpair failed and we were unable to recover it. 00:36:12.238 [2024-12-13 05:52:12.124484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.238 [2024-12-13 05:52:12.124521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.238 qpair failed and we were unable to recover it. 
00:36:12.238 [2024-12-13 05:52:12.124622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.238 [2024-12-13 05:52:12.124654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.238 qpair failed and we were unable to recover it. 00:36:12.238 [2024-12-13 05:52:12.124768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.238 [2024-12-13 05:52:12.124800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.238 qpair failed and we were unable to recover it. 00:36:12.238 [2024-12-13 05:52:12.124976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.238 [2024-12-13 05:52:12.125008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.238 qpair failed and we were unable to recover it. 00:36:12.238 [2024-12-13 05:52:12.125196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.238 [2024-12-13 05:52:12.125227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.238 qpair failed and we were unable to recover it. 00:36:12.238 [2024-12-13 05:52:12.125341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.238 [2024-12-13 05:52:12.125373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.238 qpair failed and we were unable to recover it. 00:36:12.238 [2024-12-13 05:52:12.125495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.238 [2024-12-13 05:52:12.125529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.239 qpair failed and we were unable to recover it. 00:36:12.239 [2024-12-13 05:52:12.125767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.239 [2024-12-13 05:52:12.125799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.239 qpair failed and we were unable to recover it. 00:36:12.239 [2024-12-13 05:52:12.125922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.239 [2024-12-13 05:52:12.125954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.239 qpair failed and we were unable to recover it. 00:36:12.239 [2024-12-13 05:52:12.126054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.239 [2024-12-13 05:52:12.126085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.239 qpair failed and we were unable to recover it. 00:36:12.239 [2024-12-13 05:52:12.126275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.239 [2024-12-13 05:52:12.126307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.239 qpair failed and we were unable to recover it. 
00:36:12.239 [2024-12-13 05:52:12.126489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.239 [2024-12-13 05:52:12.126522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.239 qpair failed and we were unable to recover it. 00:36:12.239 [2024-12-13 05:52:12.126771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.239 [2024-12-13 05:52:12.126803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.239 qpair failed and we were unable to recover it. 00:36:12.239 [2024-12-13 05:52:12.126905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.239 [2024-12-13 05:52:12.126940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.239 qpair failed and we were unable to recover it. 00:36:12.239 [2024-12-13 05:52:12.127144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.239 [2024-12-13 05:52:12.127176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.239 qpair failed and we were unable to recover it. 00:36:12.239 [2024-12-13 05:52:12.127300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.239 [2024-12-13 05:52:12.127333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.239 qpair failed and we were unable to recover it. 00:36:12.239 [2024-12-13 05:52:12.127438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.239 [2024-12-13 05:52:12.127480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.239 qpair failed and we were unable to recover it. 00:36:12.239 [2024-12-13 05:52:12.127656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.239 [2024-12-13 05:52:12.127688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.239 qpair failed and we were unable to recover it. 00:36:12.239 [2024-12-13 05:52:12.127792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.239 [2024-12-13 05:52:12.127823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.239 qpair failed and we were unable to recover it. 00:36:12.239 [2024-12-13 05:52:12.127929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.239 [2024-12-13 05:52:12.127961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.239 qpair failed and we were unable to recover it. 00:36:12.239 [2024-12-13 05:52:12.128090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.239 [2024-12-13 05:52:12.128122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.239 qpair failed and we were unable to recover it. 
00:36:12.239 [2024-12-13 05:52:12.128290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.239 [2024-12-13 05:52:12.128322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.239 qpair failed and we were unable to recover it. 00:36:12.239 [2024-12-13 05:52:12.128505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.239 [2024-12-13 05:52:12.128538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.239 qpair failed and we were unable to recover it. 00:36:12.239 [2024-12-13 05:52:12.128744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.239 [2024-12-13 05:52:12.128781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.239 qpair failed and we were unable to recover it. 00:36:12.239 [2024-12-13 05:52:12.128893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.239 [2024-12-13 05:52:12.128924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.239 qpair failed and we were unable to recover it. 00:36:12.239 [2024-12-13 05:52:12.129133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.239 [2024-12-13 05:52:12.129165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.239 qpair failed and we were unable to recover it. 00:36:12.239 [2024-12-13 05:52:12.129342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.239 [2024-12-13 05:52:12.129374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.239 qpair failed and we were unable to recover it. 00:36:12.239 [2024-12-13 05:52:12.129641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.239 [2024-12-13 05:52:12.129674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.239 qpair failed and we were unable to recover it. 00:36:12.239 [2024-12-13 05:52:12.129802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.239 [2024-12-13 05:52:12.129833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.239 qpair failed and we were unable to recover it. 00:36:12.239 [2024-12-13 05:52:12.129953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.239 [2024-12-13 05:52:12.129985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.239 qpair failed and we were unable to recover it. 00:36:12.239 [2024-12-13 05:52:12.130108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.239 [2024-12-13 05:52:12.130140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.239 qpair failed and we were unable to recover it. 
00:36:12.239 [2024-12-13 05:52:12.130272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.239 [2024-12-13 05:52:12.130304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.239 qpair failed and we were unable to recover it. 00:36:12.239 [2024-12-13 05:52:12.130484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.239 [2024-12-13 05:52:12.130517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.239 qpair failed and we were unable to recover it. 00:36:12.239 [2024-12-13 05:52:12.130690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.239 [2024-12-13 05:52:12.130721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.239 qpair failed and we were unable to recover it. 00:36:12.239 [2024-12-13 05:52:12.130853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.239 [2024-12-13 05:52:12.130885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.239 qpair failed and we were unable to recover it. 00:36:12.239 [2024-12-13 05:52:12.131097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.239 [2024-12-13 05:52:12.131130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.239 qpair failed and we were unable to recover it. 00:36:12.239 [2024-12-13 05:52:12.131254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.239 [2024-12-13 05:52:12.131285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.239 qpair failed and we were unable to recover it. 00:36:12.239 [2024-12-13 05:52:12.131413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.239 [2024-12-13 05:52:12.131445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.239 qpair failed and we were unable to recover it. 00:36:12.239 [2024-12-13 05:52:12.131643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.239 [2024-12-13 05:52:12.131675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.239 qpair failed and we were unable to recover it. 00:36:12.239 [2024-12-13 05:52:12.131784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.239 [2024-12-13 05:52:12.131815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.239 qpair failed and we were unable to recover it. 00:36:12.239 [2024-12-13 05:52:12.131931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.239 [2024-12-13 05:52:12.131963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.239 qpair failed and we were unable to recover it. 
00:36:12.239 [2024-12-13 05:52:12.132066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.239 [2024-12-13 05:52:12.132098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.239 qpair failed and we were unable to recover it. 00:36:12.239 [2024-12-13 05:52:12.132231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.239 [2024-12-13 05:52:12.132263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.239 qpair failed and we were unable to recover it. 00:36:12.239 [2024-12-13 05:52:12.132382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.239 [2024-12-13 05:52:12.132413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.239 qpair failed and we were unable to recover it. 00:36:12.239 [2024-12-13 05:52:12.132528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.239 [2024-12-13 05:52:12.132561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.239 qpair failed and we were unable to recover it. 00:36:12.239 [2024-12-13 05:52:12.132730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.239 [2024-12-13 05:52:12.132761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.240 qpair failed and we were unable to recover it. 00:36:12.240 [2024-12-13 05:52:12.132865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.240 [2024-12-13 05:52:12.132896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.240 qpair failed and we were unable to recover it. 00:36:12.240 [2024-12-13 05:52:12.133031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.240 [2024-12-13 05:52:12.133064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.240 qpair failed and we were unable to recover it. 00:36:12.240 [2024-12-13 05:52:12.133192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.240 [2024-12-13 05:52:12.133224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.240 qpair failed and we were unable to recover it. 00:36:12.240 [2024-12-13 05:52:12.133395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.240 [2024-12-13 05:52:12.133427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.240 qpair failed and we were unable to recover it. 00:36:12.240 [2024-12-13 05:52:12.133633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.240 [2024-12-13 05:52:12.133665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.240 qpair failed and we were unable to recover it. 
00:36:12.240 [2024-12-13 05:52:12.133777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.240 [2024-12-13 05:52:12.133809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.240 qpair failed and we were unable to recover it. 00:36:12.240 [2024-12-13 05:52:12.133921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.240 [2024-12-13 05:52:12.133952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.240 qpair failed and we were unable to recover it. 00:36:12.240 [2024-12-13 05:52:12.134132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.240 [2024-12-13 05:52:12.134164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.240 qpair failed and we were unable to recover it. 00:36:12.240 [2024-12-13 05:52:12.134296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.240 [2024-12-13 05:52:12.134327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.240 qpair failed and we were unable to recover it. 00:36:12.240 [2024-12-13 05:52:12.134446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.240 [2024-12-13 05:52:12.134502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.240 qpair failed and we were unable to recover it. 00:36:12.240 [2024-12-13 05:52:12.134612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.240 [2024-12-13 05:52:12.134645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.240 qpair failed and we were unable to recover it. 00:36:12.240 [2024-12-13 05:52:12.134767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.240 [2024-12-13 05:52:12.134799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.240 qpair failed and we were unable to recover it. 00:36:12.240 [2024-12-13 05:52:12.134986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.240 [2024-12-13 05:52:12.135018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.240 qpair failed and we were unable to recover it. 00:36:12.240 [2024-12-13 05:52:12.135129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.240 [2024-12-13 05:52:12.135161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.240 qpair failed and we were unable to recover it. 00:36:12.240 [2024-12-13 05:52:12.135276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.240 [2024-12-13 05:52:12.135307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.240 qpair failed and we were unable to recover it. 
00:36:12.240 [2024-12-13 05:52:12.135549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.240 [2024-12-13 05:52:12.135582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.240 qpair failed and we were unable to recover it. 00:36:12.240 [2024-12-13 05:52:12.135701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.240 [2024-12-13 05:52:12.135733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.240 qpair failed and we were unable to recover it. 00:36:12.240 [2024-12-13 05:52:12.135852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.240 [2024-12-13 05:52:12.135890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.240 qpair failed and we were unable to recover it. 00:36:12.240 [2024-12-13 05:52:12.135995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.240 [2024-12-13 05:52:12.136026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.240 qpair failed and we were unable to recover it. 00:36:12.240 [2024-12-13 05:52:12.136206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.240 [2024-12-13 05:52:12.136239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.240 qpair failed and we were unable to recover it. 00:36:12.240 [2024-12-13 05:52:12.136479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.240 [2024-12-13 05:52:12.136512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.240 qpair failed and we were unable to recover it. 00:36:12.240 [2024-12-13 05:52:12.136626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.240 [2024-12-13 05:52:12.136658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.240 qpair failed and we were unable to recover it. 00:36:12.240 [2024-12-13 05:52:12.136770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.240 [2024-12-13 05:52:12.136801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.240 qpair failed and we were unable to recover it. 00:36:12.240 [2024-12-13 05:52:12.136967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.240 [2024-12-13 05:52:12.136999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.240 qpair failed and we were unable to recover it. 00:36:12.240 [2024-12-13 05:52:12.137180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.240 [2024-12-13 05:52:12.137211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.240 qpair failed and we were unable to recover it. 
00:36:12.240 [2024-12-13 05:52:12.137384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.240 [2024-12-13 05:52:12.137416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.240 qpair failed and we were unable to recover it. 00:36:12.240 [2024-12-13 05:52:12.137550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.240 [2024-12-13 05:52:12.137583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.240 qpair failed and we were unable to recover it. 00:36:12.240 [2024-12-13 05:52:12.137710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.240 [2024-12-13 05:52:12.137742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.240 qpair failed and we were unable to recover it. 00:36:12.240 [2024-12-13 05:52:12.137862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.240 [2024-12-13 05:52:12.137894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.240 qpair failed and we were unable to recover it. 00:36:12.240 [2024-12-13 05:52:12.137996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.240 [2024-12-13 05:52:12.138028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.240 qpair failed and we were unable to recover it. 00:36:12.240 [2024-12-13 05:52:12.138221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.240 [2024-12-13 05:52:12.138253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.240 qpair failed and we were unable to recover it. 00:36:12.240 [2024-12-13 05:52:12.138376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.240 [2024-12-13 05:52:12.138409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.240 qpair failed and we were unable to recover it. 00:36:12.240 [2024-12-13 05:52:12.138530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.240 [2024-12-13 05:52:12.138562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.240 qpair failed and we were unable to recover it. 00:36:12.240 [2024-12-13 05:52:12.138737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.240 [2024-12-13 05:52:12.138769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.240 qpair failed and we were unable to recover it. 00:36:12.240 [2024-12-13 05:52:12.138875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.240 [2024-12-13 05:52:12.138906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.240 qpair failed and we were unable to recover it. 
00:36:12.240 [2024-12-13 05:52:12.139020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.240 [2024-12-13 05:52:12.139052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.240 qpair failed and we were unable to recover it. 00:36:12.240 [2024-12-13 05:52:12.139237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.240 [2024-12-13 05:52:12.139269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.240 qpair failed and we were unable to recover it. 00:36:12.240 [2024-12-13 05:52:12.139390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.240 [2024-12-13 05:52:12.139423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.240 qpair failed and we were unable to recover it. 00:36:12.240 [2024-12-13 05:52:12.139628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.240 [2024-12-13 05:52:12.139660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.241 qpair failed and we were unable to recover it. 00:36:12.241 [2024-12-13 05:52:12.139835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.241 [2024-12-13 05:52:12.139868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.241 qpair failed and we were unable to recover it. 00:36:12.241 [2024-12-13 05:52:12.139992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.241 [2024-12-13 05:52:12.140024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.241 qpair failed and we were unable to recover it. 00:36:12.241 [2024-12-13 05:52:12.140150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.241 [2024-12-13 05:52:12.140181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.241 qpair failed and we were unable to recover it. 00:36:12.241 [2024-12-13 05:52:12.140362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.241 [2024-12-13 05:52:12.140394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.241 qpair failed and we were unable to recover it. 00:36:12.241 [2024-12-13 05:52:12.140523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.241 [2024-12-13 05:52:12.140558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.241 qpair failed and we were unable to recover it. 00:36:12.241 [2024-12-13 05:52:12.140764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.241 [2024-12-13 05:52:12.140796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.241 qpair failed and we were unable to recover it. 
00:36:12.241 [2024-12-13 05:52:12.140993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.241 [2024-12-13 05:52:12.141024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.241 qpair failed and we were unable to recover it. 00:36:12.241 [2024-12-13 05:52:12.141196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.241 [2024-12-13 05:52:12.141227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.241 qpair failed and we were unable to recover it. 00:36:12.241 [2024-12-13 05:52:12.141350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.241 [2024-12-13 05:52:12.141381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.241 qpair failed and we were unable to recover it. 00:36:12.241 [2024-12-13 05:52:12.141505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.241 [2024-12-13 05:52:12.141538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.241 qpair failed and we were unable to recover it. 00:36:12.241 [2024-12-13 05:52:12.141661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.241 [2024-12-13 05:52:12.141694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.241 qpair failed and we were unable to recover it. 00:36:12.241 [2024-12-13 05:52:12.141825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.241 [2024-12-13 05:52:12.141857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.241 qpair failed and we were unable to recover it. 00:36:12.241 [2024-12-13 05:52:12.141983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.241 [2024-12-13 05:52:12.142015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.241 qpair failed and we were unable to recover it. 00:36:12.241 [2024-12-13 05:52:12.142201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.241 [2024-12-13 05:52:12.142234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.241 qpair failed and we were unable to recover it. 00:36:12.241 [2024-12-13 05:52:12.142345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.241 [2024-12-13 05:52:12.142376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.241 qpair failed and we were unable to recover it. 00:36:12.241 [2024-12-13 05:52:12.142491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.241 [2024-12-13 05:52:12.142523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.241 qpair failed and we were unable to recover it. 
00:36:12.241 [2024-12-13 05:52:12.142630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.241 [2024-12-13 05:52:12.142663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.241 qpair failed and we were unable to recover it. 00:36:12.241 [2024-12-13 05:52:12.142783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.241 [2024-12-13 05:52:12.142815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.241 qpair failed and we were unable to recover it. 00:36:12.241 [2024-12-13 05:52:12.143010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.241 [2024-12-13 05:52:12.143047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.241 qpair failed and we were unable to recover it. 00:36:12.241 [2024-12-13 05:52:12.143163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.241 [2024-12-13 05:52:12.143194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.241 qpair failed and we were unable to recover it. 00:36:12.241 [2024-12-13 05:52:12.143317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.241 [2024-12-13 05:52:12.143349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.241 qpair failed and we were unable to recover it. 00:36:12.241 [2024-12-13 05:52:12.143629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.241 [2024-12-13 05:52:12.143662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.241 qpair failed and we were unable to recover it. 00:36:12.241 [2024-12-13 05:52:12.143837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.241 [2024-12-13 05:52:12.143869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.241 qpair failed and we were unable to recover it. 00:36:12.241 [2024-12-13 05:52:12.144000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.241 [2024-12-13 05:52:12.144032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.241 qpair failed and we were unable to recover it. 00:36:12.241 [2024-12-13 05:52:12.144289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.241 [2024-12-13 05:52:12.144321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.241 qpair failed and we were unable to recover it. 00:36:12.241 [2024-12-13 05:52:12.144506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.241 [2024-12-13 05:52:12.144538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.241 qpair failed and we were unable to recover it. 
00:36:12.241 [2024-12-13 05:52:12.144672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.241 [2024-12-13 05:52:12.144704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420
00:36:12.241 qpair failed and we were unable to recover it.
[... the same three-line failure (posix_sock_create connect() errno = 111, nvme_tcp_qpair_connect_sock sock connection error, "qpair failed and we were unable to recover it.") repeats for tqpair=0x7f8618000b90 from 05:52:12.144894 through 05:52:12.155716 ...]
00:36:12.243 [2024-12-13 05:52:12.155946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.243 [2024-12-13 05:52:12.156018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420
00:36:12.243 qpair failed and we were unable to recover it.
[... failure sequence repeats for tqpair=0x7f8624000b90 from 05:52:12.156147 through 05:52:12.163010 ...]
00:36:12.244 [2024-12-13 05:52:12.163167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.244 [2024-12-13 05:52:12.163238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420
00:36:12.244 qpair failed and we were unable to recover it.
[... failure sequence repeats for tqpair=0x7f861c000b90 from 05:52:12.163442 through 05:52:12.178678 ...]
00:36:12.246 [2024-12-13 05:52:12.178926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.246 [2024-12-13 05:52:12.178995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420
00:36:12.246 qpair failed and we were unable to recover it.
[... failure sequence repeats for tqpair=0x7f8618000b90 from 05:52:12.179131 through 05:52:12.184676 ...]
00:36:12.247 [2024-12-13 05:52:12.184795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.247 [2024-12-13 05:52:12.184827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.247 qpair failed and we were unable to recover it. 00:36:12.247 [2024-12-13 05:52:12.185010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.247 [2024-12-13 05:52:12.185042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.247 qpair failed and we were unable to recover it. 00:36:12.247 [2024-12-13 05:52:12.185234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.247 [2024-12-13 05:52:12.185266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.247 qpair failed and we were unable to recover it. 00:36:12.247 [2024-12-13 05:52:12.185466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.247 [2024-12-13 05:52:12.185499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.247 qpair failed and we were unable to recover it. 00:36:12.247 [2024-12-13 05:52:12.185675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.247 [2024-12-13 05:52:12.185707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.247 qpair failed and we were unable to recover it. 00:36:12.247 [2024-12-13 05:52:12.185917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.247 [2024-12-13 05:52:12.185950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.247 qpair failed and we were unable to recover it. 00:36:12.247 [2024-12-13 05:52:12.186134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.247 [2024-12-13 05:52:12.186165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.247 qpair failed and we were unable to recover it. 00:36:12.247 [2024-12-13 05:52:12.186359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.247 [2024-12-13 05:52:12.186392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.247 qpair failed and we were unable to recover it. 00:36:12.247 [2024-12-13 05:52:12.186533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.247 [2024-12-13 05:52:12.186567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.247 qpair failed and we were unable to recover it. 00:36:12.247 [2024-12-13 05:52:12.186685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.247 [2024-12-13 05:52:12.186722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.247 qpair failed and we were unable to recover it. 
00:36:12.247 [2024-12-13 05:52:12.186850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.247 [2024-12-13 05:52:12.186881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.247 qpair failed and we were unable to recover it. 00:36:12.247 [2024-12-13 05:52:12.186985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.247 [2024-12-13 05:52:12.187017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.247 qpair failed and we were unable to recover it. 00:36:12.247 [2024-12-13 05:52:12.187257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.247 [2024-12-13 05:52:12.187289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.247 qpair failed and we were unable to recover it. 00:36:12.247 [2024-12-13 05:52:12.187399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.247 [2024-12-13 05:52:12.187431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.247 qpair failed and we were unable to recover it. 00:36:12.247 [2024-12-13 05:52:12.187557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.247 [2024-12-13 05:52:12.187590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.247 qpair failed and we were unable to recover it. 00:36:12.247 [2024-12-13 05:52:12.187697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.247 [2024-12-13 05:52:12.187729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.247 qpair failed and we were unable to recover it. 00:36:12.247 [2024-12-13 05:52:12.187829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.247 [2024-12-13 05:52:12.187861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.247 qpair failed and we were unable to recover it. 00:36:12.247 [2024-12-13 05:52:12.188052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.247 [2024-12-13 05:52:12.188083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.247 qpair failed and we were unable to recover it. 00:36:12.247 [2024-12-13 05:52:12.188193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.247 [2024-12-13 05:52:12.188225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.247 qpair failed and we were unable to recover it. 00:36:12.247 [2024-12-13 05:52:12.188348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.247 [2024-12-13 05:52:12.188379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.247 qpair failed and we were unable to recover it. 
00:36:12.247 [2024-12-13 05:52:12.188484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.247 [2024-12-13 05:52:12.188517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.247 qpair failed and we were unable to recover it. 00:36:12.247 [2024-12-13 05:52:12.188646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.247 [2024-12-13 05:52:12.188677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.247 qpair failed and we were unable to recover it. 00:36:12.247 [2024-12-13 05:52:12.188799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.247 [2024-12-13 05:52:12.188832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.247 qpair failed and we were unable to recover it. 00:36:12.247 [2024-12-13 05:52:12.189032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.247 [2024-12-13 05:52:12.189065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.247 qpair failed and we were unable to recover it. 00:36:12.247 [2024-12-13 05:52:12.189246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.247 [2024-12-13 05:52:12.189278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.247 qpair failed and we were unable to recover it. 00:36:12.247 [2024-12-13 05:52:12.189457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.247 [2024-12-13 05:52:12.189489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.247 qpair failed and we were unable to recover it. 00:36:12.247 [2024-12-13 05:52:12.189606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.247 [2024-12-13 05:52:12.189638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.247 qpair failed and we were unable to recover it. 00:36:12.247 [2024-12-13 05:52:12.189753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.247 [2024-12-13 05:52:12.189784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.247 qpair failed and we were unable to recover it. 00:36:12.247 [2024-12-13 05:52:12.189892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.247 [2024-12-13 05:52:12.189925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.247 qpair failed and we were unable to recover it. 00:36:12.247 [2024-12-13 05:52:12.190101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.248 [2024-12-13 05:52:12.190133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.248 qpair failed and we were unable to recover it. 
00:36:12.248 [2024-12-13 05:52:12.190301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.248 [2024-12-13 05:52:12.190333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.248 qpair failed and we were unable to recover it. 00:36:12.248 [2024-12-13 05:52:12.190471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.248 [2024-12-13 05:52:12.190505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.248 qpair failed and we were unable to recover it. 00:36:12.248 [2024-12-13 05:52:12.190627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.248 [2024-12-13 05:52:12.190666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.248 qpair failed and we were unable to recover it. 00:36:12.248 [2024-12-13 05:52:12.190801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.248 [2024-12-13 05:52:12.190848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.248 qpair failed and we were unable to recover it. 00:36:12.248 [2024-12-13 05:52:12.191119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.248 [2024-12-13 05:52:12.191162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.248 qpair failed and we were unable to recover it. 00:36:12.248 [2024-12-13 05:52:12.191308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.248 [2024-12-13 05:52:12.191340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.248 qpair failed and we were unable to recover it. 00:36:12.248 [2024-12-13 05:52:12.191533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.248 [2024-12-13 05:52:12.191567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.248 qpair failed and we were unable to recover it. 00:36:12.248 [2024-12-13 05:52:12.191671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.248 [2024-12-13 05:52:12.191706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.248 qpair failed and we were unable to recover it. 00:36:12.248 [2024-12-13 05:52:12.191899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.248 [2024-12-13 05:52:12.191930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.248 qpair failed and we were unable to recover it. 00:36:12.248 [2024-12-13 05:52:12.192040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.248 [2024-12-13 05:52:12.192071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.248 qpair failed and we were unable to recover it. 
00:36:12.248 [2024-12-13 05:52:12.192176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.248 [2024-12-13 05:52:12.192208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.248 qpair failed and we were unable to recover it. 00:36:12.248 [2024-12-13 05:52:12.192321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.248 [2024-12-13 05:52:12.192353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.248 qpair failed and we were unable to recover it. 00:36:12.248 [2024-12-13 05:52:12.192479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.248 [2024-12-13 05:52:12.192513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.248 qpair failed and we were unable to recover it. 00:36:12.248 [2024-12-13 05:52:12.192639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.248 [2024-12-13 05:52:12.192671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.248 qpair failed and we were unable to recover it. 00:36:12.248 [2024-12-13 05:52:12.192781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.248 [2024-12-13 05:52:12.192813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.248 qpair failed and we were unable to recover it. 00:36:12.248 [2024-12-13 05:52:12.192943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.248 [2024-12-13 05:52:12.192991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.248 qpair failed and we were unable to recover it. 00:36:12.248 [2024-12-13 05:52:12.193130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.248 [2024-12-13 05:52:12.193174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.248 qpair failed and we were unable to recover it. 00:36:12.248 [2024-12-13 05:52:12.193351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.248 [2024-12-13 05:52:12.193384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.248 qpair failed and we were unable to recover it. 00:36:12.248 [2024-12-13 05:52:12.193565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.248 [2024-12-13 05:52:12.193598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.248 qpair failed and we were unable to recover it. 00:36:12.248 [2024-12-13 05:52:12.193770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.248 [2024-12-13 05:52:12.193810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.248 qpair failed and we were unable to recover it. 
00:36:12.248 [2024-12-13 05:52:12.193922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.248 [2024-12-13 05:52:12.193954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.248 qpair failed and we were unable to recover it. 00:36:12.248 [2024-12-13 05:52:12.194066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.248 [2024-12-13 05:52:12.194099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.248 qpair failed and we were unable to recover it. 00:36:12.248 [2024-12-13 05:52:12.194203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.248 [2024-12-13 05:52:12.194234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.248 qpair failed and we were unable to recover it. 00:36:12.248 [2024-12-13 05:52:12.194413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.248 [2024-12-13 05:52:12.194446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.248 qpair failed and we were unable to recover it. 00:36:12.248 [2024-12-13 05:52:12.194658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.248 [2024-12-13 05:52:12.194690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.248 qpair failed and we were unable to recover it. 00:36:12.248 [2024-12-13 05:52:12.194795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.248 [2024-12-13 05:52:12.194828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.248 qpair failed and we were unable to recover it. 00:36:12.248 [2024-12-13 05:52:12.195016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.248 [2024-12-13 05:52:12.195063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.248 qpair failed and we were unable to recover it. 00:36:12.533 [2024-12-13 05:52:12.195279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.533 [2024-12-13 05:52:12.195328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.533 qpair failed and we were unable to recover it. 00:36:12.533 [2024-12-13 05:52:12.195475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.533 [2024-12-13 05:52:12.195521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.533 qpair failed and we were unable to recover it. 00:36:12.533 [2024-12-13 05:52:12.195738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.533 [2024-12-13 05:52:12.195780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.533 qpair failed and we were unable to recover it. 
00:36:12.533 [2024-12-13 05:52:12.195920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.533 [2024-12-13 05:52:12.195964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.533 qpair failed and we were unable to recover it. 00:36:12.533 [2024-12-13 05:52:12.196136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.533 [2024-12-13 05:52:12.196190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.533 qpair failed and we were unable to recover it. 00:36:12.533 [2024-12-13 05:52:12.196433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.533 [2024-12-13 05:52:12.196498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.533 qpair failed and we were unable to recover it. 00:36:12.533 [2024-12-13 05:52:12.196725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.533 [2024-12-13 05:52:12.196771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.533 qpair failed and we were unable to recover it. 00:36:12.533 [2024-12-13 05:52:12.197011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.533 [2024-12-13 05:52:12.197056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.533 qpair failed and we were unable to recover it. 00:36:12.533 [2024-12-13 05:52:12.197284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.533 [2024-12-13 05:52:12.197330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.533 qpair failed and we were unable to recover it. 00:36:12.533 [2024-12-13 05:52:12.197473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.533 [2024-12-13 05:52:12.197518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.533 qpair failed and we were unable to recover it. 00:36:12.533 [2024-12-13 05:52:12.197681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.533 [2024-12-13 05:52:12.197724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.533 qpair failed and we were unable to recover it. 00:36:12.533 [2024-12-13 05:52:12.197855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.533 [2024-12-13 05:52:12.197898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.533 qpair failed and we were unable to recover it. 00:36:12.533 [2024-12-13 05:52:12.198100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.533 [2024-12-13 05:52:12.198150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.533 qpair failed and we were unable to recover it. 
00:36:12.534 [2024-12-13 05:52:12.198396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.534 [2024-12-13 05:52:12.198440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.534 qpair failed and we were unable to recover it. 00:36:12.534 [2024-12-13 05:52:12.198604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.534 [2024-12-13 05:52:12.198653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.534 qpair failed and we were unable to recover it. 00:36:12.534 [2024-12-13 05:52:12.198801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.534 [2024-12-13 05:52:12.198837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.534 qpair failed and we were unable to recover it. 00:36:12.534 [2024-12-13 05:52:12.198953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.534 [2024-12-13 05:52:12.198984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.534 qpair failed and we were unable to recover it. 00:36:12.534 [2024-12-13 05:52:12.199122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.534 [2024-12-13 05:52:12.199157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.534 qpair failed and we were unable to recover it. 00:36:12.534 [2024-12-13 05:52:12.199281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.534 [2024-12-13 05:52:12.199313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.534 qpair failed and we were unable to recover it. 00:36:12.534 [2024-12-13 05:52:12.199501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.534 [2024-12-13 05:52:12.199546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.534 qpair failed and we were unable to recover it. 00:36:12.534 [2024-12-13 05:52:12.199755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.534 [2024-12-13 05:52:12.199794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.534 qpair failed and we were unable to recover it. 00:36:12.534 [2024-12-13 05:52:12.200017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.534 [2024-12-13 05:52:12.200060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.534 qpair failed and we were unable to recover it. 00:36:12.534 [2024-12-13 05:52:12.200198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.534 [2024-12-13 05:52:12.200232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.534 qpair failed and we were unable to recover it. 
00:36:12.534 [2024-12-13 05:52:12.200353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.534 [2024-12-13 05:52:12.200396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.534 qpair failed and we were unable to recover it. 00:36:12.534 [2024-12-13 05:52:12.200606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.534 [2024-12-13 05:52:12.200639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.534 qpair failed and we were unable to recover it. 00:36:12.534 [2024-12-13 05:52:12.200849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.534 [2024-12-13 05:52:12.200893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.534 qpair failed and we were unable to recover it. 00:36:12.534 [2024-12-13 05:52:12.201017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.534 [2024-12-13 05:52:12.201049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.534 qpair failed and we were unable to recover it. 00:36:12.534 [2024-12-13 05:52:12.201170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.534 [2024-12-13 05:52:12.201202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.534 qpair failed and we were unable to recover it. 00:36:12.534 [2024-12-13 05:52:12.201395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.534 [2024-12-13 05:52:12.201429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.534 qpair failed and we were unable to recover it. 00:36:12.534 [2024-12-13 05:52:12.201572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.534 [2024-12-13 05:52:12.201605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.534 qpair failed and we were unable to recover it. 00:36:12.534 [2024-12-13 05:52:12.201806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.534 [2024-12-13 05:52:12.201840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.534 qpair failed and we were unable to recover it. 00:36:12.534 [2024-12-13 05:52:12.202024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.534 [2024-12-13 05:52:12.202056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.534 qpair failed and we were unable to recover it. 00:36:12.534 [2024-12-13 05:52:12.202189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.534 [2024-12-13 05:52:12.202231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.534 qpair failed and we were unable to recover it. 
00:36:12.534 [2024-12-13 05:52:12.202345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.534 [2024-12-13 05:52:12.202390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.534 qpair failed and we were unable to recover it. 00:36:12.534 [2024-12-13 05:52:12.202608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.534 [2024-12-13 05:52:12.202656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.534 qpair failed and we were unable to recover it. 00:36:12.534 [2024-12-13 05:52:12.202778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.534 [2024-12-13 05:52:12.202810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.534 qpair failed and we were unable to recover it. 00:36:12.534 [2024-12-13 05:52:12.203005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.534 [2024-12-13 05:52:12.203040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.534 qpair failed and we were unable to recover it. 00:36:12.534 [2024-12-13 05:52:12.203168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.534 [2024-12-13 05:52:12.203200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.534 qpair failed and we were unable to recover it. 00:36:12.534 [2024-12-13 05:52:12.203373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.534 [2024-12-13 05:52:12.203419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.534 qpair failed and we were unable to recover it. 00:36:12.534 [2024-12-13 05:52:12.203588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.534 [2024-12-13 05:52:12.203626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.534 qpair failed and we were unable to recover it. 00:36:12.534 [2024-12-13 05:52:12.203758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.534 [2024-12-13 05:52:12.203789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.534 qpair failed and we were unable to recover it. 00:36:12.534 [2024-12-13 05:52:12.203912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.534 [2024-12-13 05:52:12.203943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.534 qpair failed and we were unable to recover it. 00:36:12.534 [2024-12-13 05:52:12.204130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.534 [2024-12-13 05:52:12.204165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.534 qpair failed and we were unable to recover it. 
00:36:12.534 [2024-12-13 05:52:12.204352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.534 [2024-12-13 05:52:12.204382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.534 qpair failed and we were unable to recover it. 00:36:12.534 [2024-12-13 05:52:12.204577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.534 [2024-12-13 05:52:12.204610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.534 qpair failed and we were unable to recover it. 00:36:12.534 [2024-12-13 05:52:12.204852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.534 [2024-12-13 05:52:12.204883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.534 qpair failed and we were unable to recover it. 00:36:12.534 [2024-12-13 05:52:12.205079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.534 [2024-12-13 05:52:12.205111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.534 qpair failed and we were unable to recover it. 00:36:12.534 [2024-12-13 05:52:12.205296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.534 [2024-12-13 05:52:12.205327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.534 qpair failed and we were unable to recover it. 00:36:12.534 [2024-12-13 05:52:12.205520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.534 [2024-12-13 05:52:12.205553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.534 qpair failed and we were unable to recover it. 00:36:12.534 [2024-12-13 05:52:12.205732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.534 [2024-12-13 05:52:12.205763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.534 qpair failed and we were unable to recover it. 00:36:12.534 [2024-12-13 05:52:12.205880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.534 [2024-12-13 05:52:12.205913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.535 qpair failed and we were unable to recover it. 00:36:12.535 [2024-12-13 05:52:12.206125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.535 [2024-12-13 05:52:12.206157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.535 qpair failed and we were unable to recover it. 00:36:12.535 [2024-12-13 05:52:12.206356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.535 [2024-12-13 05:52:12.206388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.535 qpair failed and we were unable to recover it. 
00:36:12.535 [2024-12-13 05:52:12.206569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.535 [2024-12-13 05:52:12.206601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.535 qpair failed and we were unable to recover it. 00:36:12.535 [2024-12-13 05:52:12.206723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.535 [2024-12-13 05:52:12.206755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.535 qpair failed and we were unable to recover it. 00:36:12.535 [2024-12-13 05:52:12.206995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.535 [2024-12-13 05:52:12.207026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.535 qpair failed and we were unable to recover it. 00:36:12.535 [2024-12-13 05:52:12.207199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.535 [2024-12-13 05:52:12.207231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.535 qpair failed and we were unable to recover it. 00:36:12.535 [2024-12-13 05:52:12.207346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.535 [2024-12-13 05:52:12.207376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.535 qpair failed and we were unable to recover it. 00:36:12.535 [2024-12-13 05:52:12.207483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.535 [2024-12-13 05:52:12.207515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.535 qpair failed and we were unable to recover it. 00:36:12.535 [2024-12-13 05:52:12.207786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.535 [2024-12-13 05:52:12.207818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.535 qpair failed and we were unable to recover it. 00:36:12.535 [2024-12-13 05:52:12.207927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.535 [2024-12-13 05:52:12.207958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.535 qpair failed and we were unable to recover it. 00:36:12.535 [2024-12-13 05:52:12.208070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.535 [2024-12-13 05:52:12.208102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.535 qpair failed and we were unable to recover it. 00:36:12.535 [2024-12-13 05:52:12.208273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.535 [2024-12-13 05:52:12.208305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.535 qpair failed and we were unable to recover it. 
00:36:12.535 [2024-12-13 05:52:12.208436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.535 [2024-12-13 05:52:12.208479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.535 qpair failed and we were unable to recover it. 00:36:12.535 [2024-12-13 05:52:12.208718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.535 [2024-12-13 05:52:12.208749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.535 qpair failed and we were unable to recover it. 00:36:12.535 [2024-12-13 05:52:12.208874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.535 [2024-12-13 05:52:12.208905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.535 qpair failed and we were unable to recover it. 00:36:12.535 [2024-12-13 05:52:12.209010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.535 [2024-12-13 05:52:12.209041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.535 qpair failed and we were unable to recover it. 00:36:12.535 [2024-12-13 05:52:12.209224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.535 [2024-12-13 05:52:12.209254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.535 qpair failed and we were unable to recover it. 00:36:12.535 [2024-12-13 05:52:12.209375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.535 [2024-12-13 05:52:12.209408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.535 qpair failed and we were unable to recover it. 00:36:12.535 [2024-12-13 05:52:12.209537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.535 [2024-12-13 05:52:12.209569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.535 qpair failed and we were unable to recover it. 00:36:12.535 [2024-12-13 05:52:12.209681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.535 [2024-12-13 05:52:12.209712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.535 qpair failed and we were unable to recover it. 00:36:12.535 [2024-12-13 05:52:12.209888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.535 [2024-12-13 05:52:12.209919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.535 qpair failed and we were unable to recover it. 00:36:12.535 [2024-12-13 05:52:12.210039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.535 [2024-12-13 05:52:12.210077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.535 qpair failed and we were unable to recover it. 
00:36:12.535 [2024-12-13 05:52:12.210192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.535 [2024-12-13 05:52:12.210223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.535 qpair failed and we were unable to recover it.
[... the identical posix_sock_create "connect() failed, errno = 111" / nvme_tcp_qpair_connect_sock "sock connection error" / "qpair failed and we were unable to recover it" triplet repeats continuously from 05:52:12.210 through 05:52:12.250; roughly 200 repetitions elided. The entries differ only in timestamp and in the tqpair handle being reconnected (0x7f8618000b90, 0x15c76a0, 0x7f8624000b90); every attempt targets addr=10.0.0.2, port=4420 and fails the same way ...]
00:36:12.541 [2024-12-13 05:52:12.250155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.541 [2024-12-13 05:52:12.250187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.541 qpair failed and we were unable to recover it.
00:36:12.541 [2024-12-13 05:52:12.250355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.541 [2024-12-13 05:52:12.250387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.541 qpair failed and we were unable to recover it. 00:36:12.541 [2024-12-13 05:52:12.250514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.541 [2024-12-13 05:52:12.250547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.541 qpair failed and we were unable to recover it. 00:36:12.541 [2024-12-13 05:52:12.250757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.541 [2024-12-13 05:52:12.250790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.541 qpair failed and we were unable to recover it. 00:36:12.541 [2024-12-13 05:52:12.250962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.541 [2024-12-13 05:52:12.251034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.541 qpair failed and we were unable to recover it. 00:36:12.541 [2024-12-13 05:52:12.251182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.541 [2024-12-13 05:52:12.251216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.541 qpair failed and we were unable to recover it. 00:36:12.541 [2024-12-13 05:52:12.251337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.541 [2024-12-13 05:52:12.251369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.541 qpair failed and we were unable to recover it. 00:36:12.541 [2024-12-13 05:52:12.251488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.541 [2024-12-13 05:52:12.251521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.541 qpair failed and we were unable to recover it. 00:36:12.541 [2024-12-13 05:52:12.251634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.541 [2024-12-13 05:52:12.251665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.541 qpair failed and we were unable to recover it. 00:36:12.541 [2024-12-13 05:52:12.251780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.541 [2024-12-13 05:52:12.251812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.541 qpair failed and we were unable to recover it. 00:36:12.541 [2024-12-13 05:52:12.251919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.541 [2024-12-13 05:52:12.251953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.541 qpair failed and we were unable to recover it. 
00:36:12.541 [2024-12-13 05:52:12.252079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.541 [2024-12-13 05:52:12.252110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.541 qpair failed and we were unable to recover it. 00:36:12.541 [2024-12-13 05:52:12.252301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.541 [2024-12-13 05:52:12.252334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.541 qpair failed and we were unable to recover it. 00:36:12.541 [2024-12-13 05:52:12.252524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.541 [2024-12-13 05:52:12.252559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.541 qpair failed and we were unable to recover it. 00:36:12.541 [2024-12-13 05:52:12.252750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.541 [2024-12-13 05:52:12.252782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.541 qpair failed and we were unable to recover it. 00:36:12.541 [2024-12-13 05:52:12.252963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.541 [2024-12-13 05:52:12.252995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.541 qpair failed and we were unable to recover it. 00:36:12.541 [2024-12-13 05:52:12.253173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.541 [2024-12-13 05:52:12.253206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.541 qpair failed and we were unable to recover it. 00:36:12.541 [2024-12-13 05:52:12.253327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.541 [2024-12-13 05:52:12.253360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.541 qpair failed and we were unable to recover it. 00:36:12.541 [2024-12-13 05:52:12.253569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.541 [2024-12-13 05:52:12.253605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.541 qpair failed and we were unable to recover it. 00:36:12.541 [2024-12-13 05:52:12.253850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.541 [2024-12-13 05:52:12.253883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.541 qpair failed and we were unable to recover it. 00:36:12.541 [2024-12-13 05:52:12.254003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.541 [2024-12-13 05:52:12.254035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.541 qpair failed and we were unable to recover it. 
00:36:12.541 [2024-12-13 05:52:12.254149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.541 [2024-12-13 05:52:12.254181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.541 qpair failed and we were unable to recover it. 00:36:12.541 [2024-12-13 05:52:12.254363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.541 [2024-12-13 05:52:12.254397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.541 qpair failed and we were unable to recover it. 00:36:12.541 [2024-12-13 05:52:12.254530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.541 [2024-12-13 05:52:12.254562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.541 qpair failed and we were unable to recover it. 00:36:12.541 [2024-12-13 05:52:12.254757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.541 [2024-12-13 05:52:12.254790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.541 qpair failed and we were unable to recover it. 00:36:12.541 [2024-12-13 05:52:12.254895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.541 [2024-12-13 05:52:12.254928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.541 qpair failed and we were unable to recover it. 00:36:12.541 [2024-12-13 05:52:12.255108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.541 [2024-12-13 05:52:12.255139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.541 qpair failed and we were unable to recover it. 00:36:12.541 [2024-12-13 05:52:12.255271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.541 [2024-12-13 05:52:12.255302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.541 qpair failed and we were unable to recover it. 00:36:12.541 [2024-12-13 05:52:12.255487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.541 [2024-12-13 05:52:12.255528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.542 qpair failed and we were unable to recover it. 00:36:12.542 [2024-12-13 05:52:12.255655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.542 [2024-12-13 05:52:12.255688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.542 qpair failed and we were unable to recover it. 00:36:12.542 [2024-12-13 05:52:12.255802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.542 [2024-12-13 05:52:12.255833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.542 qpair failed and we were unable to recover it. 
00:36:12.542 [2024-12-13 05:52:12.255950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.542 [2024-12-13 05:52:12.255989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.542 qpair failed and we were unable to recover it. 00:36:12.542 [2024-12-13 05:52:12.256177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.542 [2024-12-13 05:52:12.256209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.542 qpair failed and we were unable to recover it. 00:36:12.542 [2024-12-13 05:52:12.256327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.542 [2024-12-13 05:52:12.256358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.542 qpair failed and we were unable to recover it. 00:36:12.542 [2024-12-13 05:52:12.256487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.542 [2024-12-13 05:52:12.256521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.542 qpair failed and we were unable to recover it. 00:36:12.542 [2024-12-13 05:52:12.256722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.542 [2024-12-13 05:52:12.256755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.542 qpair failed and we were unable to recover it. 00:36:12.542 [2024-12-13 05:52:12.256880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.542 [2024-12-13 05:52:12.256911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.542 qpair failed and we were unable to recover it. 00:36:12.542 [2024-12-13 05:52:12.257098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.542 [2024-12-13 05:52:12.257129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.542 qpair failed and we were unable to recover it. 00:36:12.542 [2024-12-13 05:52:12.257252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.542 [2024-12-13 05:52:12.257290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.542 qpair failed and we were unable to recover it. 00:36:12.542 [2024-12-13 05:52:12.257503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.542 [2024-12-13 05:52:12.257537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.542 qpair failed and we were unable to recover it. 00:36:12.542 [2024-12-13 05:52:12.257728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.542 [2024-12-13 05:52:12.257760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.542 qpair failed and we were unable to recover it. 
00:36:12.542 [2024-12-13 05:52:12.257939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.542 [2024-12-13 05:52:12.257970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.542 qpair failed and we were unable to recover it. 00:36:12.542 [2024-12-13 05:52:12.258156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.542 [2024-12-13 05:52:12.258190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.542 qpair failed and we were unable to recover it. 00:36:12.542 [2024-12-13 05:52:12.258399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.542 [2024-12-13 05:52:12.258431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.542 qpair failed and we were unable to recover it. 00:36:12.542 [2024-12-13 05:52:12.258572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.542 [2024-12-13 05:52:12.258605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.542 qpair failed and we were unable to recover it. 00:36:12.542 [2024-12-13 05:52:12.258823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.542 [2024-12-13 05:52:12.258856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.542 qpair failed and we were unable to recover it. 00:36:12.542 [2024-12-13 05:52:12.259041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.542 [2024-12-13 05:52:12.259074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.542 qpair failed and we were unable to recover it. 00:36:12.542 [2024-12-13 05:52:12.259194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.542 [2024-12-13 05:52:12.259225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.542 qpair failed and we were unable to recover it. 00:36:12.542 [2024-12-13 05:52:12.259332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.542 [2024-12-13 05:52:12.259364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.542 qpair failed and we were unable to recover it. 00:36:12.542 [2024-12-13 05:52:12.259549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.542 [2024-12-13 05:52:12.259583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.542 qpair failed and we were unable to recover it. 00:36:12.542 [2024-12-13 05:52:12.259700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.542 [2024-12-13 05:52:12.259731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.542 qpair failed and we were unable to recover it. 
00:36:12.542 [2024-12-13 05:52:12.259850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.542 [2024-12-13 05:52:12.259883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.542 qpair failed and we were unable to recover it. 00:36:12.542 [2024-12-13 05:52:12.260073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.542 [2024-12-13 05:52:12.260105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.542 qpair failed and we were unable to recover it. 00:36:12.542 [2024-12-13 05:52:12.260216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.542 [2024-12-13 05:52:12.260247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.542 qpair failed and we were unable to recover it. 00:36:12.542 [2024-12-13 05:52:12.260376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.542 [2024-12-13 05:52:12.260408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.542 qpair failed and we were unable to recover it. 00:36:12.542 [2024-12-13 05:52:12.260530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.542 [2024-12-13 05:52:12.260562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.542 qpair failed and we were unable to recover it. 00:36:12.542 [2024-12-13 05:52:12.260682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.542 [2024-12-13 05:52:12.260714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.542 qpair failed and we were unable to recover it. 00:36:12.542 [2024-12-13 05:52:12.260828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.542 [2024-12-13 05:52:12.260861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.542 qpair failed and we were unable to recover it. 00:36:12.542 [2024-12-13 05:52:12.261059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.542 [2024-12-13 05:52:12.261097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.542 qpair failed and we were unable to recover it. 00:36:12.543 [2024-12-13 05:52:12.261283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.543 [2024-12-13 05:52:12.261315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.543 qpair failed and we were unable to recover it. 00:36:12.543 [2024-12-13 05:52:12.261494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.543 [2024-12-13 05:52:12.261528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.543 qpair failed and we were unable to recover it. 
00:36:12.543 [2024-12-13 05:52:12.261654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.543 [2024-12-13 05:52:12.261686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.543 qpair failed and we were unable to recover it. 00:36:12.543 [2024-12-13 05:52:12.261862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.543 [2024-12-13 05:52:12.261895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.543 qpair failed and we were unable to recover it. 00:36:12.543 [2024-12-13 05:52:12.262074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.543 [2024-12-13 05:52:12.262105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.543 qpair failed and we were unable to recover it. 00:36:12.543 [2024-12-13 05:52:12.262208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.543 [2024-12-13 05:52:12.262239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.543 qpair failed and we were unable to recover it. 00:36:12.543 [2024-12-13 05:52:12.262367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.543 [2024-12-13 05:52:12.262406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.543 qpair failed and we were unable to recover it. 00:36:12.543 [2024-12-13 05:52:12.262605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.543 [2024-12-13 05:52:12.262638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.543 qpair failed and we were unable to recover it. 00:36:12.543 [2024-12-13 05:52:12.262758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.543 [2024-12-13 05:52:12.262789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.543 qpair failed and we were unable to recover it. 00:36:12.543 [2024-12-13 05:52:12.262913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.543 [2024-12-13 05:52:12.262947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.543 qpair failed and we were unable to recover it. 00:36:12.543 [2024-12-13 05:52:12.263143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.543 [2024-12-13 05:52:12.263179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.543 qpair failed and we were unable to recover it. 00:36:12.543 [2024-12-13 05:52:12.263311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.543 [2024-12-13 05:52:12.263345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.543 qpair failed and we were unable to recover it. 
00:36:12.543 [2024-12-13 05:52:12.263464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.543 [2024-12-13 05:52:12.263497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.543 qpair failed and we were unable to recover it. 00:36:12.543 [2024-12-13 05:52:12.263693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.543 [2024-12-13 05:52:12.263736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.543 qpair failed and we were unable to recover it. 00:36:12.543 [2024-12-13 05:52:12.263978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.543 [2024-12-13 05:52:12.264012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.543 qpair failed and we were unable to recover it. 00:36:12.543 [2024-12-13 05:52:12.264203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.543 [2024-12-13 05:52:12.264239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.543 qpair failed and we were unable to recover it. 00:36:12.543 [2024-12-13 05:52:12.264363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.543 [2024-12-13 05:52:12.264394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.543 qpair failed and we were unable to recover it. 00:36:12.543 [2024-12-13 05:52:12.264578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.543 [2024-12-13 05:52:12.264610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.543 qpair failed and we were unable to recover it. 00:36:12.543 [2024-12-13 05:52:12.264735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.543 [2024-12-13 05:52:12.264768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.543 qpair failed and we were unable to recover it. 00:36:12.543 [2024-12-13 05:52:12.264942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.543 [2024-12-13 05:52:12.264978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.543 qpair failed and we were unable to recover it. 00:36:12.543 [2024-12-13 05:52:12.265176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.543 [2024-12-13 05:52:12.265209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.543 qpair failed and we were unable to recover it. 00:36:12.543 [2024-12-13 05:52:12.265476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.543 [2024-12-13 05:52:12.265511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.543 qpair failed and we were unable to recover it. 
00:36:12.543 [2024-12-13 05:52:12.265692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.543 [2024-12-13 05:52:12.265725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.543 qpair failed and we were unable to recover it. 00:36:12.543 [2024-12-13 05:52:12.265911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.543 [2024-12-13 05:52:12.265947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.543 qpair failed and we were unable to recover it. 00:36:12.543 [2024-12-13 05:52:12.266075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.543 [2024-12-13 05:52:12.266107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.543 qpair failed and we were unable to recover it. 00:36:12.543 [2024-12-13 05:52:12.266219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.543 [2024-12-13 05:52:12.266252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.543 qpair failed and we were unable to recover it. 00:36:12.543 [2024-12-13 05:52:12.266370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.543 [2024-12-13 05:52:12.266402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.543 qpair failed and we were unable to recover it. 00:36:12.543 [2024-12-13 05:52:12.266617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.543 [2024-12-13 05:52:12.266650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.543 qpair failed and we were unable to recover it. 00:36:12.543 [2024-12-13 05:52:12.266842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.543 [2024-12-13 05:52:12.266874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.543 qpair failed and we were unable to recover it. 00:36:12.543 [2024-12-13 05:52:12.266976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.543 [2024-12-13 05:52:12.267008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.543 qpair failed and we were unable to recover it. 00:36:12.543 [2024-12-13 05:52:12.267266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.543 [2024-12-13 05:52:12.267300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.543 qpair failed and we were unable to recover it. 00:36:12.543 [2024-12-13 05:52:12.267403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.543 [2024-12-13 05:52:12.267434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.543 qpair failed and we were unable to recover it. 
00:36:12.543 [2024-12-13 05:52:12.267591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.543 [2024-12-13 05:52:12.267624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.543 qpair failed and we were unable to recover it. 00:36:12.543 [2024-12-13 05:52:12.267885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.543 [2024-12-13 05:52:12.267917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.543 qpair failed and we were unable to recover it. 00:36:12.543 [2024-12-13 05:52:12.268057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.543 [2024-12-13 05:52:12.268090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.543 qpair failed and we were unable to recover it. 00:36:12.543 [2024-12-13 05:52:12.268200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.543 [2024-12-13 05:52:12.268231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.543 qpair failed and we were unable to recover it. 00:36:12.543 [2024-12-13 05:52:12.268347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.543 [2024-12-13 05:52:12.268379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.543 qpair failed and we were unable to recover it. 00:36:12.543 [2024-12-13 05:52:12.268552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.543 [2024-12-13 05:52:12.268586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.543 qpair failed and we were unable to recover it. 00:36:12.543 [2024-12-13 05:52:12.268769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.543 [2024-12-13 05:52:12.268803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.543 qpair failed and we were unable to recover it. 00:36:12.543 [2024-12-13 05:52:12.268988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.544 [2024-12-13 05:52:12.269019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.544 qpair failed and we were unable to recover it. 00:36:12.544 [2024-12-13 05:52:12.269202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.544 [2024-12-13 05:52:12.269237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.544 qpair failed and we were unable to recover it. 00:36:12.544 [2024-12-13 05:52:12.269421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.544 [2024-12-13 05:52:12.269466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.544 qpair failed and we were unable to recover it. 
00:36:12.544 [2024-12-13 05:52:12.269708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.544 [2024-12-13 05:52:12.269741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.544 qpair failed and we were unable to recover it. 00:36:12.544 [2024-12-13 05:52:12.269928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.544 [2024-12-13 05:52:12.269960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.544 qpair failed and we were unable to recover it. 00:36:12.544 [2024-12-13 05:52:12.270094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.544 [2024-12-13 05:52:12.270127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.544 qpair failed and we were unable to recover it. 00:36:12.544 [2024-12-13 05:52:12.270320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.544 [2024-12-13 05:52:12.270352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.544 qpair failed and we were unable to recover it. 00:36:12.544 [2024-12-13 05:52:12.270478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.544 [2024-12-13 05:52:12.270515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.544 qpair failed and we were unable to recover it. 00:36:12.544 [2024-12-13 05:52:12.270756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.544 [2024-12-13 05:52:12.270789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.544 qpair failed and we were unable to recover it. 00:36:12.544 [2024-12-13 05:52:12.270981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.544 [2024-12-13 05:52:12.271013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.544 qpair failed and we were unable to recover it. 00:36:12.544 [2024-12-13 05:52:12.271224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.544 [2024-12-13 05:52:12.271258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.544 qpair failed and we were unable to recover it. 00:36:12.544 [2024-12-13 05:52:12.271364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.544 [2024-12-13 05:52:12.271395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.544 qpair failed and we were unable to recover it. 00:36:12.544 [2024-12-13 05:52:12.271576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.544 [2024-12-13 05:52:12.271610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.544 qpair failed and we were unable to recover it. 
00:36:12.544 [2024-12-13 05:52:12.271861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.544 [2024-12-13 05:52:12.271895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.544 qpair failed and we were unable to recover it. 00:36:12.544 [2024-12-13 05:52:12.272074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.544 [2024-12-13 05:52:12.272106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.544 qpair failed and we were unable to recover it. 00:36:12.544 [2024-12-13 05:52:12.272349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.544 [2024-12-13 05:52:12.272382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.544 qpair failed and we were unable to recover it. 00:36:12.544 [2024-12-13 05:52:12.272521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.544 [2024-12-13 05:52:12.272554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.544 qpair failed and we were unable to recover it. 00:36:12.544 [2024-12-13 05:52:12.272678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.544 [2024-12-13 05:52:12.272710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.544 qpair failed and we were unable to recover it. 00:36:12.544 [2024-12-13 05:52:12.272842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.544 [2024-12-13 05:52:12.272873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.544 qpair failed and we were unable to recover it. 00:36:12.544 [2024-12-13 05:52:12.273080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.544 [2024-12-13 05:52:12.273112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.544 qpair failed and we were unable to recover it. 00:36:12.544 [2024-12-13 05:52:12.273234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.544 [2024-12-13 05:52:12.273265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.544 qpair failed and we were unable to recover it. 00:36:12.544 [2024-12-13 05:52:12.273385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.544 [2024-12-13 05:52:12.273418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.544 qpair failed and we were unable to recover it. 00:36:12.544 [2024-12-13 05:52:12.273558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.544 [2024-12-13 05:52:12.273592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.544 qpair failed and we were unable to recover it. 
00:36:12.544 [2024-12-13 05:52:12.273829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.544 [2024-12-13 05:52:12.273861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.544 qpair failed and we were unable to recover it. 00:36:12.544 [2024-12-13 05:52:12.273982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.544 [2024-12-13 05:52:12.274015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.544 qpair failed and we were unable to recover it. 00:36:12.544 [2024-12-13 05:52:12.274125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.544 [2024-12-13 05:52:12.274156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.544 qpair failed and we were unable to recover it. 00:36:12.544 [2024-12-13 05:52:12.274323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.544 [2024-12-13 05:52:12.274356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.544 qpair failed and we were unable to recover it. 00:36:12.544 [2024-12-13 05:52:12.274539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.544 [2024-12-13 05:52:12.274573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.544 qpair failed and we were unable to recover it. 00:36:12.544 [2024-12-13 05:52:12.274709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.544 [2024-12-13 05:52:12.274758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.544 qpair failed and we were unable to recover it. 00:36:12.544 [2024-12-13 05:52:12.274877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.544 [2024-12-13 05:52:12.274909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.544 qpair failed and we were unable to recover it. 00:36:12.544 [2024-12-13 05:52:12.275092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.544 [2024-12-13 05:52:12.275123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.544 qpair failed and we were unable to recover it. 00:36:12.544 [2024-12-13 05:52:12.275229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.544 [2024-12-13 05:52:12.275262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.544 qpair failed and we were unable to recover it. 00:36:12.544 [2024-12-13 05:52:12.275380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.544 [2024-12-13 05:52:12.275412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.544 qpair failed and we were unable to recover it. 
00:36:12.544 [2024-12-13 05:52:12.275632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.544 [2024-12-13 05:52:12.275665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.544 qpair failed and we were unable to recover it.
00:36:12.545 [... the connect()/qpair error pair above repeats continuously for tqpair=0x15c76a0 through 2024-12-13 05:52:12.287200 ...]
00:36:12.546 [2024-12-13 05:52:12.287472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.546 [2024-12-13 05:52:12.287542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.546 qpair failed and we were unable to recover it.
00:36:12.546 [... the same error pair repeats for tqpair=0x7f8618000b90 through 2024-12-13 05:52:12.294502 ...]
00:36:12.547 [2024-12-13 05:52:12.294675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.547 [2024-12-13 05:52:12.294746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.547 qpair failed and we were unable to recover it.
00:36:12.550 [... the same error pair repeats for tqpair=0x7f861c000b90 through 2024-12-13 05:52:12.315638: every TCP connect() attempt to 10.0.0.2:4420 is refused (errno = 111, ECONNREFUSED) and the qpair cannot be recovered ...]
00:36:12.550 [2024-12-13 05:52:12.315830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.550 [2024-12-13 05:52:12.315862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.550 qpair failed and we were unable to recover it. 00:36:12.550 [2024-12-13 05:52:12.316035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.550 [2024-12-13 05:52:12.316067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.550 qpair failed and we were unable to recover it. 00:36:12.550 [2024-12-13 05:52:12.316170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.550 [2024-12-13 05:52:12.316202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.550 qpair failed and we were unable to recover it. 00:36:12.550 [2024-12-13 05:52:12.316308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.550 [2024-12-13 05:52:12.316340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.550 qpair failed and we were unable to recover it. 00:36:12.550 [2024-12-13 05:52:12.316465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.550 [2024-12-13 05:52:12.316498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.550 qpair failed and we were unable to recover it. 00:36:12.550 [2024-12-13 05:52:12.316678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.550 [2024-12-13 05:52:12.316710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.550 qpair failed and we were unable to recover it. 00:36:12.550 [2024-12-13 05:52:12.316816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.550 [2024-12-13 05:52:12.316848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.550 qpair failed and we were unable to recover it. 00:36:12.550 [2024-12-13 05:52:12.316951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.550 [2024-12-13 05:52:12.316983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.550 qpair failed and we were unable to recover it. 00:36:12.550 [2024-12-13 05:52:12.317226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.550 [2024-12-13 05:52:12.317258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.550 qpair failed and we were unable to recover it. 00:36:12.550 [2024-12-13 05:52:12.317430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.550 [2024-12-13 05:52:12.317473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.550 qpair failed and we were unable to recover it. 
00:36:12.550 [2024-12-13 05:52:12.317602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.550 [2024-12-13 05:52:12.317634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.550 qpair failed and we were unable to recover it. 00:36:12.550 [2024-12-13 05:52:12.317822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.550 [2024-12-13 05:52:12.317855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.550 qpair failed and we were unable to recover it. 00:36:12.550 [2024-12-13 05:52:12.317973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.550 [2024-12-13 05:52:12.318006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.550 qpair failed and we were unable to recover it. 00:36:12.550 [2024-12-13 05:52:12.318120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.550 [2024-12-13 05:52:12.318152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.550 qpair failed and we were unable to recover it. 00:36:12.550 [2024-12-13 05:52:12.318364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.550 [2024-12-13 05:52:12.318396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.550 qpair failed and we were unable to recover it. 00:36:12.550 [2024-12-13 05:52:12.318521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.551 [2024-12-13 05:52:12.318555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.551 qpair failed and we were unable to recover it. 00:36:12.551 [2024-12-13 05:52:12.318671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.551 [2024-12-13 05:52:12.318704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.551 qpair failed and we were unable to recover it. 00:36:12.551 [2024-12-13 05:52:12.318889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.551 [2024-12-13 05:52:12.318920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.551 qpair failed and we were unable to recover it. 00:36:12.551 [2024-12-13 05:52:12.319024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.551 [2024-12-13 05:52:12.319056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.551 qpair failed and we were unable to recover it. 00:36:12.551 [2024-12-13 05:52:12.319226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.551 [2024-12-13 05:52:12.319259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.551 qpair failed and we were unable to recover it. 
00:36:12.551 [2024-12-13 05:52:12.319362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.551 [2024-12-13 05:52:12.319394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.551 qpair failed and we were unable to recover it. 00:36:12.551 [2024-12-13 05:52:12.319652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.551 [2024-12-13 05:52:12.319686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.551 qpair failed and we were unable to recover it. 00:36:12.551 [2024-12-13 05:52:12.319946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.551 [2024-12-13 05:52:12.319978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.551 qpair failed and we were unable to recover it. 00:36:12.551 [2024-12-13 05:52:12.320145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.551 [2024-12-13 05:52:12.320177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.551 qpair failed and we were unable to recover it. 00:36:12.551 [2024-12-13 05:52:12.320364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.551 [2024-12-13 05:52:12.320396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.551 qpair failed and we were unable to recover it. 00:36:12.551 [2024-12-13 05:52:12.320563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.551 [2024-12-13 05:52:12.320597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.551 qpair failed and we were unable to recover it. 00:36:12.551 [2024-12-13 05:52:12.320708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.551 [2024-12-13 05:52:12.320746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.551 qpair failed and we were unable to recover it. 00:36:12.551 [2024-12-13 05:52:12.320855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.551 [2024-12-13 05:52:12.320887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.551 qpair failed and we were unable to recover it. 00:36:12.551 [2024-12-13 05:52:12.321104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.551 [2024-12-13 05:52:12.321136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.551 qpair failed and we were unable to recover it. 00:36:12.551 [2024-12-13 05:52:12.321306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.551 [2024-12-13 05:52:12.321337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.551 qpair failed and we were unable to recover it. 
00:36:12.551 [2024-12-13 05:52:12.321527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.551 [2024-12-13 05:52:12.321561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.551 qpair failed and we were unable to recover it. 00:36:12.551 [2024-12-13 05:52:12.321738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.551 [2024-12-13 05:52:12.321770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.551 qpair failed and we were unable to recover it. 00:36:12.551 [2024-12-13 05:52:12.321891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.551 [2024-12-13 05:52:12.321923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.551 qpair failed and we were unable to recover it. 00:36:12.551 [2024-12-13 05:52:12.322047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.551 [2024-12-13 05:52:12.322079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.551 qpair failed and we were unable to recover it. 00:36:12.551 [2024-12-13 05:52:12.322205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.551 [2024-12-13 05:52:12.322238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.551 qpair failed and we were unable to recover it. 00:36:12.551 [2024-12-13 05:52:12.322357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.551 [2024-12-13 05:52:12.322389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.551 qpair failed and we were unable to recover it. 00:36:12.551 [2024-12-13 05:52:12.322520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.551 [2024-12-13 05:52:12.322553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.551 qpair failed and we were unable to recover it. 00:36:12.551 [2024-12-13 05:52:12.322681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.551 [2024-12-13 05:52:12.322713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.551 qpair failed and we were unable to recover it. 00:36:12.551 [2024-12-13 05:52:12.322842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.551 [2024-12-13 05:52:12.322875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.551 qpair failed and we were unable to recover it. 00:36:12.551 [2024-12-13 05:52:12.322999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.551 [2024-12-13 05:52:12.323031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.551 qpair failed and we were unable to recover it. 
00:36:12.551 [2024-12-13 05:52:12.323232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.551 [2024-12-13 05:52:12.323265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.551 qpair failed and we were unable to recover it. 00:36:12.551 [2024-12-13 05:52:12.323379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.551 [2024-12-13 05:52:12.323412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.551 qpair failed and we were unable to recover it. 00:36:12.551 [2024-12-13 05:52:12.323605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.551 [2024-12-13 05:52:12.323637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.551 qpair failed and we were unable to recover it. 00:36:12.551 [2024-12-13 05:52:12.323753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.551 [2024-12-13 05:52:12.323785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.551 qpair failed and we were unable to recover it. 00:36:12.551 [2024-12-13 05:52:12.323904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.551 [2024-12-13 05:52:12.323937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.551 qpair failed and we were unable to recover it. 00:36:12.551 [2024-12-13 05:52:12.324108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.551 [2024-12-13 05:52:12.324140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.551 qpair failed and we were unable to recover it. 00:36:12.551 [2024-12-13 05:52:12.324378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.551 [2024-12-13 05:52:12.324410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.551 qpair failed and we were unable to recover it. 00:36:12.551 [2024-12-13 05:52:12.324637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.551 [2024-12-13 05:52:12.324671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.551 qpair failed and we were unable to recover it. 00:36:12.551 [2024-12-13 05:52:12.324899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.551 [2024-12-13 05:52:12.324931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.551 qpair failed and we were unable to recover it. 00:36:12.551 [2024-12-13 05:52:12.325054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.551 [2024-12-13 05:52:12.325087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.551 qpair failed and we were unable to recover it. 
00:36:12.551 [2024-12-13 05:52:12.325266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.551 [2024-12-13 05:52:12.325298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.551 qpair failed and we were unable to recover it. 00:36:12.551 [2024-12-13 05:52:12.325480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.551 [2024-12-13 05:52:12.325514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.551 qpair failed and we were unable to recover it. 00:36:12.552 [2024-12-13 05:52:12.325642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.552 [2024-12-13 05:52:12.325674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.552 qpair failed and we were unable to recover it. 00:36:12.552 [2024-12-13 05:52:12.325794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.552 [2024-12-13 05:52:12.325827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.552 qpair failed and we were unable to recover it. 00:36:12.552 [2024-12-13 05:52:12.325945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.552 [2024-12-13 05:52:12.325977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.552 qpair failed and we were unable to recover it. 00:36:12.552 [2024-12-13 05:52:12.326080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.552 [2024-12-13 05:52:12.326112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.552 qpair failed and we were unable to recover it. 00:36:12.552 [2024-12-13 05:52:12.326303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.552 [2024-12-13 05:52:12.326336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.552 qpair failed and we were unable to recover it. 00:36:12.552 [2024-12-13 05:52:12.326442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.552 [2024-12-13 05:52:12.326483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.552 qpair failed and we were unable to recover it. 00:36:12.552 [2024-12-13 05:52:12.326677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.552 [2024-12-13 05:52:12.326709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.552 qpair failed and we were unable to recover it. 00:36:12.552 [2024-12-13 05:52:12.326827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.552 [2024-12-13 05:52:12.326859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.552 qpair failed and we were unable to recover it. 
00:36:12.552 [2024-12-13 05:52:12.327048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.552 [2024-12-13 05:52:12.327080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.552 qpair failed and we were unable to recover it. 00:36:12.552 [2024-12-13 05:52:12.327187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.552 [2024-12-13 05:52:12.327219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.552 qpair failed and we were unable to recover it. 00:36:12.552 [2024-12-13 05:52:12.327340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.552 [2024-12-13 05:52:12.327372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.552 qpair failed and we were unable to recover it. 00:36:12.552 [2024-12-13 05:52:12.327493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.552 [2024-12-13 05:52:12.327527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.552 qpair failed and we were unable to recover it. 00:36:12.552 [2024-12-13 05:52:12.327704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.552 [2024-12-13 05:52:12.327736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.552 qpair failed and we were unable to recover it. 00:36:12.552 [2024-12-13 05:52:12.327917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.552 [2024-12-13 05:52:12.327949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.552 qpair failed and we were unable to recover it. 00:36:12.552 [2024-12-13 05:52:12.328087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.552 [2024-12-13 05:52:12.328119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.552 qpair failed and we were unable to recover it. 00:36:12.552 [2024-12-13 05:52:12.328307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.552 [2024-12-13 05:52:12.328340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.552 qpair failed and we were unable to recover it. 00:36:12.552 [2024-12-13 05:52:12.328446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.552 [2024-12-13 05:52:12.328507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.552 qpair failed and we were unable to recover it. 00:36:12.552 [2024-12-13 05:52:12.328621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.552 [2024-12-13 05:52:12.328654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.552 qpair failed and we were unable to recover it. 
00:36:12.552 [2024-12-13 05:52:12.328791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.552 [2024-12-13 05:52:12.328823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.552 qpair failed and we were unable to recover it. 00:36:12.552 [2024-12-13 05:52:12.328935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.552 [2024-12-13 05:52:12.328967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.552 qpair failed and we were unable to recover it. 00:36:12.552 [2024-12-13 05:52:12.329140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.552 [2024-12-13 05:52:12.329171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.552 qpair failed and we were unable to recover it. 00:36:12.552 [2024-12-13 05:52:12.329408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.552 [2024-12-13 05:52:12.329440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.552 qpair failed and we were unable to recover it. 00:36:12.552 [2024-12-13 05:52:12.329643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.552 [2024-12-13 05:52:12.329676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.552 qpair failed and we were unable to recover it. 00:36:12.552 [2024-12-13 05:52:12.329783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.552 [2024-12-13 05:52:12.329815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.552 qpair failed and we were unable to recover it. 00:36:12.552 [2024-12-13 05:52:12.329986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.552 [2024-12-13 05:52:12.330018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.552 qpair failed and we were unable to recover it. 00:36:12.552 [2024-12-13 05:52:12.330128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.552 [2024-12-13 05:52:12.330160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.552 qpair failed and we were unable to recover it. 00:36:12.552 [2024-12-13 05:52:12.330285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.552 [2024-12-13 05:52:12.330317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.552 qpair failed and we were unable to recover it. 00:36:12.552 [2024-12-13 05:52:12.330442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.552 [2024-12-13 05:52:12.330486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.552 qpair failed and we were unable to recover it. 
00:36:12.552 [2024-12-13 05:52:12.330617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.552 [2024-12-13 05:52:12.330649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.552 qpair failed and we were unable to recover it. 00:36:12.552 [2024-12-13 05:52:12.330889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.552 [2024-12-13 05:52:12.330924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.552 qpair failed and we were unable to recover it. 00:36:12.552 [2024-12-13 05:52:12.331101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.552 [2024-12-13 05:52:12.331133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.552 qpair failed and we were unable to recover it. 00:36:12.552 [2024-12-13 05:52:12.331260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.553 [2024-12-13 05:52:12.331292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.553 qpair failed and we were unable to recover it. 00:36:12.553 [2024-12-13 05:52:12.331471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.553 [2024-12-13 05:52:12.331504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.553 qpair failed and we were unable to recover it. 00:36:12.553 [2024-12-13 05:52:12.331716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.553 [2024-12-13 05:52:12.331748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.553 qpair failed and we were unable to recover it. 00:36:12.553 [2024-12-13 05:52:12.332010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.553 [2024-12-13 05:52:12.332042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.553 qpair failed and we were unable to recover it. 00:36:12.553 [2024-12-13 05:52:12.332155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.553 [2024-12-13 05:52:12.332187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.553 qpair failed and we were unable to recover it. 00:36:12.553 [2024-12-13 05:52:12.332318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.553 [2024-12-13 05:52:12.332350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.553 qpair failed and we were unable to recover it. 00:36:12.553 [2024-12-13 05:52:12.332477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.553 [2024-12-13 05:52:12.332511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.553 qpair failed and we were unable to recover it. 
00:36:12.553 [2024-12-13 05:52:12.332681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.553 [2024-12-13 05:52:12.332714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.553 qpair failed and we were unable to recover it. 00:36:12.553 [2024-12-13 05:52:12.332891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.553 [2024-12-13 05:52:12.332923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.553 qpair failed and we were unable to recover it. 00:36:12.553 [2024-12-13 05:52:12.333036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.553 [2024-12-13 05:52:12.333068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.553 qpair failed and we were unable to recover it. 00:36:12.553 [2024-12-13 05:52:12.333187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.553 [2024-12-13 05:52:12.333225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.553 qpair failed and we were unable to recover it. 00:36:12.553 [2024-12-13 05:52:12.333401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.553 [2024-12-13 05:52:12.333433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.553 qpair failed and we were unable to recover it. 00:36:12.553 [2024-12-13 05:52:12.333562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.553 [2024-12-13 05:52:12.333595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.553 qpair failed and we were unable to recover it. 00:36:12.553 [2024-12-13 05:52:12.333708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.553 [2024-12-13 05:52:12.333741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.553 qpair failed and we were unable to recover it. 00:36:12.553 [2024-12-13 05:52:12.333941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.553 [2024-12-13 05:52:12.333972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.553 qpair failed and we were unable to recover it. 00:36:12.553 [2024-12-13 05:52:12.334093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.553 [2024-12-13 05:52:12.334125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.553 qpair failed and we were unable to recover it. 00:36:12.553 [2024-12-13 05:52:12.334299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.553 [2024-12-13 05:52:12.334332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.553 qpair failed and we were unable to recover it. 
00:36:12.553 [2024-12-13 05:52:12.334460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.553 [2024-12-13 05:52:12.334493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.553 qpair failed and we were unable to recover it. 00:36:12.553 [2024-12-13 05:52:12.334597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.553 [2024-12-13 05:52:12.334629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.553 qpair failed and we were unable to recover it. 00:36:12.553 [2024-12-13 05:52:12.334754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.553 [2024-12-13 05:52:12.334786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.553 qpair failed and we were unable to recover it. 00:36:12.553 [2024-12-13 05:52:12.334912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.553 [2024-12-13 05:52:12.334944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.553 qpair failed and we were unable to recover it. 00:36:12.553 [2024-12-13 05:52:12.335122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.553 [2024-12-13 05:52:12.335154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.553 qpair failed and we were unable to recover it. 00:36:12.553 [2024-12-13 05:52:12.335332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.553 [2024-12-13 05:52:12.335364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.553 qpair failed and we were unable to recover it. 00:36:12.553 [2024-12-13 05:52:12.335476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.553 [2024-12-13 05:52:12.335510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.553 qpair failed and we were unable to recover it. 00:36:12.553 [2024-12-13 05:52:12.335625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.553 [2024-12-13 05:52:12.335657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.553 qpair failed and we were unable to recover it. 00:36:12.553 [2024-12-13 05:52:12.335884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.553 [2024-12-13 05:52:12.335916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.553 qpair failed and we were unable to recover it. 00:36:12.553 [2024-12-13 05:52:12.336037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.553 [2024-12-13 05:52:12.336069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.553 qpair failed and we were unable to recover it. 
00:36:12.553 [2024-12-13 05:52:12.336173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.553 [2024-12-13 05:52:12.336205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.553 qpair failed and we were unable to recover it. 00:36:12.553 [2024-12-13 05:52:12.336326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.553 [2024-12-13 05:52:12.336358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.553 qpair failed and we were unable to recover it. 00:36:12.553 [2024-12-13 05:52:12.336599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.553 [2024-12-13 05:52:12.336632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.553 qpair failed and we were unable to recover it. 00:36:12.553 [2024-12-13 05:52:12.336763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.553 [2024-12-13 05:52:12.336796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.553 qpair failed and we were unable to recover it. 00:36:12.553 [2024-12-13 05:52:12.337002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.553 [2024-12-13 05:52:12.337035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.553 qpair failed and we were unable to recover it. 00:36:12.553 [2024-12-13 05:52:12.337231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.553 [2024-12-13 05:52:12.337263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.553 qpair failed and we were unable to recover it. 00:36:12.553 [2024-12-13 05:52:12.337438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.553 [2024-12-13 05:52:12.337481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.553 qpair failed and we were unable to recover it. 00:36:12.553 [2024-12-13 05:52:12.337613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.553 [2024-12-13 05:52:12.337646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.553 qpair failed and we were unable to recover it. 00:36:12.553 [2024-12-13 05:52:12.337839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.553 [2024-12-13 05:52:12.337871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.553 qpair failed and we were unable to recover it. 00:36:12.553 [2024-12-13 05:52:12.338075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.553 [2024-12-13 05:52:12.338108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.553 qpair failed and we were unable to recover it. 
00:36:12.553 [2024-12-13 05:52:12.338241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.553 [2024-12-13 05:52:12.338273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.553 qpair failed and we were unable to recover it. 00:36:12.553 [2024-12-13 05:52:12.338459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.553 [2024-12-13 05:52:12.338492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.553 qpair failed and we were unable to recover it. 00:36:12.553 [2024-12-13 05:52:12.338603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.554 [2024-12-13 05:52:12.338635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.554 qpair failed and we were unable to recover it. 00:36:12.554 [2024-12-13 05:52:12.338747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.554 [2024-12-13 05:52:12.338780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.554 qpair failed and we were unable to recover it. 00:36:12.554 [2024-12-13 05:52:12.338970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.554 [2024-12-13 05:52:12.339003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.554 qpair failed and we were unable to recover it. 00:36:12.554 [2024-12-13 05:52:12.339125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.554 [2024-12-13 05:52:12.339156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.554 qpair failed and we were unable to recover it. 00:36:12.554 [2024-12-13 05:52:12.339259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.554 [2024-12-13 05:52:12.339291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.554 qpair failed and we were unable to recover it. 00:36:12.554 [2024-12-13 05:52:12.339406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.554 [2024-12-13 05:52:12.339438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.554 qpair failed and we were unable to recover it. 00:36:12.554 [2024-12-13 05:52:12.339622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.554 [2024-12-13 05:52:12.339656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.554 qpair failed and we were unable to recover it. 00:36:12.554 [2024-12-13 05:52:12.339761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.554 [2024-12-13 05:52:12.339794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.554 qpair failed and we were unable to recover it. 
00:36:12.554 [2024-12-13 05:52:12.339895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.554 [2024-12-13 05:52:12.339927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.554 qpair failed and we were unable to recover it.
[the three-message sequence above (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats verbatim approximately 210 times between 05:52:12.339895 and 05:52:12.378705, differing only in timestamps; every connection attempt in this span fails the same way and no qpair is recovered]
00:36:12.559 [2024-12-13 05:52:12.378806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.559 [2024-12-13 05:52:12.378837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.559 qpair failed and we were unable to recover it. 00:36:12.559 [2024-12-13 05:52:12.379033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.559 [2024-12-13 05:52:12.379065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.559 qpair failed and we were unable to recover it. 00:36:12.559 [2024-12-13 05:52:12.379185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.559 [2024-12-13 05:52:12.379218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.559 qpair failed and we were unable to recover it. 00:36:12.559 [2024-12-13 05:52:12.379394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.559 [2024-12-13 05:52:12.379426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.560 qpair failed and we were unable to recover it. 00:36:12.560 [2024-12-13 05:52:12.379569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.560 [2024-12-13 05:52:12.379606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.560 qpair failed and we were unable to recover it. 00:36:12.560 [2024-12-13 05:52:12.379725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.560 [2024-12-13 05:52:12.379755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.560 qpair failed and we were unable to recover it. 00:36:12.560 [2024-12-13 05:52:12.379870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.560 [2024-12-13 05:52:12.379900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.560 qpair failed and we were unable to recover it. 00:36:12.560 [2024-12-13 05:52:12.380020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.560 [2024-12-13 05:52:12.380051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.560 qpair failed and we were unable to recover it. 00:36:12.560 [2024-12-13 05:52:12.380158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.560 [2024-12-13 05:52:12.380189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.560 qpair failed and we were unable to recover it. 00:36:12.560 [2024-12-13 05:52:12.380314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.560 [2024-12-13 05:52:12.380346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.560 qpair failed and we were unable to recover it. 
00:36:12.560 [2024-12-13 05:52:12.380475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.560 [2024-12-13 05:52:12.380521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.560 qpair failed and we were unable to recover it. 00:36:12.560 [2024-12-13 05:52:12.380648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.560 [2024-12-13 05:52:12.380681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.560 qpair failed and we were unable to recover it. 00:36:12.560 [2024-12-13 05:52:12.380784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.560 [2024-12-13 05:52:12.380816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.560 qpair failed and we were unable to recover it. 00:36:12.560 [2024-12-13 05:52:12.380923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.560 [2024-12-13 05:52:12.380954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.560 qpair failed and we were unable to recover it. 00:36:12.560 [2024-12-13 05:52:12.381134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.560 [2024-12-13 05:52:12.381166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.560 qpair failed and we were unable to recover it. 00:36:12.560 [2024-12-13 05:52:12.381280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.560 [2024-12-13 05:52:12.381312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.560 qpair failed and we were unable to recover it. 00:36:12.560 [2024-12-13 05:52:12.381432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.560 [2024-12-13 05:52:12.381470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.560 qpair failed and we were unable to recover it. 00:36:12.560 [2024-12-13 05:52:12.381597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.560 [2024-12-13 05:52:12.381629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.560 qpair failed and we were unable to recover it. 00:36:12.560 [2024-12-13 05:52:12.381754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.560 [2024-12-13 05:52:12.381785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.560 qpair failed and we were unable to recover it. 00:36:12.560 [2024-12-13 05:52:12.381887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.560 [2024-12-13 05:52:12.381919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.560 qpair failed and we were unable to recover it. 
00:36:12.560 [2024-12-13 05:52:12.382090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.560 [2024-12-13 05:52:12.382160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.560 qpair failed and we were unable to recover it. 00:36:12.560 [2024-12-13 05:52:12.382296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.560 [2024-12-13 05:52:12.382333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.560 qpair failed and we were unable to recover it. 00:36:12.560 [2024-12-13 05:52:12.382480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.560 [2024-12-13 05:52:12.382516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.560 qpair failed and we were unable to recover it. 00:36:12.560 [2024-12-13 05:52:12.382637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.560 [2024-12-13 05:52:12.382669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.560 qpair failed and we were unable to recover it. 00:36:12.560 [2024-12-13 05:52:12.382795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.560 [2024-12-13 05:52:12.382827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.560 qpair failed and we were unable to recover it. 00:36:12.560 [2024-12-13 05:52:12.383019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.560 [2024-12-13 05:52:12.383052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.560 qpair failed and we were unable to recover it. 00:36:12.560 [2024-12-13 05:52:12.383165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.560 [2024-12-13 05:52:12.383198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.560 qpair failed and we were unable to recover it. 00:36:12.560 [2024-12-13 05:52:12.383308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.560 [2024-12-13 05:52:12.383341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.560 qpair failed and we were unable to recover it. 00:36:12.560 [2024-12-13 05:52:12.383465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.560 [2024-12-13 05:52:12.383498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.560 qpair failed and we were unable to recover it. 00:36:12.560 [2024-12-13 05:52:12.383615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.560 [2024-12-13 05:52:12.383646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.560 qpair failed and we were unable to recover it. 
00:36:12.560 [2024-12-13 05:52:12.383887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.560 [2024-12-13 05:52:12.383919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.560 qpair failed and we were unable to recover it. 00:36:12.560 [2024-12-13 05:52:12.384094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.560 [2024-12-13 05:52:12.384126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.560 qpair failed and we were unable to recover it. 00:36:12.560 [2024-12-13 05:52:12.384230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.560 [2024-12-13 05:52:12.384263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.560 qpair failed and we were unable to recover it. 00:36:12.560 [2024-12-13 05:52:12.384379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.560 [2024-12-13 05:52:12.384435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.560 qpair failed and we were unable to recover it. 00:36:12.560 [2024-12-13 05:52:12.384569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.560 [2024-12-13 05:52:12.384602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.560 qpair failed and we were unable to recover it. 00:36:12.560 [2024-12-13 05:52:12.384727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.560 [2024-12-13 05:52:12.384759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.560 qpair failed and we were unable to recover it. 00:36:12.560 [2024-12-13 05:52:12.384950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.560 [2024-12-13 05:52:12.384985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.560 qpair failed and we were unable to recover it. 00:36:12.560 [2024-12-13 05:52:12.385105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.560 [2024-12-13 05:52:12.385137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.560 qpair failed and we were unable to recover it. 00:36:12.560 [2024-12-13 05:52:12.385342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.560 [2024-12-13 05:52:12.385374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.560 qpair failed and we were unable to recover it. 00:36:12.560 [2024-12-13 05:52:12.385487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.560 [2024-12-13 05:52:12.385523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.560 qpair failed and we were unable to recover it. 
00:36:12.560 [2024-12-13 05:52:12.385641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.560 [2024-12-13 05:52:12.385673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.560 qpair failed and we were unable to recover it. 00:36:12.560 [2024-12-13 05:52:12.385775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.560 [2024-12-13 05:52:12.385807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.560 qpair failed and we were unable to recover it. 00:36:12.560 [2024-12-13 05:52:12.386005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.560 [2024-12-13 05:52:12.386036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.561 qpair failed and we were unable to recover it. 00:36:12.561 [2024-12-13 05:52:12.386152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.561 [2024-12-13 05:52:12.386183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.561 qpair failed and we were unable to recover it. 00:36:12.561 [2024-12-13 05:52:12.386299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.561 [2024-12-13 05:52:12.386330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.561 qpair failed and we were unable to recover it. 00:36:12.561 [2024-12-13 05:52:12.386461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.561 [2024-12-13 05:52:12.386494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.561 qpair failed and we were unable to recover it. 00:36:12.561 [2024-12-13 05:52:12.386620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.561 [2024-12-13 05:52:12.386652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.561 qpair failed and we were unable to recover it. 00:36:12.561 [2024-12-13 05:52:12.386766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.561 [2024-12-13 05:52:12.386798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.561 qpair failed and we were unable to recover it. 00:36:12.561 [2024-12-13 05:52:12.386902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.561 [2024-12-13 05:52:12.386934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.561 qpair failed and we were unable to recover it. 00:36:12.561 [2024-12-13 05:52:12.387038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.561 [2024-12-13 05:52:12.387069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.561 qpair failed and we were unable to recover it. 
00:36:12.561 [2024-12-13 05:52:12.387262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.561 [2024-12-13 05:52:12.387294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.561 qpair failed and we were unable to recover it. 00:36:12.561 [2024-12-13 05:52:12.387402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.561 [2024-12-13 05:52:12.387434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.561 qpair failed and we were unable to recover it. 00:36:12.561 [2024-12-13 05:52:12.387620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.561 [2024-12-13 05:52:12.387652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.561 qpair failed and we were unable to recover it. 00:36:12.561 [2024-12-13 05:52:12.387758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.561 [2024-12-13 05:52:12.387790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.561 qpair failed and we were unable to recover it. 00:36:12.561 [2024-12-13 05:52:12.387972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.561 [2024-12-13 05:52:12.388004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.561 qpair failed and we were unable to recover it. 00:36:12.561 [2024-12-13 05:52:12.388123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.561 [2024-12-13 05:52:12.388154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.561 qpair failed and we were unable to recover it. 00:36:12.561 [2024-12-13 05:52:12.388338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.561 [2024-12-13 05:52:12.388370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.561 qpair failed and we were unable to recover it. 00:36:12.561 [2024-12-13 05:52:12.388476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.561 [2024-12-13 05:52:12.388510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.561 qpair failed and we were unable to recover it. 00:36:12.561 [2024-12-13 05:52:12.388681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.561 [2024-12-13 05:52:12.388712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.561 qpair failed and we were unable to recover it. 00:36:12.561 [2024-12-13 05:52:12.388889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.561 [2024-12-13 05:52:12.388921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.561 qpair failed and we were unable to recover it. 
00:36:12.561 [2024-12-13 05:52:12.389092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.561 [2024-12-13 05:52:12.389168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.561 qpair failed and we were unable to recover it. 00:36:12.561 [2024-12-13 05:52:12.389365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.561 [2024-12-13 05:52:12.389435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.561 qpair failed and we were unable to recover it. 00:36:12.561 [2024-12-13 05:52:12.389570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.561 [2024-12-13 05:52:12.389606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.561 qpair failed and we were unable to recover it. 00:36:12.561 [2024-12-13 05:52:12.389714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.561 [2024-12-13 05:52:12.389746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.561 qpair failed and we were unable to recover it. 00:36:12.561 [2024-12-13 05:52:12.389890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.561 [2024-12-13 05:52:12.389922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.561 qpair failed and we were unable to recover it. 00:36:12.561 [2024-12-13 05:52:12.390027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.561 [2024-12-13 05:52:12.390059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.561 qpair failed and we were unable to recover it. 00:36:12.561 [2024-12-13 05:52:12.390178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.561 [2024-12-13 05:52:12.390210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.561 qpair failed and we were unable to recover it. 00:36:12.561 [2024-12-13 05:52:12.390381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.561 [2024-12-13 05:52:12.390413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.561 qpair failed and we were unable to recover it. 00:36:12.561 [2024-12-13 05:52:12.390551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.561 [2024-12-13 05:52:12.390584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.561 qpair failed and we were unable to recover it. 00:36:12.561 [2024-12-13 05:52:12.390764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.561 [2024-12-13 05:52:12.390796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.561 qpair failed and we were unable to recover it. 
00:36:12.561 [2024-12-13 05:52:12.390911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.561 [2024-12-13 05:52:12.390943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.561 qpair failed and we were unable to recover it. 00:36:12.561 [2024-12-13 05:52:12.391068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.561 [2024-12-13 05:52:12.391099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.561 qpair failed and we were unable to recover it. 00:36:12.561 [2024-12-13 05:52:12.391207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.561 [2024-12-13 05:52:12.391238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.561 qpair failed and we were unable to recover it. 00:36:12.561 [2024-12-13 05:52:12.391353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.561 [2024-12-13 05:52:12.391390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.562 qpair failed and we were unable to recover it. 00:36:12.562 [2024-12-13 05:52:12.391578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.562 [2024-12-13 05:52:12.391611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.562 qpair failed and we were unable to recover it. 00:36:12.562 [2024-12-13 05:52:12.391726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.562 [2024-12-13 05:52:12.391758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.562 qpair failed and we were unable to recover it. 00:36:12.562 [2024-12-13 05:52:12.391859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.562 [2024-12-13 05:52:12.391890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.562 qpair failed and we were unable to recover it. 00:36:12.562 [2024-12-13 05:52:12.392062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.562 [2024-12-13 05:52:12.392093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.562 qpair failed and we were unable to recover it. 00:36:12.562 [2024-12-13 05:52:12.392215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.562 [2024-12-13 05:52:12.392247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.562 qpair failed and we were unable to recover it. 00:36:12.562 [2024-12-13 05:52:12.392425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.562 [2024-12-13 05:52:12.392466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.562 qpair failed and we were unable to recover it. 
00:36:12.562 [2024-12-13 05:52:12.392583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.562 [2024-12-13 05:52:12.392615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.562 qpair failed and we were unable to recover it. 00:36:12.562 [2024-12-13 05:52:12.392803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.562 [2024-12-13 05:52:12.392833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.562 qpair failed and we were unable to recover it. 00:36:12.562 [2024-12-13 05:52:12.393006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.562 [2024-12-13 05:52:12.393038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.562 qpair failed and we were unable to recover it. 00:36:12.562 [2024-12-13 05:52:12.393144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.562 [2024-12-13 05:52:12.393176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.562 qpair failed and we were unable to recover it. 00:36:12.562 [2024-12-13 05:52:12.393283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.562 [2024-12-13 05:52:12.393314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.562 qpair failed and we were unable to recover it. 00:36:12.562 [2024-12-13 05:52:12.393422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.562 [2024-12-13 05:52:12.393468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.562 qpair failed and we were unable to recover it. 00:36:12.562 [2024-12-13 05:52:12.393664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.562 [2024-12-13 05:52:12.393696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.562 qpair failed and we were unable to recover it. 00:36:12.562 [2024-12-13 05:52:12.393828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.562 [2024-12-13 05:52:12.393860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.562 qpair failed and we were unable to recover it. 00:36:12.562 [2024-12-13 05:52:12.394056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.562 [2024-12-13 05:52:12.394087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.562 qpair failed and we were unable to recover it. 00:36:12.562 [2024-12-13 05:52:12.394216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.562 [2024-12-13 05:52:12.394248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.562 qpair failed and we were unable to recover it. 
00:36:12.562 [2024-12-13 05:52:12.394354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.562 [2024-12-13 05:52:12.394386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.562 qpair failed and we were unable to recover it. 00:36:12.562 [2024-12-13 05:52:12.394571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.562 [2024-12-13 05:52:12.394605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.562 qpair failed and we were unable to recover it. 00:36:12.562 [2024-12-13 05:52:12.394773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.562 [2024-12-13 05:52:12.394805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.562 qpair failed and we were unable to recover it. 00:36:12.562 [2024-12-13 05:52:12.394913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.562 [2024-12-13 05:52:12.394945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.562 qpair failed and we were unable to recover it. 00:36:12.562 [2024-12-13 05:52:12.395064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.562 [2024-12-13 05:52:12.395096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.562 qpair failed and we were unable to recover it. 00:36:12.562 [2024-12-13 05:52:12.395211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.562 [2024-12-13 05:52:12.395244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.562 qpair failed and we were unable to recover it. 00:36:12.562 [2024-12-13 05:52:12.395369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.562 [2024-12-13 05:52:12.395400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.562 qpair failed and we were unable to recover it. 00:36:12.562 [2024-12-13 05:52:12.395533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.562 [2024-12-13 05:52:12.395566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.562 qpair failed and we were unable to recover it. 00:36:12.562 [2024-12-13 05:52:12.395678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.562 [2024-12-13 05:52:12.395710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.562 qpair failed and we were unable to recover it. 00:36:12.562 [2024-12-13 05:52:12.395891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.562 [2024-12-13 05:52:12.395922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.562 qpair failed and we were unable to recover it. 
00:36:12.562 [2024-12-13 05:52:12.396073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.562 [2024-12-13 05:52:12.396145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.562 qpair failed and we were unable to recover it. 00:36:12.562 [2024-12-13 05:52:12.396288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.562 [2024-12-13 05:52:12.396330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.562 qpair failed and we were unable to recover it. 00:36:12.562 [2024-12-13 05:52:12.396458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.562 [2024-12-13 05:52:12.396492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.562 qpair failed and we were unable to recover it. 00:36:12.562 [2024-12-13 05:52:12.396731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.562 [2024-12-13 05:52:12.396764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.562 qpair failed and we were unable to recover it. 00:36:12.562 [2024-12-13 05:52:12.396897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.562 [2024-12-13 05:52:12.396929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.562 qpair failed and we were unable to recover it. 00:36:12.562 [2024-12-13 05:52:12.397057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.562 [2024-12-13 05:52:12.397089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.562 qpair failed and we were unable to recover it. 00:36:12.562 [2024-12-13 05:52:12.397204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.562 [2024-12-13 05:52:12.397236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.562 qpair failed and we were unable to recover it. 00:36:12.562 [2024-12-13 05:52:12.397351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.562 [2024-12-13 05:52:12.397383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.562 qpair failed and we were unable to recover it. 00:36:12.563 [2024-12-13 05:52:12.397562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.563 [2024-12-13 05:52:12.397595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.563 qpair failed and we were unable to recover it. 00:36:12.563 [2024-12-13 05:52:12.397777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.563 [2024-12-13 05:52:12.397808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.563 qpair failed and we were unable to recover it. 
00:36:12.563 [2024-12-13 05:52:12.397926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.563 [2024-12-13 05:52:12.397958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.563 qpair failed and we were unable to recover it. 00:36:12.563 [2024-12-13 05:52:12.398124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.563 [2024-12-13 05:52:12.398156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.563 qpair failed and we were unable to recover it. 00:36:12.563 [2024-12-13 05:52:12.398268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.563 [2024-12-13 05:52:12.398300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.563 qpair failed and we were unable to recover it. 00:36:12.563 [2024-12-13 05:52:12.398412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.563 [2024-12-13 05:52:12.398465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.563 qpair failed and we were unable to recover it. 00:36:12.563 [2024-12-13 05:52:12.398599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.563 [2024-12-13 05:52:12.398631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.563 qpair failed and we were unable to recover it. 00:36:12.563 [2024-12-13 05:52:12.398827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.563 [2024-12-13 05:52:12.398858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.563 qpair failed and we were unable to recover it. 00:36:12.563 [2024-12-13 05:52:12.399033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.563 [2024-12-13 05:52:12.399065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.563 qpair failed and we were unable to recover it. 00:36:12.563 [2024-12-13 05:52:12.399183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.563 [2024-12-13 05:52:12.399215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.563 qpair failed and we were unable to recover it. 00:36:12.563 [2024-12-13 05:52:12.399344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.563 [2024-12-13 05:52:12.399374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.563 qpair failed and we were unable to recover it. 00:36:12.563 [2024-12-13 05:52:12.399494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.563 [2024-12-13 05:52:12.399527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.563 qpair failed and we were unable to recover it. 
00:36:12.563 [2024-12-13 05:52:12.399635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.563 [2024-12-13 05:52:12.399667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.563 qpair failed and we were unable to recover it. 00:36:12.563 [2024-12-13 05:52:12.399776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.563 [2024-12-13 05:52:12.399808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.563 qpair failed and we were unable to recover it. 00:36:12.563 [2024-12-13 05:52:12.399937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.563 [2024-12-13 05:52:12.399969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.563 qpair failed and we were unable to recover it. 00:36:12.563 [2024-12-13 05:52:12.400102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.563 [2024-12-13 05:52:12.400134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.563 qpair failed and we were unable to recover it. 00:36:12.563 [2024-12-13 05:52:12.400313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.563 [2024-12-13 05:52:12.400345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.563 qpair failed and we were unable to recover it. 00:36:12.563 [2024-12-13 05:52:12.400473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.563 [2024-12-13 05:52:12.400505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.563 qpair failed and we were unable to recover it. 00:36:12.563 [2024-12-13 05:52:12.400611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.563 [2024-12-13 05:52:12.400644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.563 qpair failed and we were unable to recover it. 00:36:12.563 [2024-12-13 05:52:12.400832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.563 [2024-12-13 05:52:12.400864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.563 qpair failed and we were unable to recover it. 00:36:12.563 [2024-12-13 05:52:12.400965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.563 [2024-12-13 05:52:12.400996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.563 qpair failed and we were unable to recover it. 00:36:12.563 [2024-12-13 05:52:12.401156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.563 [2024-12-13 05:52:12.401189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.563 qpair failed and we were unable to recover it. 
00:36:12.563 [2024-12-13 05:52:12.401370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.563 [2024-12-13 05:52:12.401402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.563 qpair failed and we were unable to recover it. 00:36:12.563 [2024-12-13 05:52:12.401584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.563 [2024-12-13 05:52:12.401616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.563 qpair failed and we were unable to recover it. 00:36:12.563 [2024-12-13 05:52:12.401721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.563 [2024-12-13 05:52:12.401753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.563 qpair failed and we were unable to recover it. 00:36:12.563 [2024-12-13 05:52:12.401854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.563 [2024-12-13 05:52:12.401885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.563 qpair failed and we were unable to recover it. 00:36:12.563 [2024-12-13 05:52:12.402008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.563 [2024-12-13 05:52:12.402039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.563 qpair failed and we were unable to recover it. 00:36:12.563 [2024-12-13 05:52:12.402164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.563 [2024-12-13 05:52:12.402196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.563 qpair failed and we were unable to recover it. 00:36:12.563 [2024-12-13 05:52:12.402315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.563 [2024-12-13 05:52:12.402347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.563 qpair failed and we were unable to recover it. 00:36:12.563 [2024-12-13 05:52:12.402466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.563 [2024-12-13 05:52:12.402497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.563 qpair failed and we were unable to recover it. 00:36:12.563 [2024-12-13 05:52:12.402599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.563 [2024-12-13 05:52:12.402631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.563 qpair failed and we were unable to recover it. 00:36:12.563 [2024-12-13 05:52:12.402811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.563 [2024-12-13 05:52:12.402843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.563 qpair failed and we were unable to recover it. 
00:36:12.563 [2024-12-13 05:52:12.402982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.563 [2024-12-13 05:52:12.403024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.563 qpair failed and we were unable to recover it. 00:36:12.563 [2024-12-13 05:52:12.403174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.563 [2024-12-13 05:52:12.403245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.563 qpair failed and we were unable to recover it. 00:36:12.563 [2024-12-13 05:52:12.403377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.563 [2024-12-13 05:52:12.403412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.563 qpair failed and we were unable to recover it. 00:36:12.563 [2024-12-13 05:52:12.403615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.563 [2024-12-13 05:52:12.403649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.563 qpair failed and we were unable to recover it. 00:36:12.563 [2024-12-13 05:52:12.403759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.564 [2024-12-13 05:52:12.403790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.564 qpair failed and we were unable to recover it. 00:36:12.564 [2024-12-13 05:52:12.403911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.564 [2024-12-13 05:52:12.403943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.564 qpair failed and we were unable to recover it. 00:36:12.564 [2024-12-13 05:52:12.404046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.564 [2024-12-13 05:52:12.404078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.564 qpair failed and we were unable to recover it. 00:36:12.564 [2024-12-13 05:52:12.404235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.564 [2024-12-13 05:52:12.404266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.564 qpair failed and we were unable to recover it. 00:36:12.564 [2024-12-13 05:52:12.404432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.564 [2024-12-13 05:52:12.404474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.564 qpair failed and we were unable to recover it. 00:36:12.564 [2024-12-13 05:52:12.404608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.564 [2024-12-13 05:52:12.404639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.564 qpair failed and we were unable to recover it. 
00:36:12.569 (the same connect()/qpair-error triplet repeats continuously for tqpair=0x7f861c000b90, timestamps 05:52:12.403615 through 05:52:12.444617; every attempt fails with errno = 111 and ends with "qpair failed and we were unable to recover it.")
00:36:12.569 [2024-12-13 05:52:12.444801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.569 [2024-12-13 05:52:12.444839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.569 qpair failed and we were unable to recover it. 00:36:12.569 [2024-12-13 05:52:12.445040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.569 [2024-12-13 05:52:12.445073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.569 qpair failed and we were unable to recover it. 00:36:12.569 [2024-12-13 05:52:12.445224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.569 [2024-12-13 05:52:12.445256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.569 qpair failed and we were unable to recover it. 00:36:12.569 [2024-12-13 05:52:12.445429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.569 [2024-12-13 05:52:12.445472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.569 qpair failed and we were unable to recover it. 00:36:12.569 [2024-12-13 05:52:12.445663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.569 [2024-12-13 05:52:12.445696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.569 qpair failed and we were unable to recover it. 00:36:12.569 [2024-12-13 05:52:12.445904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.569 [2024-12-13 05:52:12.445936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.569 qpair failed and we were unable to recover it. 00:36:12.569 [2024-12-13 05:52:12.446241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.569 [2024-12-13 05:52:12.446273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.569 qpair failed and we were unable to recover it. 00:36:12.569 [2024-12-13 05:52:12.446398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.569 [2024-12-13 05:52:12.446430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.569 qpair failed and we were unable to recover it. 00:36:12.569 [2024-12-13 05:52:12.446630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.569 [2024-12-13 05:52:12.446664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.569 qpair failed and we were unable to recover it. 00:36:12.569 [2024-12-13 05:52:12.446870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.569 [2024-12-13 05:52:12.446902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.569 qpair failed and we were unable to recover it. 
00:36:12.569 [2024-12-13 05:52:12.447039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.569 [2024-12-13 05:52:12.447071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.569 qpair failed and we were unable to recover it. 00:36:12.569 [2024-12-13 05:52:12.447265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.569 [2024-12-13 05:52:12.447297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.569 qpair failed and we were unable to recover it. 00:36:12.569 [2024-12-13 05:52:12.447543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.569 [2024-12-13 05:52:12.447577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.569 qpair failed and we were unable to recover it. 00:36:12.569 [2024-12-13 05:52:12.447727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.569 [2024-12-13 05:52:12.447759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.569 qpair failed and we were unable to recover it. 00:36:12.569 [2024-12-13 05:52:12.447879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.569 [2024-12-13 05:52:12.447911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.569 qpair failed and we were unable to recover it. 00:36:12.569 [2024-12-13 05:52:12.448208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.569 [2024-12-13 05:52:12.448243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.569 qpair failed and we were unable to recover it. 00:36:12.569 [2024-12-13 05:52:12.448540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.569 [2024-12-13 05:52:12.448573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.569 qpair failed and we were unable to recover it. 00:36:12.569 [2024-12-13 05:52:12.448780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.569 [2024-12-13 05:52:12.448814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.569 qpair failed and we were unable to recover it. 00:36:12.569 [2024-12-13 05:52:12.448937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.569 [2024-12-13 05:52:12.448969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.569 qpair failed and we were unable to recover it. 00:36:12.569 [2024-12-13 05:52:12.449108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.569 [2024-12-13 05:52:12.449141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.569 qpair failed and we were unable to recover it. 
00:36:12.569 [2024-12-13 05:52:12.449347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.569 [2024-12-13 05:52:12.449380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.569 qpair failed and we were unable to recover it. 00:36:12.569 [2024-12-13 05:52:12.449505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.569 [2024-12-13 05:52:12.449539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.569 qpair failed and we were unable to recover it. 00:36:12.569 [2024-12-13 05:52:12.449787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.569 [2024-12-13 05:52:12.449820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.569 qpair failed and we were unable to recover it. 00:36:12.569 [2024-12-13 05:52:12.450015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.569 [2024-12-13 05:52:12.450047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.569 qpair failed and we were unable to recover it. 00:36:12.569 [2024-12-13 05:52:12.450163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.569 [2024-12-13 05:52:12.450195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.569 qpair failed and we were unable to recover it. 00:36:12.569 [2024-12-13 05:52:12.450464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.569 [2024-12-13 05:52:12.450497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.569 qpair failed and we were unable to recover it. 00:36:12.569 [2024-12-13 05:52:12.450685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.569 [2024-12-13 05:52:12.450717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.569 qpair failed and we were unable to recover it. 00:36:12.569 [2024-12-13 05:52:12.450912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.570 [2024-12-13 05:52:12.450945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.570 qpair failed and we were unable to recover it. 00:36:12.570 [2024-12-13 05:52:12.451267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.570 [2024-12-13 05:52:12.451299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.570 qpair failed and we were unable to recover it. 00:36:12.570 [2024-12-13 05:52:12.451562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.570 [2024-12-13 05:52:12.451596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.570 qpair failed and we were unable to recover it. 
00:36:12.570 [2024-12-13 05:52:12.451792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.570 [2024-12-13 05:52:12.451824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.570 qpair failed and we were unable to recover it. 00:36:12.570 [2024-12-13 05:52:12.452012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.570 [2024-12-13 05:52:12.452045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.570 qpair failed and we were unable to recover it. 00:36:12.570 [2024-12-13 05:52:12.452234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.570 [2024-12-13 05:52:12.452267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.570 qpair failed and we were unable to recover it. 00:36:12.570 [2024-12-13 05:52:12.452459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.570 [2024-12-13 05:52:12.452491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.570 qpair failed and we were unable to recover it. 00:36:12.570 [2024-12-13 05:52:12.452691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.570 [2024-12-13 05:52:12.452724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.570 qpair failed and we were unable to recover it. 00:36:12.570 [2024-12-13 05:52:12.452911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.570 [2024-12-13 05:52:12.452944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.570 qpair failed and we were unable to recover it. 00:36:12.570 [2024-12-13 05:52:12.453217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.570 [2024-12-13 05:52:12.453249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.570 qpair failed and we were unable to recover it. 00:36:12.570 [2024-12-13 05:52:12.453461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.570 [2024-12-13 05:52:12.453494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.570 qpair failed and we were unable to recover it. 00:36:12.570 [2024-12-13 05:52:12.453760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.570 [2024-12-13 05:52:12.453792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.570 qpair failed and we were unable to recover it. 00:36:12.570 [2024-12-13 05:52:12.453914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.570 [2024-12-13 05:52:12.453946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.570 qpair failed and we were unable to recover it. 
00:36:12.570 [2024-12-13 05:52:12.454229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.570 [2024-12-13 05:52:12.454268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.570 qpair failed and we were unable to recover it. 00:36:12.570 [2024-12-13 05:52:12.454511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.570 [2024-12-13 05:52:12.454545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.570 qpair failed and we were unable to recover it. 00:36:12.570 [2024-12-13 05:52:12.454679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.570 [2024-12-13 05:52:12.454711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.570 qpair failed and we were unable to recover it. 00:36:12.570 [2024-12-13 05:52:12.454903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.570 [2024-12-13 05:52:12.454935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.570 qpair failed and we were unable to recover it. 00:36:12.570 [2024-12-13 05:52:12.455137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.570 [2024-12-13 05:52:12.455170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.570 qpair failed and we were unable to recover it. 00:36:12.570 [2024-12-13 05:52:12.455427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.570 [2024-12-13 05:52:12.455470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.570 qpair failed and we were unable to recover it. 00:36:12.570 [2024-12-13 05:52:12.455648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.570 [2024-12-13 05:52:12.455680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.570 qpair failed and we were unable to recover it. 00:36:12.570 [2024-12-13 05:52:12.455874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.570 [2024-12-13 05:52:12.455907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.570 qpair failed and we were unable to recover it. 00:36:12.570 [2024-12-13 05:52:12.456101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.570 [2024-12-13 05:52:12.456134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.570 qpair failed and we were unable to recover it. 00:36:12.570 [2024-12-13 05:52:12.456399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.570 [2024-12-13 05:52:12.456431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.570 qpair failed and we were unable to recover it. 
00:36:12.570 [2024-12-13 05:52:12.456592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.570 [2024-12-13 05:52:12.456625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.570 qpair failed and we were unable to recover it. 00:36:12.570 [2024-12-13 05:52:12.456824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.570 [2024-12-13 05:52:12.456857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.570 qpair failed and we were unable to recover it. 00:36:12.570 [2024-12-13 05:52:12.456989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.570 [2024-12-13 05:52:12.457021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.570 qpair failed and we were unable to recover it. 00:36:12.570 [2024-12-13 05:52:12.457274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.570 [2024-12-13 05:52:12.457306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.570 qpair failed and we were unable to recover it. 00:36:12.570 [2024-12-13 05:52:12.457594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.570 [2024-12-13 05:52:12.457628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.570 qpair failed and we were unable to recover it. 00:36:12.570 [2024-12-13 05:52:12.457818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.570 [2024-12-13 05:52:12.457850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.570 qpair failed and we were unable to recover it. 00:36:12.570 [2024-12-13 05:52:12.458111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.570 [2024-12-13 05:52:12.458144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.570 qpair failed and we were unable to recover it. 00:36:12.570 [2024-12-13 05:52:12.458384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.570 [2024-12-13 05:52:12.458417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.570 qpair failed and we were unable to recover it. 00:36:12.570 [2024-12-13 05:52:12.458550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.570 [2024-12-13 05:52:12.458583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.570 qpair failed and we were unable to recover it. 00:36:12.570 [2024-12-13 05:52:12.458753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.570 [2024-12-13 05:52:12.458785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.570 qpair failed and we were unable to recover it. 
00:36:12.570 [2024-12-13 05:52:12.459049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.570 [2024-12-13 05:52:12.459081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.570 qpair failed and we were unable to recover it. 00:36:12.570 [2024-12-13 05:52:12.459210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.570 [2024-12-13 05:52:12.459243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.570 qpair failed and we were unable to recover it. 00:36:12.571 [2024-12-13 05:52:12.459420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.571 [2024-12-13 05:52:12.459460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.571 qpair failed and we were unable to recover it. 00:36:12.571 [2024-12-13 05:52:12.459696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.571 [2024-12-13 05:52:12.459729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.571 qpair failed and we were unable to recover it. 00:36:12.571 [2024-12-13 05:52:12.459868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.571 [2024-12-13 05:52:12.459901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.571 qpair failed and we were unable to recover it. 00:36:12.571 [2024-12-13 05:52:12.460197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.571 [2024-12-13 05:52:12.460230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.571 qpair failed and we were unable to recover it. 00:36:12.571 [2024-12-13 05:52:12.460408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.571 [2024-12-13 05:52:12.460440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.571 qpair failed and we were unable to recover it. 00:36:12.571 [2024-12-13 05:52:12.460649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.571 [2024-12-13 05:52:12.460682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.571 qpair failed and we were unable to recover it. 00:36:12.571 [2024-12-13 05:52:12.460888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.571 [2024-12-13 05:52:12.460920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.571 qpair failed and we were unable to recover it. 00:36:12.571 [2024-12-13 05:52:12.461035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.571 [2024-12-13 05:52:12.461067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.571 qpair failed and we were unable to recover it. 
00:36:12.571 [2024-12-13 05:52:12.461339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.571 [2024-12-13 05:52:12.461371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.571 qpair failed and we were unable to recover it. 00:36:12.571 [2024-12-13 05:52:12.461569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.571 [2024-12-13 05:52:12.461603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.571 qpair failed and we were unable to recover it. 00:36:12.571 [2024-12-13 05:52:12.461780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.571 [2024-12-13 05:52:12.461812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.571 qpair failed and we were unable to recover it. 00:36:12.571 [2024-12-13 05:52:12.461983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.571 [2024-12-13 05:52:12.462015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.571 qpair failed and we were unable to recover it. 00:36:12.571 [2024-12-13 05:52:12.462234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.571 [2024-12-13 05:52:12.462266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.571 qpair failed and we were unable to recover it. 00:36:12.571 [2024-12-13 05:52:12.462523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.571 [2024-12-13 05:52:12.462557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.571 qpair failed and we were unable to recover it. 00:36:12.571 [2024-12-13 05:52:12.462673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.571 [2024-12-13 05:52:12.462705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.571 qpair failed and we were unable to recover it. 00:36:12.571 [2024-12-13 05:52:12.462900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.571 [2024-12-13 05:52:12.462932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.571 qpair failed and we were unable to recover it. 00:36:12.571 [2024-12-13 05:52:12.463115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.571 [2024-12-13 05:52:12.463148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.571 qpair failed and we were unable to recover it. 00:36:12.571 [2024-12-13 05:52:12.463323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.571 [2024-12-13 05:52:12.463356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.571 qpair failed and we were unable to recover it. 
00:36:12.571 [2024-12-13 05:52:12.463483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.571 [2024-12-13 05:52:12.463520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.571 qpair failed and we were unable to recover it. 00:36:12.571 [2024-12-13 05:52:12.463645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.571 [2024-12-13 05:52:12.463678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.571 qpair failed and we were unable to recover it. 00:36:12.571 [2024-12-13 05:52:12.463860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.571 [2024-12-13 05:52:12.463892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.571 qpair failed and we were unable to recover it. 00:36:12.571 [2024-12-13 05:52:12.464110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.571 [2024-12-13 05:52:12.464142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.571 qpair failed and we were unable to recover it. 00:36:12.571 [2024-12-13 05:52:12.464391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.571 [2024-12-13 05:52:12.464424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.571 qpair failed and we were unable to recover it. 00:36:12.571 [2024-12-13 05:52:12.464667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.571 [2024-12-13 05:52:12.464701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.571 qpair failed and we were unable to recover it. 00:36:12.571 [2024-12-13 05:52:12.464895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.571 [2024-12-13 05:52:12.464928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.571 qpair failed and we were unable to recover it. 00:36:12.571 [2024-12-13 05:52:12.465108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.571 [2024-12-13 05:52:12.465139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.571 qpair failed and we were unable to recover it. 00:36:12.571 [2024-12-13 05:52:12.465402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.571 [2024-12-13 05:52:12.465435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.571 qpair failed and we were unable to recover it. 00:36:12.571 [2024-12-13 05:52:12.465626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.571 [2024-12-13 05:52:12.465660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.571 qpair failed and we were unable to recover it. 
00:36:12.571 [2024-12-13 05:52:12.465863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.571 [2024-12-13 05:52:12.465896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.571 qpair failed and we were unable to recover it. 00:36:12.571 [2024-12-13 05:52:12.466103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.571 [2024-12-13 05:52:12.466135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.571 qpair failed and we were unable to recover it. 00:36:12.571 [2024-12-13 05:52:12.466316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.571 [2024-12-13 05:52:12.466350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.571 qpair failed and we were unable to recover it. 00:36:12.571 [2024-12-13 05:52:12.466671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.571 [2024-12-13 05:52:12.466705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.571 qpair failed and we were unable to recover it. 00:36:12.571 [2024-12-13 05:52:12.466955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.571 [2024-12-13 05:52:12.466987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.571 qpair failed and we were unable to recover it. 00:36:12.571 [2024-12-13 05:52:12.467195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.571 [2024-12-13 05:52:12.467228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.571 qpair failed and we were unable to recover it. 00:36:12.571 [2024-12-13 05:52:12.467501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.571 [2024-12-13 05:52:12.467536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.571 qpair failed and we were unable to recover it. 00:36:12.572 [2024-12-13 05:52:12.467664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.572 [2024-12-13 05:52:12.467696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.572 qpair failed and we were unable to recover it. 00:36:12.572 [2024-12-13 05:52:12.467817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.572 [2024-12-13 05:52:12.467849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.572 qpair failed and we were unable to recover it. 00:36:12.572 [2024-12-13 05:52:12.468030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.572 [2024-12-13 05:52:12.468064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.572 qpair failed and we were unable to recover it. 
00:36:12.572 [2024-12-13 05:52:12.468181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.572 [2024-12-13 05:52:12.468213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.572 qpair failed and we were unable to recover it. 00:36:12.572 [2024-12-13 05:52:12.468482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.572 [2024-12-13 05:52:12.468516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.572 qpair failed and we were unable to recover it. 00:36:12.572 [2024-12-13 05:52:12.468711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.572 [2024-12-13 05:52:12.468744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.572 qpair failed and we were unable to recover it. 00:36:12.572 [2024-12-13 05:52:12.468882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.572 [2024-12-13 05:52:12.468915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.572 qpair failed and we were unable to recover it. 00:36:12.572 [2024-12-13 05:52:12.469190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.572 [2024-12-13 05:52:12.469223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.572 qpair failed and we were unable to recover it. 00:36:12.572 [2024-12-13 05:52:12.469414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.572 [2024-12-13 05:52:12.469456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.572 qpair failed and we were unable to recover it. 00:36:12.572 [2024-12-13 05:52:12.469603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.572 [2024-12-13 05:52:12.469636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.572 qpair failed and we were unable to recover it. 00:36:12.572 [2024-12-13 05:52:12.469848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.572 [2024-12-13 05:52:12.469882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.572 qpair failed and we were unable to recover it. 00:36:12.572 [2024-12-13 05:52:12.470019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.572 [2024-12-13 05:52:12.470051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.572 qpair failed and we were unable to recover it. 00:36:12.572 [2024-12-13 05:52:12.470247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.572 [2024-12-13 05:52:12.470279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.572 qpair failed and we were unable to recover it. 
00:36:12.572 [2024-12-13 05:52:12.470523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.572 [2024-12-13 05:52:12.470557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.572 qpair failed and we were unable to recover it. 00:36:12.572 [2024-12-13 05:52:12.470683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.572 [2024-12-13 05:52:12.470715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.572 qpair failed and we were unable to recover it. 00:36:12.572 [2024-12-13 05:52:12.470904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.572 [2024-12-13 05:52:12.470936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.572 qpair failed and we were unable to recover it. 00:36:12.572 [2024-12-13 05:52:12.471069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.572 [2024-12-13 05:52:12.471103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.572 qpair failed and we were unable to recover it. 00:36:12.572 [2024-12-13 05:52:12.471229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.572 [2024-12-13 05:52:12.471261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.572 qpair failed and we were unable to recover it. 00:36:12.572 [2024-12-13 05:52:12.471441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.572 [2024-12-13 05:52:12.471483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.572 qpair failed and we were unable to recover it. 00:36:12.572 [2024-12-13 05:52:12.471607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.572 [2024-12-13 05:52:12.471640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.572 qpair failed and we were unable to recover it. 00:36:12.572 [2024-12-13 05:52:12.471762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.572 [2024-12-13 05:52:12.471794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.572 qpair failed and we were unable to recover it. 00:36:12.572 [2024-12-13 05:52:12.471984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.572 [2024-12-13 05:52:12.472016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.572 qpair failed and we were unable to recover it. 00:36:12.572 [2024-12-13 05:52:12.472201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.572 [2024-12-13 05:52:12.472234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.572 qpair failed and we were unable to recover it. 
00:36:12.572 [2024-12-13 05:52:12.472371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.572 [2024-12-13 05:52:12.472415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.572 qpair failed and we were unable to recover it. 00:36:12.572 [2024-12-13 05:52:12.472572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.572 [2024-12-13 05:52:12.472606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.572 qpair failed and we were unable to recover it. 00:36:12.572 [2024-12-13 05:52:12.472719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.572 [2024-12-13 05:52:12.472751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.572 qpair failed and we were unable to recover it. 00:36:12.572 [2024-12-13 05:52:12.472876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.572 [2024-12-13 05:52:12.472908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.572 qpair failed and we were unable to recover it. 00:36:12.572 [2024-12-13 05:52:12.473027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.572 [2024-12-13 05:52:12.473059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.572 qpair failed and we were unable to recover it. 00:36:12.572 [2024-12-13 05:52:12.473258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.572 [2024-12-13 05:52:12.473290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.572 qpair failed and we were unable to recover it. 00:36:12.572 [2024-12-13 05:52:12.473479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.572 [2024-12-13 05:52:12.473513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.572 qpair failed and we were unable to recover it. 00:36:12.572 [2024-12-13 05:52:12.473783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.572 [2024-12-13 05:52:12.473816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.572 qpair failed and we were unable to recover it. 00:36:12.572 [2024-12-13 05:52:12.473940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.572 [2024-12-13 05:52:12.473973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.572 qpair failed and we were unable to recover it. 00:36:12.572 [2024-12-13 05:52:12.474187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.572 [2024-12-13 05:52:12.474220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.572 qpair failed and we were unable to recover it. 
00:36:12.572 [2024-12-13 05:52:12.474329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.572 [2024-12-13 05:52:12.474362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.572 qpair failed and we were unable to recover it. 00:36:12.572 [2024-12-13 05:52:12.474490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.572 [2024-12-13 05:52:12.474524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.572 qpair failed and we were unable to recover it. 00:36:12.572 [2024-12-13 05:52:12.474639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.572 [2024-12-13 05:52:12.474672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.572 qpair failed and we were unable to recover it. 00:36:12.572 [2024-12-13 05:52:12.474872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.572 [2024-12-13 05:52:12.474905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.572 qpair failed and we were unable to recover it. 00:36:12.572 [2024-12-13 05:52:12.475037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.572 [2024-12-13 05:52:12.475070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.572 qpair failed and we were unable to recover it. 00:36:12.572 [2024-12-13 05:52:12.475254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.572 [2024-12-13 05:52:12.475286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.572 qpair failed and we were unable to recover it. 00:36:12.573 [2024-12-13 05:52:12.475399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.573 [2024-12-13 05:52:12.475431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.573 qpair failed and we were unable to recover it. 00:36:12.573 [2024-12-13 05:52:12.475561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.573 [2024-12-13 05:52:12.475594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.573 qpair failed and we were unable to recover it. 00:36:12.573 [2024-12-13 05:52:12.475706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.573 [2024-12-13 05:52:12.475738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.573 qpair failed and we were unable to recover it. 00:36:12.573 [2024-12-13 05:52:12.475865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.573 [2024-12-13 05:52:12.475897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.573 qpair failed and we were unable to recover it. 
00:36:12.573 [2024-12-13 05:52:12.476080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.573 [2024-12-13 05:52:12.476112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.573 qpair failed and we were unable to recover it. 00:36:12.573 [2024-12-13 05:52:12.476216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.573 [2024-12-13 05:52:12.476248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.573 qpair failed and we were unable to recover it. 00:36:12.573 [2024-12-13 05:52:12.476374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.573 [2024-12-13 05:52:12.476406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.573 qpair failed and we were unable to recover it. 00:36:12.573 [2024-12-13 05:52:12.476527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.573 [2024-12-13 05:52:12.476561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.573 qpair failed and we were unable to recover it. 00:36:12.573 [2024-12-13 05:52:12.476746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.573 [2024-12-13 05:52:12.476779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.573 qpair failed and we were unable to recover it. 00:36:12.573 [2024-12-13 05:52:12.476955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.573 [2024-12-13 05:52:12.476987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.573 qpair failed and we were unable to recover it. 00:36:12.573 [2024-12-13 05:52:12.477093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.573 [2024-12-13 05:52:12.477126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f861c000b90 with addr=10.0.0.2, port=4420 00:36:12.573 qpair failed and we were unable to recover it. 00:36:12.573 [2024-12-13 05:52:12.477181] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15d55e0 (9): Bad file descriptor 00:36:12.573 [2024-12-13 05:52:12.477552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.573 [2024-12-13 05:52:12.477624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.573 qpair failed and we were unable to recover it. 00:36:12.573 [2024-12-13 05:52:12.477768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.573 [2024-12-13 05:52:12.477803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.573 qpair failed and we were unable to recover it. 
[... the identical triplet for tqpair=0x7f8624000b90 repeats for the remainder of this span, roughly 190 more occurrences from 05:52:12.477768 through 05:52:12.515686, each ending with "qpair failed and we were unable to recover it."; no other messages appear ...]
00:36:12.578 [2024-12-13 05:52:12.515947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.578 [2024-12-13 05:52:12.515979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.578 qpair failed and we were unable to recover it. 00:36:12.578 [2024-12-13 05:52:12.516261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.578 [2024-12-13 05:52:12.516293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.578 qpair failed and we were unable to recover it. 00:36:12.578 [2024-12-13 05:52:12.516489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.578 [2024-12-13 05:52:12.516522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.578 qpair failed and we were unable to recover it. 00:36:12.578 [2024-12-13 05:52:12.516775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.578 [2024-12-13 05:52:12.516806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.578 qpair failed and we were unable to recover it. 00:36:12.578 [2024-12-13 05:52:12.516989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.578 [2024-12-13 05:52:12.517021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.578 qpair failed and we were unable to recover it. 00:36:12.578 [2024-12-13 05:52:12.517279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.578 [2024-12-13 05:52:12.517310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.578 qpair failed and we were unable to recover it. 00:36:12.578 [2024-12-13 05:52:12.517484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.578 [2024-12-13 05:52:12.517518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.578 qpair failed and we were unable to recover it. 00:36:12.578 [2024-12-13 05:52:12.517785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.578 [2024-12-13 05:52:12.517816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.578 qpair failed and we were unable to recover it. 00:36:12.578 [2024-12-13 05:52:12.518060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.578 [2024-12-13 05:52:12.518092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.578 qpair failed and we were unable to recover it. 00:36:12.578 [2024-12-13 05:52:12.518354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.578 [2024-12-13 05:52:12.518386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.578 qpair failed and we were unable to recover it. 
00:36:12.578 [2024-12-13 05:52:12.518600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.578 [2024-12-13 05:52:12.518633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.578 qpair failed and we were unable to recover it. 00:36:12.578 [2024-12-13 05:52:12.518894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.578 [2024-12-13 05:52:12.518926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.578 qpair failed and we were unable to recover it. 00:36:12.578 [2024-12-13 05:52:12.519066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.578 [2024-12-13 05:52:12.519098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.578 qpair failed and we were unable to recover it. 00:36:12.578 [2024-12-13 05:52:12.519356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.578 [2024-12-13 05:52:12.519387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.578 qpair failed and we were unable to recover it. 00:36:12.578 [2024-12-13 05:52:12.519684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.578 [2024-12-13 05:52:12.519717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.578 qpair failed and we were unable to recover it. 00:36:12.578 [2024-12-13 05:52:12.519965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.578 [2024-12-13 05:52:12.519996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.578 qpair failed and we were unable to recover it. 00:36:12.578 [2024-12-13 05:52:12.520295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.578 [2024-12-13 05:52:12.520326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.578 qpair failed and we were unable to recover it. 00:36:12.578 [2024-12-13 05:52:12.520592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.578 [2024-12-13 05:52:12.520625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.578 qpair failed and we were unable to recover it. 00:36:12.578 [2024-12-13 05:52:12.520869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.578 [2024-12-13 05:52:12.520900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.578 qpair failed and we were unable to recover it. 00:36:12.578 [2024-12-13 05:52:12.521147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.578 [2024-12-13 05:52:12.521180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.578 qpair failed and we were unable to recover it. 
00:36:12.578 [2024-12-13 05:52:12.521430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.578 [2024-12-13 05:52:12.521471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.578 qpair failed and we were unable to recover it. 00:36:12.578 [2024-12-13 05:52:12.521767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.579 [2024-12-13 05:52:12.521799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.579 qpair failed and we were unable to recover it. 00:36:12.579 [2024-12-13 05:52:12.522096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.579 [2024-12-13 05:52:12.522128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.579 qpair failed and we were unable to recover it. 00:36:12.579 [2024-12-13 05:52:12.522409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.579 [2024-12-13 05:52:12.522442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.579 qpair failed and we were unable to recover it. 00:36:12.579 [2024-12-13 05:52:12.522764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.579 [2024-12-13 05:52:12.522799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.579 qpair failed and we were unable to recover it. 00:36:12.579 [2024-12-13 05:52:12.522985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.579 [2024-12-13 05:52:12.523017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.579 qpair failed and we were unable to recover it. 00:36:12.858 [2024-12-13 05:52:12.523187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.858 [2024-12-13 05:52:12.523219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.858 qpair failed and we were unable to recover it. 00:36:12.858 [2024-12-13 05:52:12.523410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.858 [2024-12-13 05:52:12.523442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.858 qpair failed and we were unable to recover it. 00:36:12.858 [2024-12-13 05:52:12.523709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.858 [2024-12-13 05:52:12.523742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.858 qpair failed and we were unable to recover it. 00:36:12.858 [2024-12-13 05:52:12.524015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.858 [2024-12-13 05:52:12.524047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.858 qpair failed and we were unable to recover it. 
00:36:12.858 [2024-12-13 05:52:12.524312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.858 [2024-12-13 05:52:12.524343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.858 qpair failed and we were unable to recover it. 00:36:12.858 [2024-12-13 05:52:12.524611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.859 [2024-12-13 05:52:12.524644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.859 qpair failed and we were unable to recover it. 00:36:12.859 [2024-12-13 05:52:12.524908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.859 [2024-12-13 05:52:12.524939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.859 qpair failed and we were unable to recover it. 00:36:12.859 [2024-12-13 05:52:12.525227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.859 [2024-12-13 05:52:12.525259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.859 qpair failed and we were unable to recover it. 00:36:12.859 [2024-12-13 05:52:12.525504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.859 [2024-12-13 05:52:12.525543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.859 qpair failed and we were unable to recover it. 00:36:12.859 [2024-12-13 05:52:12.525739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.859 [2024-12-13 05:52:12.525770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.859 qpair failed and we were unable to recover it. 00:36:12.859 [2024-12-13 05:52:12.525953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.859 [2024-12-13 05:52:12.525985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.859 qpair failed and we were unable to recover it. 00:36:12.859 [2024-12-13 05:52:12.526246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.859 [2024-12-13 05:52:12.526277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.859 qpair failed and we were unable to recover it. 00:36:12.859 [2024-12-13 05:52:12.526569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.859 [2024-12-13 05:52:12.526602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.859 qpair failed and we were unable to recover it. 00:36:12.859 [2024-12-13 05:52:12.526791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.859 [2024-12-13 05:52:12.526822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.859 qpair failed and we were unable to recover it. 
00:36:12.859 [2024-12-13 05:52:12.527108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.859 [2024-12-13 05:52:12.527139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.859 qpair failed and we were unable to recover it. 00:36:12.859 [2024-12-13 05:52:12.527411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.859 [2024-12-13 05:52:12.527442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.859 qpair failed and we were unable to recover it. 00:36:12.859 [2024-12-13 05:52:12.527736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.859 [2024-12-13 05:52:12.527768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.859 qpair failed and we were unable to recover it. 00:36:12.859 [2024-12-13 05:52:12.528005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.859 [2024-12-13 05:52:12.528037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.859 qpair failed and we were unable to recover it. 00:36:12.859 [2024-12-13 05:52:12.528275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.859 [2024-12-13 05:52:12.528307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.859 qpair failed and we were unable to recover it. 00:36:12.859 [2024-12-13 05:52:12.528550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.859 [2024-12-13 05:52:12.528584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.859 qpair failed and we were unable to recover it. 00:36:12.859 [2024-12-13 05:52:12.528771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.859 [2024-12-13 05:52:12.528803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.859 qpair failed and we were unable to recover it. 00:36:12.859 [2024-12-13 05:52:12.528999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.859 [2024-12-13 05:52:12.529031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.859 qpair failed and we were unable to recover it. 00:36:12.859 [2024-12-13 05:52:12.529224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.859 [2024-12-13 05:52:12.529256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.859 qpair failed and we were unable to recover it. 00:36:12.859 [2024-12-13 05:52:12.529520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.859 [2024-12-13 05:52:12.529553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.859 qpair failed and we were unable to recover it. 
00:36:12.859 [2024-12-13 05:52:12.529843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.859 [2024-12-13 05:52:12.529875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.859 qpair failed and we were unable to recover it. 00:36:12.859 [2024-12-13 05:52:12.529992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.859 [2024-12-13 05:52:12.530023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.859 qpair failed and we were unable to recover it. 00:36:12.859 [2024-12-13 05:52:12.530196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.859 [2024-12-13 05:52:12.530227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.859 qpair failed and we were unable to recover it. 00:36:12.859 [2024-12-13 05:52:12.530496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.859 [2024-12-13 05:52:12.530530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.859 qpair failed and we were unable to recover it. 00:36:12.859 [2024-12-13 05:52:12.530799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.859 [2024-12-13 05:52:12.530830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.859 qpair failed and we were unable to recover it. 00:36:12.859 [2024-12-13 05:52:12.531122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.859 [2024-12-13 05:52:12.531153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.859 qpair failed and we were unable to recover it. 00:36:12.859 [2024-12-13 05:52:12.531393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.859 [2024-12-13 05:52:12.531424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.859 qpair failed and we were unable to recover it. 00:36:12.859 [2024-12-13 05:52:12.531552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.859 [2024-12-13 05:52:12.531583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.859 qpair failed and we were unable to recover it. 00:36:12.859 [2024-12-13 05:52:12.531844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.859 [2024-12-13 05:52:12.531875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.859 qpair failed and we were unable to recover it. 00:36:12.859 [2024-12-13 05:52:12.532140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.859 [2024-12-13 05:52:12.532172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.859 qpair failed and we were unable to recover it. 
00:36:12.859 [2024-12-13 05:52:12.532359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.859 [2024-12-13 05:52:12.532390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.859 qpair failed and we were unable to recover it. 00:36:12.859 [2024-12-13 05:52:12.532656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.859 [2024-12-13 05:52:12.532691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.859 qpair failed and we were unable to recover it. 00:36:12.859 [2024-12-13 05:52:12.532882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.859 [2024-12-13 05:52:12.532913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.859 qpair failed and we were unable to recover it. 00:36:12.859 [2024-12-13 05:52:12.533156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.859 [2024-12-13 05:52:12.533188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.859 qpair failed and we were unable to recover it. 00:36:12.859 [2024-12-13 05:52:12.533477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.859 [2024-12-13 05:52:12.533510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.859 qpair failed and we were unable to recover it. 00:36:12.859 [2024-12-13 05:52:12.533645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.859 [2024-12-13 05:52:12.533676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.859 qpair failed and we were unable to recover it. 00:36:12.859 [2024-12-13 05:52:12.533846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.859 [2024-12-13 05:52:12.533877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.859 qpair failed and we were unable to recover it. 00:36:12.859 [2024-12-13 05:52:12.534140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.859 [2024-12-13 05:52:12.534170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.859 qpair failed and we were unable to recover it. 00:36:12.859 [2024-12-13 05:52:12.534417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.859 [2024-12-13 05:52:12.534467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.859 qpair failed and we were unable to recover it. 00:36:12.859 [2024-12-13 05:52:12.534593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.859 [2024-12-13 05:52:12.534624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.860 qpair failed and we were unable to recover it. 
00:36:12.860 [2024-12-13 05:52:12.534740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.860 [2024-12-13 05:52:12.534771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.860 qpair failed and we were unable to recover it. 00:36:12.860 [2024-12-13 05:52:12.535032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.860 [2024-12-13 05:52:12.535063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.860 qpair failed and we were unable to recover it. 00:36:12.860 [2024-12-13 05:52:12.535201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.860 [2024-12-13 05:52:12.535232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.860 qpair failed and we were unable to recover it. 00:36:12.860 [2024-12-13 05:52:12.535438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.860 [2024-12-13 05:52:12.535481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.860 qpair failed and we were unable to recover it. 00:36:12.860 [2024-12-13 05:52:12.535723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.860 [2024-12-13 05:52:12.535761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.860 qpair failed and we were unable to recover it. 00:36:12.860 [2024-12-13 05:52:12.535950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.860 [2024-12-13 05:52:12.535981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.860 qpair failed and we were unable to recover it. 00:36:12.860 [2024-12-13 05:52:12.536261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.860 [2024-12-13 05:52:12.536294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.860 qpair failed and we were unable to recover it. 00:36:12.860 [2024-12-13 05:52:12.536489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.860 [2024-12-13 05:52:12.536522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.860 qpair failed and we were unable to recover it. 00:36:12.860 [2024-12-13 05:52:12.536785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.860 [2024-12-13 05:52:12.536816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.860 qpair failed and we were unable to recover it. 00:36:12.860 [2024-12-13 05:52:12.537102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.860 [2024-12-13 05:52:12.537133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.860 qpair failed and we were unable to recover it. 
00:36:12.860 [2024-12-13 05:52:12.537414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.860 [2024-12-13 05:52:12.537446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.860 qpair failed and we were unable to recover it. 00:36:12.860 [2024-12-13 05:52:12.537654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.860 [2024-12-13 05:52:12.537686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.860 qpair failed and we were unable to recover it. 00:36:12.860 [2024-12-13 05:52:12.537916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.860 [2024-12-13 05:52:12.537947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.860 qpair failed and we were unable to recover it. 00:36:12.860 [2024-12-13 05:52:12.538196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.860 [2024-12-13 05:52:12.538228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.860 qpair failed and we were unable to recover it. 00:36:12.860 [2024-12-13 05:52:12.538494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.860 [2024-12-13 05:52:12.538528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.860 qpair failed and we were unable to recover it. 00:36:12.860 [2024-12-13 05:52:12.538729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.860 [2024-12-13 05:52:12.538760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.860 qpair failed and we were unable to recover it. 00:36:12.860 [2024-12-13 05:52:12.539014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.860 [2024-12-13 05:52:12.539045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.860 qpair failed and we were unable to recover it. 00:36:12.860 [2024-12-13 05:52:12.539334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.860 [2024-12-13 05:52:12.539365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.860 qpair failed and we were unable to recover it. 00:36:12.860 [2024-12-13 05:52:12.539640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.860 [2024-12-13 05:52:12.539674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.860 qpair failed and we were unable to recover it. 00:36:12.860 [2024-12-13 05:52:12.539942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.860 [2024-12-13 05:52:12.539972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.860 qpair failed and we were unable to recover it. 
00:36:12.860 [2024-12-13 05:52:12.540213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.860 [2024-12-13 05:52:12.540245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.860 qpair failed and we were unable to recover it. 00:36:12.860 [2024-12-13 05:52:12.540515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.860 [2024-12-13 05:52:12.540549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.860 qpair failed and we were unable to recover it. 00:36:12.860 [2024-12-13 05:52:12.540739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.860 [2024-12-13 05:52:12.540770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.860 qpair failed and we were unable to recover it. 00:36:12.860 [2024-12-13 05:52:12.541030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.860 [2024-12-13 05:52:12.541062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.860 qpair failed and we were unable to recover it. 00:36:12.860 [2024-12-13 05:52:12.541349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.860 [2024-12-13 05:52:12.541380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.860 qpair failed and we were unable to recover it. 00:36:12.860 [2024-12-13 05:52:12.541659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.860 [2024-12-13 05:52:12.541692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.860 qpair failed and we were unable to recover it. 00:36:12.860 [2024-12-13 05:52:12.541891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.860 [2024-12-13 05:52:12.541922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.860 qpair failed and we were unable to recover it. 00:36:12.860 [2024-12-13 05:52:12.542123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.860 [2024-12-13 05:52:12.542155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.860 qpair failed and we were unable to recover it. 00:36:12.860 [2024-12-13 05:52:12.542343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.860 [2024-12-13 05:52:12.542374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.860 qpair failed and we were unable to recover it. 00:36:12.860 [2024-12-13 05:52:12.542641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.860 [2024-12-13 05:52:12.542675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.860 qpair failed and we were unable to recover it. 
00:36:12.860 [2024-12-13 05:52:12.542918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.860 [2024-12-13 05:52:12.542950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.860 qpair failed and we were unable to recover it. 00:36:12.860 [2024-12-13 05:52:12.543211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.860 [2024-12-13 05:52:12.543243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.860 qpair failed and we were unable to recover it. 00:36:12.860 [2024-12-13 05:52:12.543436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.860 [2024-12-13 05:52:12.543479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.860 qpair failed and we were unable to recover it. 00:36:12.860 [2024-12-13 05:52:12.543737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.860 [2024-12-13 05:52:12.543769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.860 qpair failed and we were unable to recover it. 00:36:12.860 [2024-12-13 05:52:12.544060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.860 [2024-12-13 05:52:12.544090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.860 qpair failed and we were unable to recover it. 00:36:12.860 [2024-12-13 05:52:12.544362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.860 [2024-12-13 05:52:12.544412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.860 qpair failed and we were unable to recover it. 00:36:12.860 [2024-12-13 05:52:12.544712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.860 [2024-12-13 05:52:12.544745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.860 qpair failed and we were unable to recover it. 00:36:12.860 [2024-12-13 05:52:12.544990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.860 [2024-12-13 05:52:12.545021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.860 qpair failed and we were unable to recover it. 00:36:12.861 [2024-12-13 05:52:12.545288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.861 [2024-12-13 05:52:12.545319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.861 qpair failed and we were unable to recover it. 00:36:12.861 [2024-12-13 05:52:12.545597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.861 [2024-12-13 05:52:12.545631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.861 qpair failed and we were unable to recover it. 
00:36:12.861 [2024-12-13 05:52:12.545887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.861 [2024-12-13 05:52:12.545918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.861 qpair failed and we were unable to recover it. 00:36:12.861 [2024-12-13 05:52:12.546183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.861 [2024-12-13 05:52:12.546215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.861 qpair failed and we were unable to recover it. 00:36:12.861 [2024-12-13 05:52:12.546473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.861 [2024-12-13 05:52:12.546505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.861 qpair failed and we were unable to recover it. 00:36:12.861 [2024-12-13 05:52:12.546776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.861 [2024-12-13 05:52:12.546808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.861 qpair failed and we were unable to recover it. 00:36:12.861 [2024-12-13 05:52:12.547102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.861 [2024-12-13 05:52:12.547139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.861 qpair failed and we were unable to recover it. 00:36:12.861 [2024-12-13 05:52:12.547399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.861 [2024-12-13 05:52:12.547431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.861 qpair failed and we were unable to recover it. 00:36:12.861 [2024-12-13 05:52:12.547567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.861 [2024-12-13 05:52:12.547600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.861 qpair failed and we were unable to recover it. 00:36:12.861 [2024-12-13 05:52:12.547843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.861 [2024-12-13 05:52:12.547876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.861 qpair failed and we were unable to recover it. 00:36:12.861 [2024-12-13 05:52:12.548084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.861 [2024-12-13 05:52:12.548115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.861 qpair failed and we were unable to recover it. 00:36:12.861 [2024-12-13 05:52:12.548354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.861 [2024-12-13 05:52:12.548386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.861 qpair failed and we were unable to recover it. 
00:36:12.861 [2024-12-13 05:52:12.548654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.861 [2024-12-13 05:52:12.548687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.861 qpair failed and we were unable to recover it. 00:36:12.861 [2024-12-13 05:52:12.548883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.861 [2024-12-13 05:52:12.548916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.861 qpair failed and we were unable to recover it. 00:36:12.861 [2024-12-13 05:52:12.549109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.861 [2024-12-13 05:52:12.549140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.861 qpair failed and we were unable to recover it. 00:36:12.861 [2024-12-13 05:52:12.549384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.861 [2024-12-13 05:52:12.549415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.861 qpair failed and we were unable to recover it. 00:36:12.861 [2024-12-13 05:52:12.549670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.861 [2024-12-13 05:52:12.549703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.861 qpair failed and we were unable to recover it. 00:36:12.861 [2024-12-13 05:52:12.549898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.861 [2024-12-13 05:52:12.549930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.861 qpair failed and we were unable to recover it. 00:36:12.861 [2024-12-13 05:52:12.550194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.861 [2024-12-13 05:52:12.550225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.861 qpair failed and we were unable to recover it. 00:36:12.861 [2024-12-13 05:52:12.550486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.861 [2024-12-13 05:52:12.550519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.861 qpair failed and we were unable to recover it. 00:36:12.861 [2024-12-13 05:52:12.550711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.861 [2024-12-13 05:52:12.550743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.861 qpair failed and we were unable to recover it. 00:36:12.861 [2024-12-13 05:52:12.550983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.861 [2024-12-13 05:52:12.551015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.861 qpair failed and we were unable to recover it. 
00:36:12.861 [2024-12-13 05:52:12.551255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.861 [2024-12-13 05:52:12.551286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420
00:36:12.861 qpair failed and we were unable to recover it.
[... the same three-line error (posix.c:1054:posix_sock_create: connect() failed, errno = 111 / nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats roughly 200 more times over 2024-12-13 05:52:12.551-05:52:12.608, with only the timestamps advancing ...]
00:36:12.867 [2024-12-13 05:52:12.608746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.867 [2024-12-13 05:52:12.608778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.867 qpair failed and we were unable to recover it. 00:36:12.867 [2024-12-13 05:52:12.608977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.867 [2024-12-13 05:52:12.609015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.867 qpair failed and we were unable to recover it. 00:36:12.867 [2024-12-13 05:52:12.609331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.867 [2024-12-13 05:52:12.609363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.867 qpair failed and we were unable to recover it. 00:36:12.867 [2024-12-13 05:52:12.609634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.867 [2024-12-13 05:52:12.609668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.867 qpair failed and we were unable to recover it. 00:36:12.867 [2024-12-13 05:52:12.609866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.867 [2024-12-13 05:52:12.609899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.867 qpair failed and we were unable to recover it. 00:36:12.867 [2024-12-13 05:52:12.610092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.867 [2024-12-13 05:52:12.610124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.867 qpair failed and we were unable to recover it. 00:36:12.867 [2024-12-13 05:52:12.610374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.867 [2024-12-13 05:52:12.610407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.867 qpair failed and we were unable to recover it. 00:36:12.867 [2024-12-13 05:52:12.610605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.867 [2024-12-13 05:52:12.610638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.867 qpair failed and we were unable to recover it. 00:36:12.867 [2024-12-13 05:52:12.610916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.867 [2024-12-13 05:52:12.610948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.867 qpair failed and we were unable to recover it. 00:36:12.867 [2024-12-13 05:52:12.611195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.867 [2024-12-13 05:52:12.611227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.867 qpair failed and we were unable to recover it. 
00:36:12.867 [2024-12-13 05:52:12.611431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.867 [2024-12-13 05:52:12.611472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.867 qpair failed and we were unable to recover it. 00:36:12.867 [2024-12-13 05:52:12.611745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.867 [2024-12-13 05:52:12.611778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.867 qpair failed and we were unable to recover it. 00:36:12.867 [2024-12-13 05:52:12.611993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.867 [2024-12-13 05:52:12.612025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.867 qpair failed and we were unable to recover it. 00:36:12.867 [2024-12-13 05:52:12.612294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.867 [2024-12-13 05:52:12.612327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.867 qpair failed and we were unable to recover it. 00:36:12.867 [2024-12-13 05:52:12.612611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.867 [2024-12-13 05:52:12.612645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.867 qpair failed and we were unable to recover it. 00:36:12.867 [2024-12-13 05:52:12.612923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.867 [2024-12-13 05:52:12.612956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.867 qpair failed and we were unable to recover it. 00:36:12.867 [2024-12-13 05:52:12.613255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.867 [2024-12-13 05:52:12.613288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.867 qpair failed and we were unable to recover it. 00:36:12.867 [2024-12-13 05:52:12.613555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.867 [2024-12-13 05:52:12.613589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.867 qpair failed and we were unable to recover it. 00:36:12.867 [2024-12-13 05:52:12.613886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.867 [2024-12-13 05:52:12.613918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.867 qpair failed and we were unable to recover it. 00:36:12.867 [2024-12-13 05:52:12.614182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.867 [2024-12-13 05:52:12.614214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.867 qpair failed and we were unable to recover it. 
00:36:12.867 [2024-12-13 05:52:12.614414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.867 [2024-12-13 05:52:12.614445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.867 qpair failed and we were unable to recover it. 00:36:12.867 [2024-12-13 05:52:12.614713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.867 [2024-12-13 05:52:12.614745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.867 qpair failed and we were unable to recover it. 00:36:12.867 [2024-12-13 05:52:12.615042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.867 [2024-12-13 05:52:12.615073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.867 qpair failed and we were unable to recover it. 00:36:12.867 [2024-12-13 05:52:12.615351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.867 [2024-12-13 05:52:12.615383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.868 qpair failed and we were unable to recover it. 00:36:12.868 [2024-12-13 05:52:12.615601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.868 [2024-12-13 05:52:12.615635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.868 qpair failed and we were unable to recover it. 00:36:12.868 [2024-12-13 05:52:12.615757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.868 [2024-12-13 05:52:12.615789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.868 qpair failed and we were unable to recover it. 00:36:12.868 [2024-12-13 05:52:12.616038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.868 [2024-12-13 05:52:12.616070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.868 qpair failed and we were unable to recover it. 00:36:12.868 [2024-12-13 05:52:12.616320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.868 [2024-12-13 05:52:12.616352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.868 qpair failed and we were unable to recover it. 00:36:12.868 [2024-12-13 05:52:12.616658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.868 [2024-12-13 05:52:12.616693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.868 qpair failed and we were unable to recover it. 00:36:12.868 [2024-12-13 05:52:12.616957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.868 [2024-12-13 05:52:12.616989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.868 qpair failed and we were unable to recover it. 
00:36:12.868 [2024-12-13 05:52:12.617286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.868 [2024-12-13 05:52:12.617318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.868 qpair failed and we were unable to recover it. 00:36:12.868 [2024-12-13 05:52:12.617591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.868 [2024-12-13 05:52:12.617625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.868 qpair failed and we were unable to recover it. 00:36:12.868 [2024-12-13 05:52:12.617905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.868 [2024-12-13 05:52:12.617937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.868 qpair failed and we were unable to recover it. 00:36:12.868 [2024-12-13 05:52:12.618215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.868 [2024-12-13 05:52:12.618248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.868 qpair failed and we were unable to recover it. 00:36:12.868 [2024-12-13 05:52:12.618538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.868 [2024-12-13 05:52:12.618572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.868 qpair failed and we were unable to recover it. 00:36:12.868 [2024-12-13 05:52:12.618781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.868 [2024-12-13 05:52:12.618813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.868 qpair failed and we were unable to recover it. 00:36:12.868 [2024-12-13 05:52:12.619031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.868 [2024-12-13 05:52:12.619063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.868 qpair failed and we were unable to recover it. 00:36:12.868 [2024-12-13 05:52:12.619198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.868 [2024-12-13 05:52:12.619229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.868 qpair failed and we were unable to recover it. 00:36:12.868 [2024-12-13 05:52:12.619419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.868 [2024-12-13 05:52:12.619459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.868 qpair failed and we were unable to recover it. 00:36:12.868 [2024-12-13 05:52:12.619656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.868 [2024-12-13 05:52:12.619688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.868 qpair failed and we were unable to recover it. 
00:36:12.868 [2024-12-13 05:52:12.619961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.868 [2024-12-13 05:52:12.619994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.868 qpair failed and we were unable to recover it. 00:36:12.868 [2024-12-13 05:52:12.620193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.868 [2024-12-13 05:52:12.620235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.868 qpair failed and we were unable to recover it. 00:36:12.868 [2024-12-13 05:52:12.620428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.868 [2024-12-13 05:52:12.620473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.868 qpair failed and we were unable to recover it. 00:36:12.868 [2024-12-13 05:52:12.620774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.868 [2024-12-13 05:52:12.620806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.868 qpair failed and we were unable to recover it. 00:36:12.868 [2024-12-13 05:52:12.620998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.868 [2024-12-13 05:52:12.621030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.868 qpair failed and we were unable to recover it. 00:36:12.868 [2024-12-13 05:52:12.621280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.868 [2024-12-13 05:52:12.621311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.868 qpair failed and we were unable to recover it. 00:36:12.868 [2024-12-13 05:52:12.621560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.868 [2024-12-13 05:52:12.621594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.868 qpair failed and we were unable to recover it. 00:36:12.868 [2024-12-13 05:52:12.621817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.868 [2024-12-13 05:52:12.621848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.868 qpair failed and we were unable to recover it. 00:36:12.868 [2024-12-13 05:52:12.622071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.868 [2024-12-13 05:52:12.622102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.868 qpair failed and we were unable to recover it. 00:36:12.868 [2024-12-13 05:52:12.622354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.868 [2024-12-13 05:52:12.622386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.868 qpair failed and we were unable to recover it. 
00:36:12.868 [2024-12-13 05:52:12.622653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.868 [2024-12-13 05:52:12.622685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.868 qpair failed and we were unable to recover it. 00:36:12.868 [2024-12-13 05:52:12.622895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.868 [2024-12-13 05:52:12.622926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.868 qpair failed and we were unable to recover it. 00:36:12.868 [2024-12-13 05:52:12.623114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.868 [2024-12-13 05:52:12.623145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.868 qpair failed and we were unable to recover it. 00:36:12.868 [2024-12-13 05:52:12.623395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.868 [2024-12-13 05:52:12.623427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.868 qpair failed and we were unable to recover it. 00:36:12.868 [2024-12-13 05:52:12.623730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.868 [2024-12-13 05:52:12.623763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.868 qpair failed and we were unable to recover it. 00:36:12.868 [2024-12-13 05:52:12.624058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.868 [2024-12-13 05:52:12.624091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.868 qpair failed and we were unable to recover it. 00:36:12.868 [2024-12-13 05:52:12.624364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.868 [2024-12-13 05:52:12.624396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.868 qpair failed and we were unable to recover it. 00:36:12.868 [2024-12-13 05:52:12.624601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.868 [2024-12-13 05:52:12.624634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.868 qpair failed and we were unable to recover it. 00:36:12.868 [2024-12-13 05:52:12.624886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.868 [2024-12-13 05:52:12.624918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.868 qpair failed and we were unable to recover it. 00:36:12.868 [2024-12-13 05:52:12.625218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.868 [2024-12-13 05:52:12.625251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.869 qpair failed and we were unable to recover it. 
00:36:12.869 [2024-12-13 05:52:12.625407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.869 [2024-12-13 05:52:12.625439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.869 qpair failed and we were unable to recover it. 00:36:12.869 [2024-12-13 05:52:12.625712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.869 [2024-12-13 05:52:12.625744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.869 qpair failed and we were unable to recover it. 00:36:12.869 [2024-12-13 05:52:12.625922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.869 [2024-12-13 05:52:12.625956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.869 qpair failed and we were unable to recover it. 00:36:12.869 [2024-12-13 05:52:12.626228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.869 [2024-12-13 05:52:12.626260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.869 qpair failed and we were unable to recover it. 00:36:12.869 [2024-12-13 05:52:12.626540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.869 [2024-12-13 05:52:12.626574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.869 qpair failed and we were unable to recover it. 00:36:12.869 [2024-12-13 05:52:12.626857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.869 [2024-12-13 05:52:12.626889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.869 qpair failed and we were unable to recover it. 00:36:12.869 [2024-12-13 05:52:12.627029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.869 [2024-12-13 05:52:12.627061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.869 qpair failed and we were unable to recover it. 00:36:12.869 [2024-12-13 05:52:12.627333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.869 [2024-12-13 05:52:12.627366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.869 qpair failed and we were unable to recover it. 00:36:12.869 [2024-12-13 05:52:12.627671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.869 [2024-12-13 05:52:12.627705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.869 qpair failed and we were unable to recover it. 00:36:12.869 [2024-12-13 05:52:12.627963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.869 [2024-12-13 05:52:12.627995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.869 qpair failed and we were unable to recover it. 
00:36:12.869 [2024-12-13 05:52:12.628298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.869 [2024-12-13 05:52:12.628330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.869 qpair failed and we were unable to recover it. 00:36:12.869 [2024-12-13 05:52:12.628599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.869 [2024-12-13 05:52:12.628633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.869 qpair failed and we were unable to recover it. 00:36:12.869 [2024-12-13 05:52:12.628914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.869 [2024-12-13 05:52:12.628945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.869 qpair failed and we were unable to recover it. 00:36:12.869 [2024-12-13 05:52:12.629149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.869 [2024-12-13 05:52:12.629181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.869 qpair failed and we were unable to recover it. 00:36:12.869 [2024-12-13 05:52:12.629380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.869 [2024-12-13 05:52:12.629412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.869 qpair failed and we were unable to recover it. 00:36:12.869 [2024-12-13 05:52:12.629688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.869 [2024-12-13 05:52:12.629722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.869 qpair failed and we were unable to recover it. 00:36:12.869 [2024-12-13 05:52:12.630003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.869 [2024-12-13 05:52:12.630035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.869 qpair failed and we were unable to recover it. 00:36:12.869 [2024-12-13 05:52:12.630325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.869 [2024-12-13 05:52:12.630358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.869 qpair failed and we were unable to recover it. 00:36:12.869 [2024-12-13 05:52:12.630567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.869 [2024-12-13 05:52:12.630601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.869 qpair failed and we were unable to recover it. 00:36:12.869 [2024-12-13 05:52:12.630806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.869 [2024-12-13 05:52:12.630839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.869 qpair failed and we were unable to recover it. 
00:36:12.869 [2024-12-13 05:52:12.631111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.869 [2024-12-13 05:52:12.631142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.869 qpair failed and we were unable to recover it. 00:36:12.869 [2024-12-13 05:52:12.631345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.869 [2024-12-13 05:52:12.631382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.869 qpair failed and we were unable to recover it. 00:36:12.869 [2024-12-13 05:52:12.631685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.869 [2024-12-13 05:52:12.631719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.869 qpair failed and we were unable to recover it. 00:36:12.869 [2024-12-13 05:52:12.631910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.869 [2024-12-13 05:52:12.631941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.869 qpair failed and we were unable to recover it. 00:36:12.869 [2024-12-13 05:52:12.632191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.869 [2024-12-13 05:52:12.632223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.869 qpair failed and we were unable to recover it. 00:36:12.869 [2024-12-13 05:52:12.632501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.869 [2024-12-13 05:52:12.632534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.869 qpair failed and we were unable to recover it. 00:36:12.869 [2024-12-13 05:52:12.632815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.869 [2024-12-13 05:52:12.632847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.869 qpair failed and we were unable to recover it. 00:36:12.869 [2024-12-13 05:52:12.633157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.869 [2024-12-13 05:52:12.633189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.869 qpair failed and we were unable to recover it. 00:36:12.869 [2024-12-13 05:52:12.633473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.869 [2024-12-13 05:52:12.633507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.869 qpair failed and we were unable to recover it. 00:36:12.869 [2024-12-13 05:52:12.633632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.869 [2024-12-13 05:52:12.633663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.869 qpair failed and we were unable to recover it. 
00:36:12.869 [2024-12-13 05:52:12.633938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.869 [2024-12-13 05:52:12.633971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.869 qpair failed and we were unable to recover it. 00:36:12.869 [2024-12-13 05:52:12.634245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.869 [2024-12-13 05:52:12.634277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.869 qpair failed and we were unable to recover it. 00:36:12.869 [2024-12-13 05:52:12.634570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.869 [2024-12-13 05:52:12.634604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.869 qpair failed and we were unable to recover it. 00:36:12.869 [2024-12-13 05:52:12.634878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.869 [2024-12-13 05:52:12.634910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.869 qpair failed and we were unable to recover it. 00:36:12.869 [2024-12-13 05:52:12.635201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.869 [2024-12-13 05:52:12.635234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.869 qpair failed and we were unable to recover it. 00:36:12.869 [2024-12-13 05:52:12.635469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.869 [2024-12-13 05:52:12.635502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.869 qpair failed and we were unable to recover it. 00:36:12.869 [2024-12-13 05:52:12.635687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.869 [2024-12-13 05:52:12.635719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.869 qpair failed and we were unable to recover it. 00:36:12.869 [2024-12-13 05:52:12.635993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.869 [2024-12-13 05:52:12.636025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.869 qpair failed and we were unable to recover it. 00:36:12.869 [2024-12-13 05:52:12.636277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.870 [2024-12-13 05:52:12.636309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.870 qpair failed and we were unable to recover it. 00:36:12.870 [2024-12-13 05:52:12.636577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.870 [2024-12-13 05:52:12.636611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.870 qpair failed and we were unable to recover it. 
00:36:12.870 [2024-12-13 05:52:12.636814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.870 [2024-12-13 05:52:12.636846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.870 qpair failed and we were unable to recover it. 00:36:12.870 [2024-12-13 05:52:12.637101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.870 [2024-12-13 05:52:12.637133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.870 qpair failed and we were unable to recover it. 00:36:12.870 [2024-12-13 05:52:12.637401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.870 [2024-12-13 05:52:12.637433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.870 qpair failed and we were unable to recover it. 00:36:12.870 [2024-12-13 05:52:12.637721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.870 [2024-12-13 05:52:12.637754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.870 qpair failed and we were unable to recover it. 00:36:12.870 [2024-12-13 05:52:12.637887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.870 [2024-12-13 05:52:12.637918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.870 qpair failed and we were unable to recover it. 00:36:12.870 [2024-12-13 05:52:12.638127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.870 [2024-12-13 05:52:12.638159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.870 qpair failed and we were unable to recover it. 00:36:12.870 [2024-12-13 05:52:12.638348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.870 [2024-12-13 05:52:12.638380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.870 qpair failed and we were unable to recover it. 00:36:12.870 [2024-12-13 05:52:12.638657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.870 [2024-12-13 05:52:12.638691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.870 qpair failed and we were unable to recover it. 00:36:12.870 [2024-12-13 05:52:12.638981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.870 [2024-12-13 05:52:12.639014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.870 qpair failed and we were unable to recover it. 00:36:12.870 [2024-12-13 05:52:12.639287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.870 [2024-12-13 05:52:12.639319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.870 qpair failed and we were unable to recover it. 
00:36:12.870 [2024-12-13 05:52:12.639607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.870 [2024-12-13 05:52:12.639640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.870 qpair failed and we were unable to recover it. 00:36:12.870 [2024-12-13 05:52:12.639922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.870 [2024-12-13 05:52:12.639955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.870 qpair failed and we were unable to recover it. 00:36:12.870 [2024-12-13 05:52:12.640156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.870 [2024-12-13 05:52:12.640188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.870 qpair failed and we were unable to recover it. 00:36:12.870 [2024-12-13 05:52:12.640442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.870 [2024-12-13 05:52:12.640483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.870 qpair failed and we were unable to recover it. 00:36:12.870 [2024-12-13 05:52:12.640667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.870 [2024-12-13 05:52:12.640698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.870 qpair failed and we were unable to recover it. 00:36:12.870 [2024-12-13 05:52:12.640971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.870 [2024-12-13 05:52:12.641003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.870 qpair failed and we were unable to recover it. 00:36:12.870 [2024-12-13 05:52:12.641279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.870 [2024-12-13 05:52:12.641312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.870 qpair failed and we were unable to recover it. 00:36:12.870 [2024-12-13 05:52:12.641549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.870 [2024-12-13 05:52:12.641582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.870 qpair failed and we were unable to recover it. 00:36:12.870 [2024-12-13 05:52:12.641775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.870 [2024-12-13 05:52:12.641808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.870 qpair failed and we were unable to recover it. 00:36:12.870 [2024-12-13 05:52:12.642104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.870 [2024-12-13 05:52:12.642136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.870 qpair failed and we were unable to recover it. 
00:36:12.870 [2024-12-13 05:52:12.642344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.870 [2024-12-13 05:52:12.642375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.870 qpair failed and we were unable to recover it. 00:36:12.870 [2024-12-13 05:52:12.642593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.870 [2024-12-13 05:52:12.642633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.870 qpair failed and we were unable to recover it. 00:36:12.870 [2024-12-13 05:52:12.642744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.870 [2024-12-13 05:52:12.642776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.870 qpair failed and we were unable to recover it. 00:36:12.870 [2024-12-13 05:52:12.643049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.870 [2024-12-13 05:52:12.643082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.870 qpair failed and we were unable to recover it. 00:36:12.870 [2024-12-13 05:52:12.643282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.870 [2024-12-13 05:52:12.643314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.870 qpair failed and we were unable to recover it. 00:36:12.870 [2024-12-13 05:52:12.643442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.870 [2024-12-13 05:52:12.643493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.870 qpair failed and we were unable to recover it. 00:36:12.870 [2024-12-13 05:52:12.643772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.870 [2024-12-13 05:52:12.643804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.870 qpair failed and we were unable to recover it. 00:36:12.870 [2024-12-13 05:52:12.644069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.870 [2024-12-13 05:52:12.644100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.870 qpair failed and we were unable to recover it. 00:36:12.870 [2024-12-13 05:52:12.644301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.870 [2024-12-13 05:52:12.644333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.870 qpair failed and we were unable to recover it. 00:36:12.870 [2024-12-13 05:52:12.644613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.870 [2024-12-13 05:52:12.644647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420 00:36:12.870 qpair failed and we were unable to recover it. 
00:36:12.870 [2024-12-13 05:52:12.644846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.870 [2024-12-13 05:52:12.644878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8624000b90 with addr=10.0.0.2, port=4420
00:36:12.870 qpair failed and we were unable to recover it.
00:36:12.870 [... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." triplet repeats back-to-back for tqpair=0x7f8624000b90 from 05:52:12.645118 through 05:52:12.682689 ...]
00:36:12.874 [2024-12-13 05:52:12.683067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.874 [2024-12-13 05:52:12.683147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420
00:36:12.874 qpair failed and we were unable to recover it.
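For context on the errno value flooding this log: on Linux, errno 111 is ECONNREFUSED, i.e. the TCP SYN to 10.0.0.2:4420 (4420 is the IANA-assigned NVMe/TCP port) was refused because no listener was accepting on the target. The sketch below is illustrative only, not SPDK's posix_sock_create(); it reproduces the same errno with plain POSIX sockets against the address and port taken from the log:

```c
/* Minimal sketch (not SPDK code): connect() to a port with no listener
 * fails with errno = 111 (ECONNREFUSED), the error seen throughout this log. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                    /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr); /* target address from the log */

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With nothing listening on 10.0.0.2:4420 this prints errno = 111. */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}
```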
00:36:12.874 [... the triplet repeats for tqpair=0x15c76a0 from 05:52:12.683687 through 05:52:12.702575 ...]
00:36:12.876 [2024-12-13 05:52:12.702540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.876 [2024-12-13 05:52:12.702575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420
00:36:12.876 qpair failed and we were unable to recover it.
00:36:12.876 [2024-12-13 05:52:12.702788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.876 [2024-12-13 05:52:12.702821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.876 qpair failed and we were unable to recover it. 00:36:12.876 [2024-12-13 05:52:12.703036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.876 [2024-12-13 05:52:12.703068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.876 qpair failed and we were unable to recover it. 00:36:12.876 [2024-12-13 05:52:12.703275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.876 [2024-12-13 05:52:12.703309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.876 qpair failed and we were unable to recover it. 00:36:12.876 [2024-12-13 05:52:12.703593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.876 [2024-12-13 05:52:12.703627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.876 qpair failed and we were unable to recover it. 00:36:12.876 [2024-12-13 05:52:12.703833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.876 [2024-12-13 05:52:12.703866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.876 qpair failed and we were unable to recover it. 00:36:12.876 [2024-12-13 05:52:12.704007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.876 [2024-12-13 05:52:12.704040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.876 qpair failed and we were unable to recover it. 00:36:12.876 [2024-12-13 05:52:12.704255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.876 [2024-12-13 05:52:12.704289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.876 qpair failed and we were unable to recover it. 00:36:12.876 [2024-12-13 05:52:12.704596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.876 [2024-12-13 05:52:12.704630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.876 qpair failed and we were unable to recover it. 00:36:12.876 [2024-12-13 05:52:12.704815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.876 [2024-12-13 05:52:12.704848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.876 qpair failed and we were unable to recover it. 00:36:12.876 [2024-12-13 05:52:12.705148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.876 [2024-12-13 05:52:12.705187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.876 qpair failed and we were unable to recover it. 
00:36:12.876 [2024-12-13 05:52:12.705412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.876 [2024-12-13 05:52:12.705444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.876 qpair failed and we were unable to recover it. 00:36:12.876 [2024-12-13 05:52:12.705726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.876 [2024-12-13 05:52:12.705759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.876 qpair failed and we were unable to recover it. 00:36:12.876 [2024-12-13 05:52:12.705987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.876 [2024-12-13 05:52:12.706020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.876 qpair failed and we were unable to recover it. 00:36:12.876 [2024-12-13 05:52:12.706216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.876 [2024-12-13 05:52:12.706248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.876 qpair failed and we were unable to recover it. 00:36:12.876 [2024-12-13 05:52:12.706503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.876 [2024-12-13 05:52:12.706537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.876 qpair failed and we were unable to recover it. 00:36:12.876 [2024-12-13 05:52:12.706843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.876 [2024-12-13 05:52:12.706876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.876 qpair failed and we were unable to recover it. 00:36:12.876 [2024-12-13 05:52:12.707159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.876 [2024-12-13 05:52:12.707192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.876 qpair failed and we were unable to recover it. 00:36:12.876 [2024-12-13 05:52:12.707379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.877 [2024-12-13 05:52:12.707411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.877 qpair failed and we were unable to recover it. 00:36:12.877 [2024-12-13 05:52:12.707680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.877 [2024-12-13 05:52:12.707714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.877 qpair failed and we were unable to recover it. 00:36:12.877 [2024-12-13 05:52:12.707995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.877 [2024-12-13 05:52:12.708027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.877 qpair failed and we were unable to recover it. 
00:36:12.877 [2024-12-13 05:52:12.708309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.877 [2024-12-13 05:52:12.708342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.877 qpair failed and we were unable to recover it. 00:36:12.877 [2024-12-13 05:52:12.708567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.877 [2024-12-13 05:52:12.708601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.877 qpair failed and we were unable to recover it. 00:36:12.877 [2024-12-13 05:52:12.708787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.877 [2024-12-13 05:52:12.708820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.877 qpair failed and we were unable to recover it. 00:36:12.877 [2024-12-13 05:52:12.709080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.877 [2024-12-13 05:52:12.709113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.877 qpair failed and we were unable to recover it. 00:36:12.877 [2024-12-13 05:52:12.709366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.877 [2024-12-13 05:52:12.709398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.877 qpair failed and we were unable to recover it. 00:36:12.877 [2024-12-13 05:52:12.709700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.877 [2024-12-13 05:52:12.709733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.877 qpair failed and we were unable to recover it. 00:36:12.877 [2024-12-13 05:52:12.709913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.877 [2024-12-13 05:52:12.709946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.877 qpair failed and we were unable to recover it. 00:36:12.877 [2024-12-13 05:52:12.710219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.877 [2024-12-13 05:52:12.710252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.877 qpair failed and we were unable to recover it. 00:36:12.877 [2024-12-13 05:52:12.710532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.877 [2024-12-13 05:52:12.710566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.877 qpair failed and we were unable to recover it. 00:36:12.877 [2024-12-13 05:52:12.710849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.877 [2024-12-13 05:52:12.710882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.877 qpair failed and we were unable to recover it. 
00:36:12.877 [2024-12-13 05:52:12.711164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.877 [2024-12-13 05:52:12.711196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.877 qpair failed and we were unable to recover it. 00:36:12.877 [2024-12-13 05:52:12.711481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.877 [2024-12-13 05:52:12.711515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.877 qpair failed and we were unable to recover it. 00:36:12.877 [2024-12-13 05:52:12.711746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.877 [2024-12-13 05:52:12.711779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.877 qpair failed and we were unable to recover it. 00:36:12.877 [2024-12-13 05:52:12.712028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.877 [2024-12-13 05:52:12.712061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.877 qpair failed and we were unable to recover it. 00:36:12.877 [2024-12-13 05:52:12.712320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.877 [2024-12-13 05:52:12.712353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.877 qpair failed and we were unable to recover it. 00:36:12.877 [2024-12-13 05:52:12.712533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.877 [2024-12-13 05:52:12.712567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.877 qpair failed and we were unable to recover it. 00:36:12.877 [2024-12-13 05:52:12.712844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.877 [2024-12-13 05:52:12.712877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.877 qpair failed and we were unable to recover it. 00:36:12.877 [2024-12-13 05:52:12.713163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.877 [2024-12-13 05:52:12.713197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.877 qpair failed and we were unable to recover it. 00:36:12.877 [2024-12-13 05:52:12.713494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.877 [2024-12-13 05:52:12.713531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.877 qpair failed and we were unable to recover it. 00:36:12.877 [2024-12-13 05:52:12.713794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.877 [2024-12-13 05:52:12.713832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.877 qpair failed and we were unable to recover it. 
00:36:12.877 [2024-12-13 05:52:12.714105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.877 [2024-12-13 05:52:12.714141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.877 qpair failed and we were unable to recover it. 00:36:12.877 [2024-12-13 05:52:12.714426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.877 [2024-12-13 05:52:12.714480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.877 qpair failed and we were unable to recover it. 00:36:12.877 [2024-12-13 05:52:12.714744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.877 [2024-12-13 05:52:12.714780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.877 qpair failed and we were unable to recover it. 00:36:12.877 [2024-12-13 05:52:12.714964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.877 [2024-12-13 05:52:12.714998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.877 qpair failed and we were unable to recover it. 00:36:12.877 [2024-12-13 05:52:12.715137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.877 [2024-12-13 05:52:12.715172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.877 qpair failed and we were unable to recover it. 00:36:12.877 [2024-12-13 05:52:12.715458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.877 [2024-12-13 05:52:12.715494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.877 qpair failed and we were unable to recover it. 00:36:12.877 [2024-12-13 05:52:12.715793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.877 [2024-12-13 05:52:12.715826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.877 qpair failed and we were unable to recover it. 00:36:12.877 [2024-12-13 05:52:12.716024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.877 [2024-12-13 05:52:12.716057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.877 qpair failed and we were unable to recover it. 00:36:12.877 [2024-12-13 05:52:12.716311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.877 [2024-12-13 05:52:12.716348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.877 qpair failed and we were unable to recover it. 00:36:12.877 [2024-12-13 05:52:12.716654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.877 [2024-12-13 05:52:12.716691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.877 qpair failed and we were unable to recover it. 
00:36:12.877 [2024-12-13 05:52:12.717054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.877 [2024-12-13 05:52:12.717130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.877 qpair failed and we were unable to recover it. 00:36:12.877 [2024-12-13 05:52:12.717429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.878 [2024-12-13 05:52:12.717476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.878 qpair failed and we were unable to recover it. 00:36:12.878 [2024-12-13 05:52:12.717763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.878 [2024-12-13 05:52:12.717796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.878 qpair failed and we were unable to recover it. 00:36:12.878 [2024-12-13 05:52:12.718009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.878 [2024-12-13 05:52:12.718041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.878 qpair failed and we were unable to recover it. 00:36:12.878 [2024-12-13 05:52:12.718294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.878 [2024-12-13 05:52:12.718326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.878 qpair failed and we were unable to recover it. 00:36:12.878 [2024-12-13 05:52:12.718555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.878 [2024-12-13 05:52:12.718588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.878 qpair failed and we were unable to recover it. 00:36:12.878 [2024-12-13 05:52:12.718866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.878 [2024-12-13 05:52:12.718899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.878 qpair failed and we were unable to recover it. 00:36:12.878 [2024-12-13 05:52:12.719142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.878 [2024-12-13 05:52:12.719174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.878 qpair failed and we were unable to recover it. 00:36:12.878 [2024-12-13 05:52:12.719320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.878 [2024-12-13 05:52:12.719351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.878 qpair failed and we were unable to recover it. 00:36:12.878 [2024-12-13 05:52:12.719546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.878 [2024-12-13 05:52:12.719579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.878 qpair failed and we were unable to recover it. 
00:36:12.878 [2024-12-13 05:52:12.719835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.878 [2024-12-13 05:52:12.719869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.878 qpair failed and we were unable to recover it. 00:36:12.878 [2024-12-13 05:52:12.720120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.878 [2024-12-13 05:52:12.720153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.878 qpair failed and we were unable to recover it. 00:36:12.878 [2024-12-13 05:52:12.720350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.878 [2024-12-13 05:52:12.720382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.878 qpair failed and we were unable to recover it. 00:36:12.878 [2024-12-13 05:52:12.720663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.878 [2024-12-13 05:52:12.720707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.878 qpair failed and we were unable to recover it. 00:36:12.878 [2024-12-13 05:52:12.721007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.878 [2024-12-13 05:52:12.721040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.878 qpair failed and we were unable to recover it. 00:36:12.878 [2024-12-13 05:52:12.721275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.878 [2024-12-13 05:52:12.721307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.878 qpair failed and we were unable to recover it. 00:36:12.878 [2024-12-13 05:52:12.721575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.878 [2024-12-13 05:52:12.721609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.878 qpair failed and we were unable to recover it. 00:36:12.878 [2024-12-13 05:52:12.721818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.878 [2024-12-13 05:52:12.721850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.878 qpair failed and we were unable to recover it. 00:36:12.878 [2024-12-13 05:52:12.722101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.878 [2024-12-13 05:52:12.722133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.878 qpair failed and we were unable to recover it. 00:36:12.878 [2024-12-13 05:52:12.722348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.878 [2024-12-13 05:52:12.722381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.878 qpair failed and we were unable to recover it. 
00:36:12.878 [2024-12-13 05:52:12.722579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.878 [2024-12-13 05:52:12.722613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.878 qpair failed and we were unable to recover it. 00:36:12.878 [2024-12-13 05:52:12.722815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.878 [2024-12-13 05:52:12.722847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.878 qpair failed and we were unable to recover it. 00:36:12.878 [2024-12-13 05:52:12.723119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.878 [2024-12-13 05:52:12.723151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.878 qpair failed and we were unable to recover it. 00:36:12.878 [2024-12-13 05:52:12.723430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.878 [2024-12-13 05:52:12.723470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.878 qpair failed and we were unable to recover it. 00:36:12.878 [2024-12-13 05:52:12.723748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.878 [2024-12-13 05:52:12.723780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.878 qpair failed and we were unable to recover it. 00:36:12.878 [2024-12-13 05:52:12.724086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.878 [2024-12-13 05:52:12.724119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.878 qpair failed and we were unable to recover it. 00:36:12.878 [2024-12-13 05:52:12.724378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.878 [2024-12-13 05:52:12.724410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:12.878 qpair failed and we were unable to recover it. 00:36:12.878 [2024-12-13 05:52:12.724662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.878 [2024-12-13 05:52:12.724703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.878 qpair failed and we were unable to recover it. 00:36:12.878 [2024-12-13 05:52:12.724846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.878 [2024-12-13 05:52:12.724879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.878 qpair failed and we were unable to recover it. 00:36:12.878 [2024-12-13 05:52:12.725160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.878 [2024-12-13 05:52:12.725193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.878 qpair failed and we were unable to recover it. 
00:36:12.878 [2024-12-13 05:52:12.725394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.878 [2024-12-13 05:52:12.725427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.878 qpair failed and we were unable to recover it. 00:36:12.878 [2024-12-13 05:52:12.725665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.878 [2024-12-13 05:52:12.725699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.878 qpair failed and we were unable to recover it. 00:36:12.878 [2024-12-13 05:52:12.725951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.878 [2024-12-13 05:52:12.725983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.878 qpair failed and we were unable to recover it. 00:36:12.878 [2024-12-13 05:52:12.726287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.878 [2024-12-13 05:52:12.726320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.878 qpair failed and we were unable to recover it. 00:36:12.878 [2024-12-13 05:52:12.726588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.878 [2024-12-13 05:52:12.726624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.878 qpair failed and we were unable to recover it. 00:36:12.878 [2024-12-13 05:52:12.726904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.878 [2024-12-13 05:52:12.726937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.878 qpair failed and we were unable to recover it. 00:36:12.878 [2024-12-13 05:52:12.727215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.878 [2024-12-13 05:52:12.727248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.878 qpair failed and we were unable to recover it. 00:36:12.878 [2024-12-13 05:52:12.727472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.878 [2024-12-13 05:52:12.727506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.878 qpair failed and we were unable to recover it. 00:36:12.878 [2024-12-13 05:52:12.727756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.878 [2024-12-13 05:52:12.727789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.878 qpair failed and we were unable to recover it. 00:36:12.878 [2024-12-13 05:52:12.728052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.879 [2024-12-13 05:52:12.728084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.879 qpair failed and we were unable to recover it. 
00:36:12.879 [2024-12-13 05:52:12.728338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.879 [2024-12-13 05:52:12.728385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.879 qpair failed and we were unable to recover it. 00:36:12.879 [2024-12-13 05:52:12.728678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.879 [2024-12-13 05:52:12.728712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.879 qpair failed and we were unable to recover it. 00:36:12.879 [2024-12-13 05:52:12.728987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.879 [2024-12-13 05:52:12.729020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.879 qpair failed and we were unable to recover it. 00:36:12.879 [2024-12-13 05:52:12.729224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.879 [2024-12-13 05:52:12.729257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.879 qpair failed and we were unable to recover it. 00:36:12.879 [2024-12-13 05:52:12.729463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.879 [2024-12-13 05:52:12.729496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.879 qpair failed and we were unable to recover it. 00:36:12.879 [2024-12-13 05:52:12.729773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.879 [2024-12-13 05:52:12.729806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.879 qpair failed and we were unable to recover it. 00:36:12.879 [2024-12-13 05:52:12.730032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.879 [2024-12-13 05:52:12.730064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.879 qpair failed and we were unable to recover it. 00:36:12.879 [2024-12-13 05:52:12.730273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.879 [2024-12-13 05:52:12.730307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.879 qpair failed and we were unable to recover it. 00:36:12.879 [2024-12-13 05:52:12.730583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.879 [2024-12-13 05:52:12.730617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.879 qpair failed and we were unable to recover it. 00:36:12.879 [2024-12-13 05:52:12.730901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.879 [2024-12-13 05:52:12.730933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.879 qpair failed and we were unable to recover it. 
00:36:12.879 [2024-12-13 05:52:12.731190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.879 [2024-12-13 05:52:12.731222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.879 qpair failed and we were unable to recover it. 00:36:12.879 [2024-12-13 05:52:12.731470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.879 [2024-12-13 05:52:12.731504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.879 qpair failed and we were unable to recover it. 00:36:12.879 [2024-12-13 05:52:12.731758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.879 [2024-12-13 05:52:12.731792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.879 qpair failed and we were unable to recover it. 00:36:12.879 [2024-12-13 05:52:12.732074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.879 [2024-12-13 05:52:12.732106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.879 qpair failed and we were unable to recover it. 00:36:12.879 [2024-12-13 05:52:12.732393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.879 [2024-12-13 05:52:12.732428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.879 qpair failed and we were unable to recover it. 00:36:12.879 [2024-12-13 05:52:12.732706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.879 [2024-12-13 05:52:12.732739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.879 qpair failed and we were unable to recover it. 00:36:12.879 [2024-12-13 05:52:12.733010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.879 [2024-12-13 05:52:12.733042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.879 qpair failed and we were unable to recover it. 00:36:12.879 [2024-12-13 05:52:12.733240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.879 [2024-12-13 05:52:12.733273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.879 qpair failed and we were unable to recover it. 00:36:12.879 [2024-12-13 05:52:12.733535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.879 [2024-12-13 05:52:12.733569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.879 qpair failed and we were unable to recover it. 00:36:12.879 [2024-12-13 05:52:12.733844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.879 [2024-12-13 05:52:12.733877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.879 qpair failed and we were unable to recover it. 
00:36:12.879 [2024-12-13 05:52:12.734157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.879 [2024-12-13 05:52:12.734190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.879 qpair failed and we were unable to recover it. 00:36:12.879 [2024-12-13 05:52:12.734468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.879 [2024-12-13 05:52:12.734502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.879 qpair failed and we were unable to recover it. 00:36:12.879 [2024-12-13 05:52:12.734784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.879 [2024-12-13 05:52:12.734817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.879 qpair failed and we were unable to recover it. 00:36:12.879 [2024-12-13 05:52:12.735099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.879 [2024-12-13 05:52:12.735132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.879 qpair failed and we were unable to recover it. 00:36:12.879 [2024-12-13 05:52:12.735385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.879 [2024-12-13 05:52:12.735417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.879 qpair failed and we were unable to recover it. 00:36:12.879 [2024-12-13 05:52:12.735726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.879 [2024-12-13 05:52:12.735762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.879 qpair failed and we were unable to recover it. 00:36:12.879 [2024-12-13 05:52:12.736043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.879 [2024-12-13 05:52:12.736077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.879 qpair failed and we were unable to recover it. 00:36:12.879 [2024-12-13 05:52:12.736217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.879 [2024-12-13 05:52:12.736251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.879 qpair failed and we were unable to recover it. 00:36:12.879 [2024-12-13 05:52:12.736534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.879 [2024-12-13 05:52:12.736569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.879 qpair failed and we were unable to recover it. 00:36:12.879 [2024-12-13 05:52:12.736768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.879 [2024-12-13 05:52:12.736801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.879 qpair failed and we were unable to recover it. 
00:36:12.879 [2024-12-13 05:52:12.737077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.879 [2024-12-13 05:52:12.737111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.879 qpair failed and we were unable to recover it. 00:36:12.879 [2024-12-13 05:52:12.737365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.879 [2024-12-13 05:52:12.737397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.879 qpair failed and we were unable to recover it. 00:36:12.879 [2024-12-13 05:52:12.737655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.879 [2024-12-13 05:52:12.737690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.879 qpair failed and we were unable to recover it. 00:36:12.879 [2024-12-13 05:52:12.737967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.879 [2024-12-13 05:52:12.738001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.879 qpair failed and we were unable to recover it. 00:36:12.879 [2024-12-13 05:52:12.738277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.879 [2024-12-13 05:52:12.738310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.879 qpair failed and we were unable to recover it. 00:36:12.879 [2024-12-13 05:52:12.738595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.879 [2024-12-13 05:52:12.738630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.879 qpair failed and we were unable to recover it. 00:36:12.879 [2024-12-13 05:52:12.738856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.879 [2024-12-13 05:52:12.738889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.879 qpair failed and we were unable to recover it. 00:36:12.879 [2024-12-13 05:52:12.739173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.879 [2024-12-13 05:52:12.739206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.879 qpair failed and we were unable to recover it. 00:36:12.879 [2024-12-13 05:52:12.739390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.879 [2024-12-13 05:52:12.739424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.879 qpair failed and we were unable to recover it. 00:36:12.880 [2024-12-13 05:52:12.739696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.880 [2024-12-13 05:52:12.739730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.880 qpair failed and we were unable to recover it. 
00:36:12.880 [2024-12-13 05:52:12.740001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.880 [2024-12-13 05:52:12.740034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420
00:36:12.880 qpair failed and we were unable to recover it.
[... the same three-message sequence repeats verbatim from 05:52:12.740 through 05:52:12.797: roughly 200 further connect() attempts for tqpair=0x15c76a0 to 10.0.0.2, port=4420, all within about 57 ms, each failing with errno = 111 and ending in "qpair failed and we were unable to recover it." ...]
00:36:12.885 [2024-12-13 05:52:12.797719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.885 [2024-12-13 05:52:12.797753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.885 qpair failed and we were unable to recover it. 00:36:12.885 [2024-12-13 05:52:12.798057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.885 [2024-12-13 05:52:12.798090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.885 qpair failed and we were unable to recover it. 00:36:12.885 [2024-12-13 05:52:12.798348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.885 [2024-12-13 05:52:12.798382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.885 qpair failed and we were unable to recover it. 00:36:12.885 [2024-12-13 05:52:12.798687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.885 [2024-12-13 05:52:12.798721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.885 qpair failed and we were unable to recover it. 00:36:12.885 [2024-12-13 05:52:12.798919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.885 [2024-12-13 05:52:12.798952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.885 qpair failed and we were unable to recover it. 00:36:12.885 [2024-12-13 05:52:12.799233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.885 [2024-12-13 05:52:12.799267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.885 qpair failed and we were unable to recover it. 00:36:12.885 [2024-12-13 05:52:12.799547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.885 [2024-12-13 05:52:12.799580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.885 qpair failed and we were unable to recover it. 00:36:12.885 [2024-12-13 05:52:12.799861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.885 [2024-12-13 05:52:12.799894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.885 qpair failed and we were unable to recover it. 00:36:12.885 [2024-12-13 05:52:12.800179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.885 [2024-12-13 05:52:12.800213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.885 qpair failed and we were unable to recover it. 00:36:12.885 [2024-12-13 05:52:12.800434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.885 [2024-12-13 05:52:12.800474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.885 qpair failed and we were unable to recover it. 
00:36:12.885 [2024-12-13 05:52:12.800659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.886 [2024-12-13 05:52:12.800692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.886 qpair failed and we were unable to recover it. 00:36:12.886 [2024-12-13 05:52:12.800956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.886 [2024-12-13 05:52:12.800989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.886 qpair failed and we were unable to recover it. 00:36:12.886 [2024-12-13 05:52:12.801271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.886 [2024-12-13 05:52:12.801303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.886 qpair failed and we were unable to recover it. 00:36:12.886 [2024-12-13 05:52:12.801527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.886 [2024-12-13 05:52:12.801561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.886 qpair failed and we were unable to recover it. 00:36:12.886 [2024-12-13 05:52:12.801817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.886 [2024-12-13 05:52:12.801850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.886 qpair failed and we were unable to recover it. 00:36:12.886 [2024-12-13 05:52:12.802034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.886 [2024-12-13 05:52:12.802066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.886 qpair failed and we were unable to recover it. 00:36:12.886 [2024-12-13 05:52:12.802362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.886 [2024-12-13 05:52:12.802395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.886 qpair failed and we were unable to recover it. 00:36:12.886 [2024-12-13 05:52:12.802703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.886 [2024-12-13 05:52:12.802737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.886 qpair failed and we were unable to recover it. 00:36:12.886 [2024-12-13 05:52:12.802953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.886 [2024-12-13 05:52:12.802985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.886 qpair failed and we were unable to recover it. 00:36:12.886 [2024-12-13 05:52:12.803258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.886 [2024-12-13 05:52:12.803290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.886 qpair failed and we were unable to recover it. 
00:36:12.886 [2024-12-13 05:52:12.803520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.886 [2024-12-13 05:52:12.803555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.886 qpair failed and we were unable to recover it. 00:36:12.886 [2024-12-13 05:52:12.803869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.886 [2024-12-13 05:52:12.803902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.886 qpair failed and we were unable to recover it. 00:36:12.886 [2024-12-13 05:52:12.804209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.886 [2024-12-13 05:52:12.804242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.886 qpair failed and we were unable to recover it. 00:36:12.886 [2024-12-13 05:52:12.804423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.886 [2024-12-13 05:52:12.804463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.886 qpair failed and we were unable to recover it. 00:36:12.886 [2024-12-13 05:52:12.804736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.886 [2024-12-13 05:52:12.804769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.886 qpair failed and we were unable to recover it. 00:36:12.886 [2024-12-13 05:52:12.805055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.886 [2024-12-13 05:52:12.805087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.886 qpair failed and we were unable to recover it. 00:36:12.886 [2024-12-13 05:52:12.805278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.886 [2024-12-13 05:52:12.805310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.886 qpair failed and we were unable to recover it. 00:36:12.886 [2024-12-13 05:52:12.805587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.886 [2024-12-13 05:52:12.805621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.886 qpair failed and we were unable to recover it. 00:36:12.886 [2024-12-13 05:52:12.805903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.886 [2024-12-13 05:52:12.805935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.886 qpair failed and we were unable to recover it. 00:36:12.886 [2024-12-13 05:52:12.806075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.886 [2024-12-13 05:52:12.806107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.886 qpair failed and we were unable to recover it. 
00:36:12.886 [2024-12-13 05:52:12.806402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.886 [2024-12-13 05:52:12.806435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.886 qpair failed and we were unable to recover it. 00:36:12.886 [2024-12-13 05:52:12.806659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.886 [2024-12-13 05:52:12.806692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.886 qpair failed and we were unable to recover it. 00:36:12.886 [2024-12-13 05:52:12.806959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.886 [2024-12-13 05:52:12.806992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.886 qpair failed and we were unable to recover it. 00:36:12.886 [2024-12-13 05:52:12.807287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.886 [2024-12-13 05:52:12.807320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.886 qpair failed and we were unable to recover it. 00:36:12.886 [2024-12-13 05:52:12.807521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.886 [2024-12-13 05:52:12.807555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.886 qpair failed and we were unable to recover it. 00:36:12.886 [2024-12-13 05:52:12.807836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.886 [2024-12-13 05:52:12.807874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.886 qpair failed and we were unable to recover it. 00:36:12.886 [2024-12-13 05:52:12.808068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.886 [2024-12-13 05:52:12.808101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.886 qpair failed and we were unable to recover it. 00:36:12.886 [2024-12-13 05:52:12.808401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.886 [2024-12-13 05:52:12.808433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.886 qpair failed and we were unable to recover it. 00:36:12.886 [2024-12-13 05:52:12.808726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.886 [2024-12-13 05:52:12.808760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.886 qpair failed and we were unable to recover it. 00:36:12.886 [2024-12-13 05:52:12.808960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.886 [2024-12-13 05:52:12.808994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.886 qpair failed and we were unable to recover it. 
00:36:12.886 [2024-12-13 05:52:12.809118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.886 [2024-12-13 05:52:12.809150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.886 qpair failed and we were unable to recover it. 00:36:12.886 [2024-12-13 05:52:12.809424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.886 [2024-12-13 05:52:12.809466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.886 qpair failed and we were unable to recover it. 00:36:12.886 [2024-12-13 05:52:12.809685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.886 [2024-12-13 05:52:12.809718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.886 qpair failed and we were unable to recover it. 00:36:12.886 [2024-12-13 05:52:12.809968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.887 [2024-12-13 05:52:12.810001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.887 qpair failed and we were unable to recover it. 00:36:12.887 [2024-12-13 05:52:12.810273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.887 [2024-12-13 05:52:12.810305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.887 qpair failed and we were unable to recover it. 00:36:12.887 [2024-12-13 05:52:12.810486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.887 [2024-12-13 05:52:12.810519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.887 qpair failed and we were unable to recover it. 00:36:12.887 [2024-12-13 05:52:12.810735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.887 [2024-12-13 05:52:12.810768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.887 qpair failed and we were unable to recover it. 00:36:12.887 [2024-12-13 05:52:12.810876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.887 [2024-12-13 05:52:12.810908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.887 qpair failed and we were unable to recover it. 00:36:12.887 [2024-12-13 05:52:12.811183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.887 [2024-12-13 05:52:12.811215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.887 qpair failed and we were unable to recover it. 00:36:12.887 [2024-12-13 05:52:12.811349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.887 [2024-12-13 05:52:12.811383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.887 qpair failed and we were unable to recover it. 
00:36:12.887 [2024-12-13 05:52:12.811634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.887 [2024-12-13 05:52:12.811668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.887 qpair failed and we were unable to recover it. 00:36:12.887 [2024-12-13 05:52:12.811854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.887 [2024-12-13 05:52:12.811887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.887 qpair failed and we were unable to recover it. 00:36:12.887 [2024-12-13 05:52:12.812090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.887 [2024-12-13 05:52:12.812123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.887 qpair failed and we were unable to recover it. 00:36:12.887 [2024-12-13 05:52:12.812328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.887 [2024-12-13 05:52:12.812361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.887 qpair failed and we were unable to recover it. 00:36:12.887 [2024-12-13 05:52:12.812623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.887 [2024-12-13 05:52:12.812657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.887 qpair failed and we were unable to recover it. 00:36:12.887 [2024-12-13 05:52:12.812901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.887 [2024-12-13 05:52:12.812934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.887 qpair failed and we were unable to recover it. 00:36:12.887 [2024-12-13 05:52:12.813071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.887 [2024-12-13 05:52:12.813104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.887 qpair failed and we were unable to recover it. 00:36:12.887 [2024-12-13 05:52:12.813376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.887 [2024-12-13 05:52:12.813409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.887 qpair failed and we were unable to recover it. 00:36:12.887 [2024-12-13 05:52:12.813727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.887 [2024-12-13 05:52:12.813761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.887 qpair failed and we were unable to recover it. 00:36:12.887 [2024-12-13 05:52:12.814038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.887 [2024-12-13 05:52:12.814071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.887 qpair failed and we were unable to recover it. 
00:36:12.887 [2024-12-13 05:52:12.814357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.887 [2024-12-13 05:52:12.814390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.887 qpair failed and we were unable to recover it. 00:36:12.887 [2024-12-13 05:52:12.814670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.887 [2024-12-13 05:52:12.814704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.887 qpair failed and we were unable to recover it. 00:36:12.887 [2024-12-13 05:52:12.814927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.887 [2024-12-13 05:52:12.814965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.887 qpair failed and we were unable to recover it. 00:36:12.887 [2024-12-13 05:52:12.815148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.887 [2024-12-13 05:52:12.815181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.887 qpair failed and we were unable to recover it. 00:36:12.887 [2024-12-13 05:52:12.815376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.887 [2024-12-13 05:52:12.815408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.887 qpair failed and we were unable to recover it. 00:36:12.887 [2024-12-13 05:52:12.815611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.887 [2024-12-13 05:52:12.815645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.887 qpair failed and we were unable to recover it. 00:36:12.887 [2024-12-13 05:52:12.815898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.887 [2024-12-13 05:52:12.815931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.887 qpair failed and we were unable to recover it. 00:36:12.887 [2024-12-13 05:52:12.816184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.887 [2024-12-13 05:52:12.816216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.887 qpair failed and we were unable to recover it. 00:36:12.887 [2024-12-13 05:52:12.816439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.887 [2024-12-13 05:52:12.816484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.887 qpair failed and we were unable to recover it. 00:36:12.887 [2024-12-13 05:52:12.816784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.887 [2024-12-13 05:52:12.816817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.887 qpair failed and we were unable to recover it. 
00:36:12.887 [2024-12-13 05:52:12.816940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.887 [2024-12-13 05:52:12.816972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.887 qpair failed and we were unable to recover it. 00:36:12.887 [2024-12-13 05:52:12.817168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.887 [2024-12-13 05:52:12.817200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.887 qpair failed and we were unable to recover it. 00:36:12.887 [2024-12-13 05:52:12.817491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.887 [2024-12-13 05:52:12.817525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.887 qpair failed and we were unable to recover it. 00:36:12.887 [2024-12-13 05:52:12.817720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.887 [2024-12-13 05:52:12.817754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.887 qpair failed and we were unable to recover it. 00:36:12.887 [2024-12-13 05:52:12.818010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.887 [2024-12-13 05:52:12.818043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.887 qpair failed and we were unable to recover it. 00:36:12.887 [2024-12-13 05:52:12.818239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.887 [2024-12-13 05:52:12.818272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.887 qpair failed and we were unable to recover it. 00:36:12.887 [2024-12-13 05:52:12.818475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.887 [2024-12-13 05:52:12.818510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.887 qpair failed and we were unable to recover it. 00:36:12.887 [2024-12-13 05:52:12.818705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.887 [2024-12-13 05:52:12.818738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.887 qpair failed and we were unable to recover it. 00:36:12.887 [2024-12-13 05:52:12.819018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.887 [2024-12-13 05:52:12.819051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.887 qpair failed and we were unable to recover it. 00:36:12.887 [2024-12-13 05:52:12.819307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.887 [2024-12-13 05:52:12.819340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.887 qpair failed and we were unable to recover it. 
00:36:12.887 [2024-12-13 05:52:12.819556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.887 [2024-12-13 05:52:12.819591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.887 qpair failed and we were unable to recover it. 00:36:12.887 [2024-12-13 05:52:12.819845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.887 [2024-12-13 05:52:12.819877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.887 qpair failed and we were unable to recover it. 00:36:12.887 [2024-12-13 05:52:12.820180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.887 [2024-12-13 05:52:12.820214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.887 qpair failed and we were unable to recover it. 00:36:12.888 [2024-12-13 05:52:12.820503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.888 [2024-12-13 05:52:12.820537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.888 qpair failed and we were unable to recover it. 00:36:12.888 [2024-12-13 05:52:12.820811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.888 [2024-12-13 05:52:12.820843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.888 qpair failed and we were unable to recover it. 00:36:12.888 [2024-12-13 05:52:12.821137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.888 [2024-12-13 05:52:12.821170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.888 qpair failed and we were unable to recover it. 00:36:12.888 [2024-12-13 05:52:12.821443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.888 [2024-12-13 05:52:12.821501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.888 qpair failed and we were unable to recover it. 00:36:12.888 [2024-12-13 05:52:12.821769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.888 [2024-12-13 05:52:12.821802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.888 qpair failed and we were unable to recover it. 00:36:12.888 [2024-12-13 05:52:12.822077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.888 [2024-12-13 05:52:12.822110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.888 qpair failed and we were unable to recover it. 00:36:12.888 [2024-12-13 05:52:12.822399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.888 [2024-12-13 05:52:12.822438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.888 qpair failed and we were unable to recover it. 
00:36:12.888 [2024-12-13 05:52:12.822655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.888 [2024-12-13 05:52:12.822688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.888 qpair failed and we were unable to recover it. 00:36:12.888 [2024-12-13 05:52:12.822961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.888 [2024-12-13 05:52:12.822994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.888 qpair failed and we were unable to recover it. 00:36:12.888 [2024-12-13 05:52:12.823274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.888 [2024-12-13 05:52:12.823305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.888 qpair failed and we were unable to recover it. 00:36:12.888 [2024-12-13 05:52:12.823535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.888 [2024-12-13 05:52:12.823569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.888 qpair failed and we were unable to recover it. 00:36:12.888 [2024-12-13 05:52:12.823787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.888 [2024-12-13 05:52:12.823820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.888 qpair failed and we were unable to recover it. 00:36:12.888 [2024-12-13 05:52:12.824012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.888 [2024-12-13 05:52:12.824045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.888 qpair failed and we were unable to recover it. 00:36:12.888 [2024-12-13 05:52:12.824266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.888 [2024-12-13 05:52:12.824299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.888 qpair failed and we were unable to recover it. 00:36:12.888 [2024-12-13 05:52:12.824479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.888 [2024-12-13 05:52:12.824513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.888 qpair failed and we were unable to recover it. 00:36:12.888 [2024-12-13 05:52:12.824787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.888 [2024-12-13 05:52:12.824820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.888 qpair failed and we were unable to recover it. 00:36:12.888 [2024-12-13 05:52:12.825128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.888 [2024-12-13 05:52:12.825161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.888 qpair failed and we were unable to recover it. 
00:36:12.888 [2024-12-13 05:52:12.825418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.888 [2024-12-13 05:52:12.825467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.888 qpair failed and we were unable to recover it. 00:36:12.888 [2024-12-13 05:52:12.825758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.888 [2024-12-13 05:52:12.825791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.888 qpair failed and we were unable to recover it. 00:36:12.888 [2024-12-13 05:52:12.826007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.888 [2024-12-13 05:52:12.826040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.888 qpair failed and we were unable to recover it. 00:36:12.888 [2024-12-13 05:52:12.826267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.888 [2024-12-13 05:52:12.826300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.888 qpair failed and we were unable to recover it. 00:36:12.888 [2024-12-13 05:52:12.826577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.888 [2024-12-13 05:52:12.826612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.888 qpair failed and we were unable to recover it. 00:36:12.888 [2024-12-13 05:52:12.826895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.888 [2024-12-13 05:52:12.826928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.888 qpair failed and we were unable to recover it. 00:36:12.888 [2024-12-13 05:52:12.827231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.888 [2024-12-13 05:52:12.827264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.888 qpair failed and we were unable to recover it. 00:36:12.888 [2024-12-13 05:52:12.827471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.888 [2024-12-13 05:52:12.827505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.888 qpair failed and we were unable to recover it. 00:36:12.888 [2024-12-13 05:52:12.827762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.888 [2024-12-13 05:52:12.827795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.888 qpair failed and we were unable to recover it. 00:36:12.888 [2024-12-13 05:52:12.828087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.888 [2024-12-13 05:52:12.828119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.888 qpair failed and we were unable to recover it. 
00:36:12.888 [2024-12-13 05:52:12.828333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.888 [2024-12-13 05:52:12.828366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.888 qpair failed and we were unable to recover it. 00:36:12.888 [2024-12-13 05:52:12.828641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.888 [2024-12-13 05:52:12.828676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.888 qpair failed and we were unable to recover it. 00:36:12.888 [2024-12-13 05:52:12.828946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.888 [2024-12-13 05:52:12.828978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.888 qpair failed and we were unable to recover it. 00:36:12.888 [2024-12-13 05:52:12.829275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.888 [2024-12-13 05:52:12.829308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.888 qpair failed and we were unable to recover it. 00:36:12.888 [2024-12-13 05:52:12.829579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.888 [2024-12-13 05:52:12.829613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.888 qpair failed and we were unable to recover it. 00:36:12.888 [2024-12-13 05:52:12.829904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.888 [2024-12-13 05:52:12.829936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.888 qpair failed and we were unable to recover it. 00:36:12.888 [2024-12-13 05:52:12.830214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.888 [2024-12-13 05:52:12.830247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.888 qpair failed and we were unable to recover it. 00:36:12.888 [2024-12-13 05:52:12.830530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.888 [2024-12-13 05:52:12.830564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.888 qpair failed and we were unable to recover it. 00:36:12.888 [2024-12-13 05:52:12.830826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.888 [2024-12-13 05:52:12.830859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.888 qpair failed and we were unable to recover it. 00:36:12.888 [2024-12-13 05:52:12.831053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.888 [2024-12-13 05:52:12.831087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.888 qpair failed and we were unable to recover it. 
00:36:12.888 [2024-12-13 05:52:12.831337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.889 [2024-12-13 05:52:12.831369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.889 qpair failed and we were unable to recover it. 00:36:12.889 [2024-12-13 05:52:12.831641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.889 [2024-12-13 05:52:12.831675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.889 qpair failed and we were unable to recover it. 00:36:12.889 [2024-12-13 05:52:12.831882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.889 [2024-12-13 05:52:12.831916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.889 qpair failed and we were unable to recover it. 00:36:12.889 [2024-12-13 05:52:12.832193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.889 [2024-12-13 05:52:12.832226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.889 qpair failed and we were unable to recover it. 00:36:12.889 [2024-12-13 05:52:12.832479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.889 [2024-12-13 05:52:12.832513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.889 qpair failed and we were unable to recover it. 00:36:12.889 [2024-12-13 05:52:12.832802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.889 [2024-12-13 05:52:12.832835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.889 qpair failed and we were unable to recover it. 00:36:12.889 [2024-12-13 05:52:12.833113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.889 [2024-12-13 05:52:12.833145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.889 qpair failed and we were unable to recover it. 00:36:12.889 [2024-12-13 05:52:12.833428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.889 [2024-12-13 05:52:12.833479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.889 qpair failed and we were unable to recover it. 00:36:12.889 [2024-12-13 05:52:12.833743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.889 [2024-12-13 05:52:12.833776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.889 qpair failed and we were unable to recover it. 00:36:12.889 [2024-12-13 05:52:12.833899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.889 [2024-12-13 05:52:12.833931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:12.889 qpair failed and we were unable to recover it. 
00:36:12.889 [2024-12-13 05:52:12.834154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.889 [2024-12-13 05:52:12.834187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420
00:36:12.889 qpair failed and we were unable to recover it.
[... the same three-line failure (posix.c:1054:posix_sock_create connect() failed with errno = 111, followed by nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock reporting a sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it.") repeats continuously from 05:52:12.834 through 05:52:12.892; duplicate entries omitted ...]
00:36:13.175 [2024-12-13 05:52:12.892682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.175 [2024-12-13 05:52:12.892714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420
00:36:13.175 qpair failed and we were unable to recover it.
00:36:13.175 [2024-12-13 05:52:12.892914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.175 [2024-12-13 05:52:12.892947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:13.175 qpair failed and we were unable to recover it. 00:36:13.175 [2024-12-13 05:52:12.893222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.175 [2024-12-13 05:52:12.893255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:13.175 qpair failed and we were unable to recover it. 00:36:13.175 [2024-12-13 05:52:12.893504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.176 [2024-12-13 05:52:12.893538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:13.176 qpair failed and we were unable to recover it. 00:36:13.176 [2024-12-13 05:52:12.893800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.176 [2024-12-13 05:52:12.893833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:13.176 qpair failed and we were unable to recover it. 00:36:13.176 [2024-12-13 05:52:12.894023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.176 [2024-12-13 05:52:12.894056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:13.176 qpair failed and we were unable to recover it. 00:36:13.176 [2024-12-13 05:52:12.894254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.176 [2024-12-13 05:52:12.894286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:13.176 qpair failed and we were unable to recover it. 00:36:13.176 [2024-12-13 05:52:12.894479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.176 [2024-12-13 05:52:12.894513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:13.176 qpair failed and we were unable to recover it. 00:36:13.176 [2024-12-13 05:52:12.894780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.176 [2024-12-13 05:52:12.894815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:13.176 qpair failed and we were unable to recover it. 00:36:13.176 [2024-12-13 05:52:12.895065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.176 [2024-12-13 05:52:12.895097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:13.176 qpair failed and we were unable to recover it. 00:36:13.176 [2024-12-13 05:52:12.895378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.176 [2024-12-13 05:52:12.895411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:13.176 qpair failed and we were unable to recover it. 
00:36:13.176 [2024-12-13 05:52:12.895652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.176 [2024-12-13 05:52:12.895687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:13.176 qpair failed and we were unable to recover it. 00:36:13.176 [2024-12-13 05:52:12.895950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.176 [2024-12-13 05:52:12.895982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:13.176 qpair failed and we were unable to recover it. 00:36:13.176 [2024-12-13 05:52:12.896282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.176 [2024-12-13 05:52:12.896315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:13.176 qpair failed and we were unable to recover it. 00:36:13.176 [2024-12-13 05:52:12.896584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.176 [2024-12-13 05:52:12.896618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:13.176 qpair failed and we were unable to recover it. 00:36:13.176 [2024-12-13 05:52:12.896912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.176 [2024-12-13 05:52:12.896945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:13.176 qpair failed and we were unable to recover it. 00:36:13.176 [2024-12-13 05:52:12.897212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.176 [2024-12-13 05:52:12.897245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:13.176 qpair failed and we were unable to recover it. 00:36:13.176 [2024-12-13 05:52:12.897527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.176 [2024-12-13 05:52:12.897563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:13.176 qpair failed and we were unable to recover it. 00:36:13.176 [2024-12-13 05:52:12.897844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.176 [2024-12-13 05:52:12.897877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:13.176 qpair failed and we were unable to recover it. 00:36:13.176 [2024-12-13 05:52:12.898059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.176 [2024-12-13 05:52:12.898092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:13.176 qpair failed and we were unable to recover it. 00:36:13.176 [2024-12-13 05:52:12.898371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.176 [2024-12-13 05:52:12.898404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:13.176 qpair failed and we were unable to recover it. 
00:36:13.176 [2024-12-13 05:52:12.898616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.176 [2024-12-13 05:52:12.898650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:13.176 qpair failed and we were unable to recover it. 00:36:13.176 [2024-12-13 05:52:12.898913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.176 [2024-12-13 05:52:12.898946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:13.176 qpair failed and we were unable to recover it. 00:36:13.176 [2024-12-13 05:52:12.899127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.176 [2024-12-13 05:52:12.899159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:13.176 qpair failed and we were unable to recover it. 00:36:13.176 [2024-12-13 05:52:12.899491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.176 [2024-12-13 05:52:12.899525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:13.176 qpair failed and we were unable to recover it. 00:36:13.176 [2024-12-13 05:52:12.899820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.176 [2024-12-13 05:52:12.899853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:13.176 qpair failed and we were unable to recover it. 00:36:13.176 [2024-12-13 05:52:12.900047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.176 [2024-12-13 05:52:12.900079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:13.176 qpair failed and we were unable to recover it. 00:36:13.176 [2024-12-13 05:52:12.900353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.176 [2024-12-13 05:52:12.900386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:13.176 qpair failed and we were unable to recover it. 00:36:13.176 [2024-12-13 05:52:12.900518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.176 [2024-12-13 05:52:12.900563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:13.176 qpair failed and we were unable to recover it. 00:36:13.176 [2024-12-13 05:52:12.900818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.176 [2024-12-13 05:52:12.900850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:13.176 qpair failed and we were unable to recover it. 00:36:13.176 [2024-12-13 05:52:12.901128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.176 [2024-12-13 05:52:12.901162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:13.176 qpair failed and we were unable to recover it. 
00:36:13.176 [2024-12-13 05:52:12.901300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.176 [2024-12-13 05:52:12.901333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:13.176 qpair failed and we were unable to recover it. 00:36:13.176 [2024-12-13 05:52:12.901585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.176 [2024-12-13 05:52:12.901619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:13.176 qpair failed and we were unable to recover it. 00:36:13.176 [2024-12-13 05:52:12.901915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.176 [2024-12-13 05:52:12.901948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:13.176 qpair failed and we were unable to recover it. 00:36:13.176 [2024-12-13 05:52:12.902218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.176 [2024-12-13 05:52:12.902251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:13.176 qpair failed and we were unable to recover it. 00:36:13.176 [2024-12-13 05:52:12.902530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.176 [2024-12-13 05:52:12.902570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:13.176 qpair failed and we were unable to recover it. 00:36:13.176 [2024-12-13 05:52:12.902769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.176 [2024-12-13 05:52:12.902803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:13.177 qpair failed and we were unable to recover it. 00:36:13.177 [2024-12-13 05:52:12.902982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.177 [2024-12-13 05:52:12.903016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:13.177 qpair failed and we were unable to recover it. 00:36:13.177 [2024-12-13 05:52:12.903292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.177 [2024-12-13 05:52:12.903326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:13.177 qpair failed and we were unable to recover it. 00:36:13.177 [2024-12-13 05:52:12.903519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.177 [2024-12-13 05:52:12.903552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:13.177 qpair failed and we were unable to recover it. 00:36:13.177 [2024-12-13 05:52:12.903808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.177 [2024-12-13 05:52:12.903840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:13.177 qpair failed and we were unable to recover it. 
00:36:13.177 [2024-12-13 05:52:12.904070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.177 [2024-12-13 05:52:12.904103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:13.177 qpair failed and we were unable to recover it. 00:36:13.177 [2024-12-13 05:52:12.904344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.177 [2024-12-13 05:52:12.904377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:13.177 qpair failed and we were unable to recover it. 00:36:13.177 [2024-12-13 05:52:12.904664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.177 [2024-12-13 05:52:12.904698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:13.177 qpair failed and we were unable to recover it. 00:36:13.177 [2024-12-13 05:52:12.904976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.177 [2024-12-13 05:52:12.905010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:13.177 qpair failed and we were unable to recover it. 00:36:13.177 [2024-12-13 05:52:12.905202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.177 [2024-12-13 05:52:12.905235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:13.177 qpair failed and we were unable to recover it. 00:36:13.177 [2024-12-13 05:52:12.905523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.177 [2024-12-13 05:52:12.905556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:13.177 qpair failed and we were unable to recover it. 00:36:13.177 [2024-12-13 05:52:12.905840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.177 [2024-12-13 05:52:12.905874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:13.177 qpair failed and we were unable to recover it. 00:36:13.177 [2024-12-13 05:52:12.906055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.177 [2024-12-13 05:52:12.906087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:13.177 qpair failed and we were unable to recover it. 00:36:13.177 [2024-12-13 05:52:12.906344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.177 [2024-12-13 05:52:12.906377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:13.177 qpair failed and we were unable to recover it. 00:36:13.177 [2024-12-13 05:52:12.906576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.177 [2024-12-13 05:52:12.906611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:13.177 qpair failed and we were unable to recover it. 
00:36:13.177 [2024-12-13 05:52:12.906754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.177 [2024-12-13 05:52:12.906787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:13.177 qpair failed and we were unable to recover it. 00:36:13.177 [2024-12-13 05:52:12.907060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.177 [2024-12-13 05:52:12.907093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:13.177 qpair failed and we were unable to recover it. 00:36:13.177 [2024-12-13 05:52:12.907355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.177 [2024-12-13 05:52:12.907387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:13.177 qpair failed and we were unable to recover it. 00:36:13.177 [2024-12-13 05:52:12.907691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.177 [2024-12-13 05:52:12.907725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:13.177 qpair failed and we were unable to recover it. 00:36:13.177 [2024-12-13 05:52:12.907920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.177 [2024-12-13 05:52:12.907952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:13.177 qpair failed and we were unable to recover it. 00:36:13.177 [2024-12-13 05:52:12.908215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.177 [2024-12-13 05:52:12.908248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:13.177 qpair failed and we were unable to recover it. 00:36:13.177 [2024-12-13 05:52:12.908546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.177 [2024-12-13 05:52:12.908581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:13.177 qpair failed and we were unable to recover it. 00:36:13.177 [2024-12-13 05:52:12.908848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.177 [2024-12-13 05:52:12.908880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:13.177 qpair failed and we were unable to recover it. 00:36:13.177 [2024-12-13 05:52:12.909166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.177 [2024-12-13 05:52:12.909199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:13.177 qpair failed and we were unable to recover it. 00:36:13.177 [2024-12-13 05:52:12.909480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.177 [2024-12-13 05:52:12.909514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:13.177 qpair failed and we were unable to recover it. 
00:36:13.177 [2024-12-13 05:52:12.909819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.177 [2024-12-13 05:52:12.909852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:13.177 qpair failed and we were unable to recover it. 00:36:13.177 [2024-12-13 05:52:12.910116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.177 [2024-12-13 05:52:12.910155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:13.177 qpair failed and we were unable to recover it. 00:36:13.177 [2024-12-13 05:52:12.910351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.177 [2024-12-13 05:52:12.910384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:13.177 qpair failed and we were unable to recover it. 00:36:13.177 [2024-12-13 05:52:12.910601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.177 [2024-12-13 05:52:12.910635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:13.177 qpair failed and we were unable to recover it. 00:36:13.177 [2024-12-13 05:52:12.910853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.177 [2024-12-13 05:52:12.910887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:13.177 qpair failed and we were unable to recover it. 00:36:13.177 [2024-12-13 05:52:12.911183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.177 [2024-12-13 05:52:12.911216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:13.177 qpair failed and we were unable to recover it. 00:36:13.177 [2024-12-13 05:52:12.911409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.177 [2024-12-13 05:52:12.911443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:13.177 qpair failed and we were unable to recover it. 00:36:13.177 [2024-12-13 05:52:12.911689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.177 [2024-12-13 05:52:12.911723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:13.177 qpair failed and we were unable to recover it. 00:36:13.177 [2024-12-13 05:52:12.912025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.177 [2024-12-13 05:52:12.912058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:13.177 qpair failed and we were unable to recover it. 00:36:13.177 [2024-12-13 05:52:12.912325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.177 [2024-12-13 05:52:12.912357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:13.177 qpair failed and we were unable to recover it. 
00:36:13.177 [2024-12-13 05:52:12.912636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.177 [2024-12-13 05:52:12.912670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:13.177 qpair failed and we were unable to recover it. 00:36:13.177 [2024-12-13 05:52:12.912908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.177 [2024-12-13 05:52:12.912941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:13.177 qpair failed and we were unable to recover it. 00:36:13.177 [2024-12-13 05:52:12.913207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.177 [2024-12-13 05:52:12.913240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:13.177 qpair failed and we were unable to recover it. 00:36:13.177 [2024-12-13 05:52:12.913537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.178 [2024-12-13 05:52:12.913572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:13.178 qpair failed and we were unable to recover it. 00:36:13.178 [2024-12-13 05:52:12.913818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.178 [2024-12-13 05:52:12.913850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:13.178 qpair failed and we were unable to recover it. 00:36:13.178 [2024-12-13 05:52:12.914164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.178 [2024-12-13 05:52:12.914197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:13.178 qpair failed and we were unable to recover it. 00:36:13.178 [2024-12-13 05:52:12.914489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.178 [2024-12-13 05:52:12.914523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:13.178 qpair failed and we were unable to recover it. 00:36:13.178 [2024-12-13 05:52:12.914799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.178 [2024-12-13 05:52:12.914832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:13.178 qpair failed and we were unable to recover it. 00:36:13.178 [2024-12-13 05:52:12.915009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.178 [2024-12-13 05:52:12.915042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:13.178 qpair failed and we were unable to recover it. 00:36:13.178 [2024-12-13 05:52:12.915317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.178 [2024-12-13 05:52:12.915351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:13.178 qpair failed and we were unable to recover it. 
00:36:13.178 [2024-12-13 05:52:12.915548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.178 [2024-12-13 05:52:12.915582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:13.178 qpair failed and we were unable to recover it. 00:36:13.178 [2024-12-13 05:52:12.915787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.178 [2024-12-13 05:52:12.915820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:13.178 qpair failed and we were unable to recover it. 00:36:13.178 [2024-12-13 05:52:12.916095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.178 [2024-12-13 05:52:12.916128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:13.178 qpair failed and we were unable to recover it. 00:36:13.178 [2024-12-13 05:52:12.916383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.178 [2024-12-13 05:52:12.916415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:13.178 qpair failed and we were unable to recover it. 00:36:13.178 [2024-12-13 05:52:12.916680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.178 [2024-12-13 05:52:12.916714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:13.178 qpair failed and we were unable to recover it. 00:36:13.178 [2024-12-13 05:52:12.916965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.178 [2024-12-13 05:52:12.916999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:13.178 qpair failed and we were unable to recover it. 00:36:13.178 [2024-12-13 05:52:12.917306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.178 [2024-12-13 05:52:12.917339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:13.178 qpair failed and we were unable to recover it. 00:36:13.178 [2024-12-13 05:52:12.917618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.178 [2024-12-13 05:52:12.917653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:13.178 qpair failed and we were unable to recover it. 00:36:13.178 [2024-12-13 05:52:12.917904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.178 [2024-12-13 05:52:12.917937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:13.178 qpair failed and we were unable to recover it. 00:36:13.178 [2024-12-13 05:52:12.918145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.178 [2024-12-13 05:52:12.918178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:13.178 qpair failed and we were unable to recover it. 
00:36:13.178 [2024-12-13 05:52:12.918458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.178 [2024-12-13 05:52:12.918492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:13.178 qpair failed and we were unable to recover it. 00:36:13.178 [2024-12-13 05:52:12.918770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.178 [2024-12-13 05:52:12.918803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:13.178 qpair failed and we were unable to recover it. 00:36:13.178 [2024-12-13 05:52:12.919078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.178 [2024-12-13 05:52:12.919111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:13.178 qpair failed and we were unable to recover it. 00:36:13.178 [2024-12-13 05:52:12.919403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.178 [2024-12-13 05:52:12.919436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:13.178 qpair failed and we were unable to recover it. 00:36:13.178 [2024-12-13 05:52:12.919711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.178 [2024-12-13 05:52:12.919744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:13.178 qpair failed and we were unable to recover it. 00:36:13.178 [2024-12-13 05:52:12.919945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.178 [2024-12-13 05:52:12.919978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:13.178 qpair failed and we were unable to recover it. 00:36:13.178 [2024-12-13 05:52:12.920254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.178 [2024-12-13 05:52:12.920287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:13.178 qpair failed and we were unable to recover it. 00:36:13.178 [2024-12-13 05:52:12.920486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.178 [2024-12-13 05:52:12.920520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:13.178 qpair failed and we were unable to recover it. 00:36:13.178 [2024-12-13 05:52:12.920702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.178 [2024-12-13 05:52:12.920735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:13.178 qpair failed and we were unable to recover it. 00:36:13.178 [2024-12-13 05:52:12.920865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.178 [2024-12-13 05:52:12.920898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:13.178 qpair failed and we were unable to recover it. 
00:36:13.178 [2024-12-13 05:52:12.921029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.178 [2024-12-13 05:52:12.921061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:13.178 qpair failed and we were unable to recover it. 00:36:13.178 [2024-12-13 05:52:12.921358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.178 [2024-12-13 05:52:12.921390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420 00:36:13.178 qpair failed and we were unable to recover it. 00:36:13.178 [2024-12-13 05:52:12.921773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.178 [2024-12-13 05:52:12.921850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.178 qpair failed and we were unable to recover it. 00:36:13.178 [2024-12-13 05:52:12.922163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.178 [2024-12-13 05:52:12.922201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.178 qpair failed and we were unable to recover it. 00:36:13.178 [2024-12-13 05:52:12.922489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.178 [2024-12-13 05:52:12.922525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.178 qpair failed and we were unable to recover it. 00:36:13.178 [2024-12-13 05:52:12.922814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.178 [2024-12-13 05:52:12.922846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.178 qpair failed and we were unable to recover it. 00:36:13.178 [2024-12-13 05:52:12.923064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.178 [2024-12-13 05:52:12.923096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.178 qpair failed and we were unable to recover it. 00:36:13.178 [2024-12-13 05:52:12.923288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.178 [2024-12-13 05:52:12.923321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.178 qpair failed and we were unable to recover it. 00:36:13.178 [2024-12-13 05:52:12.923523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.178 [2024-12-13 05:52:12.923557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.178 qpair failed and we were unable to recover it. 00:36:13.178 [2024-12-13 05:52:12.923754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.178 [2024-12-13 05:52:12.923787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.178 qpair failed and we were unable to recover it. 
00:36:13.178 [2024-12-13 05:52:12.924037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.178 [2024-12-13 05:52:12.924069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.178 qpair failed and we were unable to recover it. 00:36:13.178 [2024-12-13 05:52:12.924369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.179 [2024-12-13 05:52:12.924402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.179 qpair failed and we were unable to recover it. 00:36:13.179 [2024-12-13 05:52:12.924694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.179 [2024-12-13 05:52:12.924728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.179 qpair failed and we were unable to recover it. 00:36:13.179 [2024-12-13 05:52:12.925002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.179 [2024-12-13 05:52:12.925034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.179 qpair failed and we were unable to recover it. 00:36:13.179 [2024-12-13 05:52:12.925313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.179 [2024-12-13 05:52:12.925345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.179 qpair failed and we were unable to recover it. 00:36:13.179 [2024-12-13 05:52:12.925557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.179 [2024-12-13 05:52:12.925602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.179 qpair failed and we were unable to recover it. 00:36:13.179 [2024-12-13 05:52:12.925780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.179 [2024-12-13 05:52:12.925813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.179 qpair failed and we were unable to recover it. 00:36:13.179 [2024-12-13 05:52:12.926030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.179 [2024-12-13 05:52:12.926063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.179 qpair failed and we were unable to recover it. 00:36:13.179 [2024-12-13 05:52:12.926334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.179 [2024-12-13 05:52:12.926367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.179 qpair failed and we were unable to recover it. 00:36:13.179 [2024-12-13 05:52:12.926631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.179 [2024-12-13 05:52:12.926666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.179 qpair failed and we were unable to recover it. 
00:36:13.179 [2024-12-13 05:52:12.926964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.179 [2024-12-13 05:52:12.926996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.179 qpair failed and we were unable to recover it. 00:36:13.179 [2024-12-13 05:52:12.927215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.179 [2024-12-13 05:52:12.927248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.179 qpair failed and we were unable to recover it. 00:36:13.179 [2024-12-13 05:52:12.927551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.179 [2024-12-13 05:52:12.927584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.179 qpair failed and we were unable to recover it. 00:36:13.179 [2024-12-13 05:52:12.927845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.179 [2024-12-13 05:52:12.927878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.179 qpair failed and we were unable to recover it. 00:36:13.179 [2024-12-13 05:52:12.928181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.179 [2024-12-13 05:52:12.928214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.179 qpair failed and we were unable to recover it. 00:36:13.179 [2024-12-13 05:52:12.928478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.179 [2024-12-13 05:52:12.928511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.179 qpair failed and we were unable to recover it. 00:36:13.179 [2024-12-13 05:52:12.928646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.179 [2024-12-13 05:52:12.928678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.179 qpair failed and we were unable to recover it. 00:36:13.179 [2024-12-13 05:52:12.928883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.179 [2024-12-13 05:52:12.928916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.179 qpair failed and we were unable to recover it. 00:36:13.179 [2024-12-13 05:52:12.929140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.179 [2024-12-13 05:52:12.929172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.179 qpair failed and we were unable to recover it. 00:36:13.179 [2024-12-13 05:52:12.929461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.179 [2024-12-13 05:52:12.929495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.179 qpair failed and we were unable to recover it. 
00:36:13.179 [2024-12-13 05:52:12.929773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.179 [2024-12-13 05:52:12.929805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420
00:36:13.179 qpair failed and we were unable to recover it.
00:36:13.184 [... the same three messages — connect() failed with errno = 111 in posix.c:1054:posix_sock_create, the sock connection error for tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 in nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock, and "qpair failed and we were unable to recover it." — repeated continuously from 05:52:12.929938 through 05:52:12.988168; every reconnect attempt in this window failed and no qpair recovered ...]
00:36:13.185 [2024-12-13 05:52:12.988393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.185 [2024-12-13 05:52:12.988425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.185 qpair failed and we were unable to recover it. 00:36:13.185 [2024-12-13 05:52:12.988714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.185 [2024-12-13 05:52:12.988747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.185 qpair failed and we were unable to recover it. 00:36:13.185 [2024-12-13 05:52:12.988940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.185 [2024-12-13 05:52:12.988978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.185 qpair failed and we were unable to recover it. 00:36:13.185 [2024-12-13 05:52:12.989276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.185 [2024-12-13 05:52:12.989309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.185 qpair failed and we were unable to recover it. 00:36:13.185 [2024-12-13 05:52:12.989572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.185 [2024-12-13 05:52:12.989605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.185 qpair failed and we were unable to recover it. 00:36:13.185 [2024-12-13 05:52:12.989805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.185 [2024-12-13 05:52:12.989838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.185 qpair failed and we were unable to recover it. 00:36:13.185 [2024-12-13 05:52:12.990045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.185 [2024-12-13 05:52:12.990077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.185 qpair failed and we were unable to recover it. 00:36:13.185 [2024-12-13 05:52:12.990329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.185 [2024-12-13 05:52:12.990361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.185 qpair failed and we were unable to recover it. 00:36:13.185 [2024-12-13 05:52:12.990611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.185 [2024-12-13 05:52:12.990645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.185 qpair failed and we were unable to recover it. 00:36:13.185 [2024-12-13 05:52:12.990922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.185 [2024-12-13 05:52:12.990955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.185 qpair failed and we were unable to recover it. 
00:36:13.185 [2024-12-13 05:52:12.991156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.185 [2024-12-13 05:52:12.991188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.185 qpair failed and we were unable to recover it. 00:36:13.185 [2024-12-13 05:52:12.991443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.185 [2024-12-13 05:52:12.991499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.185 qpair failed and we were unable to recover it. 00:36:13.185 [2024-12-13 05:52:12.991691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.185 [2024-12-13 05:52:12.991723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.185 qpair failed and we were unable to recover it. 00:36:13.185 [2024-12-13 05:52:12.991990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.185 [2024-12-13 05:52:12.992022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.185 qpair failed and we were unable to recover it. 00:36:13.185 [2024-12-13 05:52:12.992318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.185 [2024-12-13 05:52:12.992351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.185 qpair failed and we were unable to recover it. 00:36:13.185 [2024-12-13 05:52:12.992531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.185 [2024-12-13 05:52:12.992564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.185 qpair failed and we were unable to recover it. 00:36:13.185 [2024-12-13 05:52:12.992790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.185 [2024-12-13 05:52:12.992823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.185 qpair failed and we were unable to recover it. 00:36:13.185 [2024-12-13 05:52:12.993088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.185 [2024-12-13 05:52:12.993121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.185 qpair failed and we were unable to recover it. 00:36:13.185 [2024-12-13 05:52:12.993418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.185 [2024-12-13 05:52:12.993458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.185 qpair failed and we were unable to recover it. 00:36:13.185 [2024-12-13 05:52:12.993723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.185 [2024-12-13 05:52:12.993756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.185 qpair failed and we were unable to recover it. 
00:36:13.185 [2024-12-13 05:52:12.994046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.185 [2024-12-13 05:52:12.994078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.185 qpair failed and we were unable to recover it. 00:36:13.185 [2024-12-13 05:52:12.994309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.185 [2024-12-13 05:52:12.994341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.185 qpair failed and we were unable to recover it. 00:36:13.185 [2024-12-13 05:52:12.994599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.185 [2024-12-13 05:52:12.994633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.185 qpair failed and we were unable to recover it. 00:36:13.185 [2024-12-13 05:52:12.994931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.185 [2024-12-13 05:52:12.994963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.185 qpair failed and we were unable to recover it. 00:36:13.185 [2024-12-13 05:52:12.995250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.185 [2024-12-13 05:52:12.995283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.185 qpair failed and we were unable to recover it. 00:36:13.185 [2024-12-13 05:52:12.995482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.185 [2024-12-13 05:52:12.995516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.185 qpair failed and we were unable to recover it. 00:36:13.185 [2024-12-13 05:52:12.995777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.185 [2024-12-13 05:52:12.995809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.185 qpair failed and we were unable to recover it. 00:36:13.185 [2024-12-13 05:52:12.996104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.185 [2024-12-13 05:52:12.996136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.185 qpair failed and we were unable to recover it. 00:36:13.185 [2024-12-13 05:52:12.996282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.185 [2024-12-13 05:52:12.996314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.185 qpair failed and we were unable to recover it. 00:36:13.185 [2024-12-13 05:52:12.996598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.185 [2024-12-13 05:52:12.996631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.185 qpair failed and we were unable to recover it. 
00:36:13.185 [2024-12-13 05:52:12.996953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.185 [2024-12-13 05:52:12.996985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.185 qpair failed and we were unable to recover it. 00:36:13.185 [2024-12-13 05:52:12.997106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.185 [2024-12-13 05:52:12.997138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.185 qpair failed and we were unable to recover it. 00:36:13.185 [2024-12-13 05:52:12.997412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.185 [2024-12-13 05:52:12.997444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.185 qpair failed and we were unable to recover it. 00:36:13.185 [2024-12-13 05:52:12.997739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.185 [2024-12-13 05:52:12.997772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.185 qpair failed and we were unable to recover it. 00:36:13.186 [2024-12-13 05:52:12.997994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.186 [2024-12-13 05:52:12.998026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.186 qpair failed and we were unable to recover it. 00:36:13.186 [2024-12-13 05:52:12.998293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.186 [2024-12-13 05:52:12.998325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.186 qpair failed and we were unable to recover it. 00:36:13.186 [2024-12-13 05:52:12.998519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.186 [2024-12-13 05:52:12.998553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.186 qpair failed and we were unable to recover it. 00:36:13.186 [2024-12-13 05:52:12.998772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.186 [2024-12-13 05:52:12.998805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.186 qpair failed and we were unable to recover it. 00:36:13.186 [2024-12-13 05:52:12.999104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.186 [2024-12-13 05:52:12.999136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.186 qpair failed and we were unable to recover it. 00:36:13.186 [2024-12-13 05:52:12.999407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.186 [2024-12-13 05:52:12.999440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.186 qpair failed and we were unable to recover it. 
00:36:13.186 [2024-12-13 05:52:12.999660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.186 [2024-12-13 05:52:12.999693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.186 qpair failed and we were unable to recover it. 00:36:13.186 [2024-12-13 05:52:12.999919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.186 [2024-12-13 05:52:12.999951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.186 qpair failed and we were unable to recover it. 00:36:13.186 [2024-12-13 05:52:13.000227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.186 [2024-12-13 05:52:13.000265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.186 qpair failed and we were unable to recover it. 00:36:13.186 [2024-12-13 05:52:13.000551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.186 [2024-12-13 05:52:13.000585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.186 qpair failed and we were unable to recover it. 00:36:13.186 [2024-12-13 05:52:13.000860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.186 [2024-12-13 05:52:13.000893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.186 qpair failed and we were unable to recover it. 00:36:13.186 [2024-12-13 05:52:13.001098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.186 [2024-12-13 05:52:13.001130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.186 qpair failed and we were unable to recover it. 00:36:13.186 [2024-12-13 05:52:13.001355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.186 [2024-12-13 05:52:13.001388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.186 qpair failed and we were unable to recover it. 00:36:13.186 [2024-12-13 05:52:13.001673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.186 [2024-12-13 05:52:13.001706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.186 qpair failed and we were unable to recover it. 00:36:13.186 [2024-12-13 05:52:13.001987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.186 [2024-12-13 05:52:13.002020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.186 qpair failed and we were unable to recover it. 00:36:13.186 [2024-12-13 05:52:13.002301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.186 [2024-12-13 05:52:13.002334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.186 qpair failed and we were unable to recover it. 
00:36:13.186 [2024-12-13 05:52:13.002557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.186 [2024-12-13 05:52:13.002592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.186 qpair failed and we were unable to recover it. 00:36:13.186 [2024-12-13 05:52:13.002785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.186 [2024-12-13 05:52:13.002817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.186 qpair failed and we were unable to recover it. 00:36:13.186 [2024-12-13 05:52:13.003077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.186 [2024-12-13 05:52:13.003110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.186 qpair failed and we were unable to recover it. 00:36:13.186 [2024-12-13 05:52:13.003361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.186 [2024-12-13 05:52:13.003392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.186 qpair failed and we were unable to recover it. 00:36:13.186 [2024-12-13 05:52:13.003596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.186 [2024-12-13 05:52:13.003630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.186 qpair failed and we were unable to recover it. 00:36:13.186 [2024-12-13 05:52:13.003810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.186 [2024-12-13 05:52:13.003843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.186 qpair failed and we were unable to recover it. 00:36:13.186 [2024-12-13 05:52:13.004102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.186 [2024-12-13 05:52:13.004135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.186 qpair failed and we were unable to recover it. 00:36:13.186 [2024-12-13 05:52:13.004390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.186 [2024-12-13 05:52:13.004423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.186 qpair failed and we were unable to recover it. 00:36:13.186 [2024-12-13 05:52:13.004716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.186 [2024-12-13 05:52:13.004750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.186 qpair failed and we were unable to recover it. 00:36:13.186 [2024-12-13 05:52:13.005038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.186 [2024-12-13 05:52:13.005070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.186 qpair failed and we were unable to recover it. 
00:36:13.186 [2024-12-13 05:52:13.005271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.186 [2024-12-13 05:52:13.005304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.186 qpair failed and we were unable to recover it. 00:36:13.186 [2024-12-13 05:52:13.005491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.186 [2024-12-13 05:52:13.005524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.186 qpair failed and we were unable to recover it. 00:36:13.186 [2024-12-13 05:52:13.005716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.186 [2024-12-13 05:52:13.005749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.186 qpair failed and we were unable to recover it. 00:36:13.186 [2024-12-13 05:52:13.006017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.186 [2024-12-13 05:52:13.006050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.186 qpair failed and we were unable to recover it. 00:36:13.186 [2024-12-13 05:52:13.006324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.186 [2024-12-13 05:52:13.006356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.186 qpair failed and we were unable to recover it. 00:36:13.186 [2024-12-13 05:52:13.006640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.186 [2024-12-13 05:52:13.006674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.186 qpair failed and we were unable to recover it. 00:36:13.186 [2024-12-13 05:52:13.006956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.186 [2024-12-13 05:52:13.006988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.186 qpair failed and we were unable to recover it. 00:36:13.186 [2024-12-13 05:52:13.007269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.186 [2024-12-13 05:52:13.007301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.186 qpair failed and we were unable to recover it. 00:36:13.187 [2024-12-13 05:52:13.007590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.187 [2024-12-13 05:52:13.007624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.187 qpair failed and we were unable to recover it. 00:36:13.187 [2024-12-13 05:52:13.007909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.187 [2024-12-13 05:52:13.007941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.187 qpair failed and we were unable to recover it. 
00:36:13.187 [2024-12-13 05:52:13.008223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.187 [2024-12-13 05:52:13.008255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.187 qpair failed and we were unable to recover it. 00:36:13.187 [2024-12-13 05:52:13.008569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.187 [2024-12-13 05:52:13.008603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.187 qpair failed and we were unable to recover it. 00:36:13.187 [2024-12-13 05:52:13.008725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.187 [2024-12-13 05:52:13.008758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.187 qpair failed and we were unable to recover it. 00:36:13.187 [2024-12-13 05:52:13.008961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.187 [2024-12-13 05:52:13.008994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.187 qpair failed and we were unable to recover it. 00:36:13.187 [2024-12-13 05:52:13.009265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.187 [2024-12-13 05:52:13.009297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.187 qpair failed and we were unable to recover it. 00:36:13.187 [2024-12-13 05:52:13.009505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.187 [2024-12-13 05:52:13.009540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.187 qpair failed and we were unable to recover it. 00:36:13.187 [2024-12-13 05:52:13.009837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.187 [2024-12-13 05:52:13.009870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.187 qpair failed and we were unable to recover it. 00:36:13.187 [2024-12-13 05:52:13.010132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.187 [2024-12-13 05:52:13.010165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.187 qpair failed and we were unable to recover it. 00:36:13.187 [2024-12-13 05:52:13.010472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.187 [2024-12-13 05:52:13.010506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.187 qpair failed and we were unable to recover it. 00:36:13.187 [2024-12-13 05:52:13.010766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.187 [2024-12-13 05:52:13.010798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.187 qpair failed and we were unable to recover it. 
00:36:13.187 [2024-12-13 05:52:13.011102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.187 [2024-12-13 05:52:13.011135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.187 qpair failed and we were unable to recover it. 00:36:13.187 [2024-12-13 05:52:13.011394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.187 [2024-12-13 05:52:13.011427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.187 qpair failed and we were unable to recover it. 00:36:13.187 [2024-12-13 05:52:13.011688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.187 [2024-12-13 05:52:13.011728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.187 qpair failed and we were unable to recover it. 00:36:13.187 [2024-12-13 05:52:13.011921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.187 [2024-12-13 05:52:13.011954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.187 qpair failed and we were unable to recover it. 00:36:13.187 [2024-12-13 05:52:13.012232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.187 [2024-12-13 05:52:13.012265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.187 qpair failed and we were unable to recover it. 00:36:13.187 [2024-12-13 05:52:13.012489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.187 [2024-12-13 05:52:13.012523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.187 qpair failed and we were unable to recover it. 00:36:13.187 [2024-12-13 05:52:13.012777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.187 [2024-12-13 05:52:13.012810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.187 qpair failed and we were unable to recover it. 00:36:13.187 [2024-12-13 05:52:13.013070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.187 [2024-12-13 05:52:13.013102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.187 qpair failed and we were unable to recover it. 00:36:13.187 [2024-12-13 05:52:13.013303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.187 [2024-12-13 05:52:13.013336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.187 qpair failed and we were unable to recover it. 00:36:13.187 [2024-12-13 05:52:13.013533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.187 [2024-12-13 05:52:13.013568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.187 qpair failed and we were unable to recover it. 
00:36:13.187 [2024-12-13 05:52:13.013767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.187 [2024-12-13 05:52:13.013799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.187 qpair failed and we were unable to recover it. 00:36:13.187 [2024-12-13 05:52:13.013977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.187 [2024-12-13 05:52:13.014010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.187 qpair failed and we were unable to recover it. 00:36:13.187 [2024-12-13 05:52:13.014259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.187 [2024-12-13 05:52:13.014292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.187 qpair failed and we were unable to recover it. 00:36:13.187 [2024-12-13 05:52:13.014550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.187 [2024-12-13 05:52:13.014584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.187 qpair failed and we were unable to recover it. 00:36:13.187 [2024-12-13 05:52:13.014884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.187 [2024-12-13 05:52:13.014917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.187 qpair failed and we were unable to recover it. 00:36:13.187 [2024-12-13 05:52:13.015104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.187 [2024-12-13 05:52:13.015135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.187 qpair failed and we were unable to recover it. 00:36:13.187 [2024-12-13 05:52:13.015419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.187 [2024-12-13 05:52:13.015460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.187 qpair failed and we were unable to recover it. 00:36:13.187 [2024-12-13 05:52:13.015736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.187 [2024-12-13 05:52:13.015769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.187 qpair failed and we were unable to recover it. 00:36:13.187 [2024-12-13 05:52:13.016021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.187 [2024-12-13 05:52:13.016052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.187 qpair failed and we were unable to recover it. 00:36:13.187 [2024-12-13 05:52:13.016266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.187 [2024-12-13 05:52:13.016299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.187 qpair failed and we were unable to recover it. 
00:36:13.187 [2024-12-13 05:52:13.016568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.187 [2024-12-13 05:52:13.016602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.187 qpair failed and we were unable to recover it. 00:36:13.187 [2024-12-13 05:52:13.016899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.187 [2024-12-13 05:52:13.016932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.187 qpair failed and we were unable to recover it. 00:36:13.187 [2024-12-13 05:52:13.017199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.187 [2024-12-13 05:52:13.017231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.187 qpair failed and we were unable to recover it. 00:36:13.187 [2024-12-13 05:52:13.017430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.187 [2024-12-13 05:52:13.017470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.187 qpair failed and we were unable to recover it. 00:36:13.187 [2024-12-13 05:52:13.017724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.187 [2024-12-13 05:52:13.017756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.187 qpair failed and we were unable to recover it. 00:36:13.187 [2024-12-13 05:52:13.018049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.187 [2024-12-13 05:52:13.018081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.188 qpair failed and we were unable to recover it. 00:36:13.188 [2024-12-13 05:52:13.018286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.188 [2024-12-13 05:52:13.018320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.188 qpair failed and we were unable to recover it. 00:36:13.188 [2024-12-13 05:52:13.018594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.188 [2024-12-13 05:52:13.018628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.188 qpair failed and we were unable to recover it. 00:36:13.188 [2024-12-13 05:52:13.018809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.188 [2024-12-13 05:52:13.018842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.188 qpair failed and we were unable to recover it. 00:36:13.188 [2024-12-13 05:52:13.019066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.188 [2024-12-13 05:52:13.019100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.188 qpair failed and we were unable to recover it. 
00:36:13.188 [2024-12-13 05:52:13.019298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.188 [2024-12-13 05:52:13.019330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.188 qpair failed and we were unable to recover it. 00:36:13.188 [2024-12-13 05:52:13.019550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.188 [2024-12-13 05:52:13.019584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.188 qpair failed and we were unable to recover it. 00:36:13.188 [2024-12-13 05:52:13.019861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.188 [2024-12-13 05:52:13.019894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.188 qpair failed and we were unable to recover it. 00:36:13.188 [2024-12-13 05:52:13.020155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.188 [2024-12-13 05:52:13.020187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.188 qpair failed and we were unable to recover it. 00:36:13.188 [2024-12-13 05:52:13.020489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.188 [2024-12-13 05:52:13.020523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.188 qpair failed and we were unable to recover it. 00:36:13.188 [2024-12-13 05:52:13.020723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.188 [2024-12-13 05:52:13.020756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.188 qpair failed and we were unable to recover it. 00:36:13.188 [2024-12-13 05:52:13.021034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.188 [2024-12-13 05:52:13.021067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.188 qpair failed and we were unable to recover it. 00:36:13.188 [2024-12-13 05:52:13.021318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.188 [2024-12-13 05:52:13.021350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.188 qpair failed and we were unable to recover it. 00:36:13.188 [2024-12-13 05:52:13.021533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.188 [2024-12-13 05:52:13.021567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.188 qpair failed and we were unable to recover it. 00:36:13.188 [2024-12-13 05:52:13.021796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.188 [2024-12-13 05:52:13.021828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.188 qpair failed and we were unable to recover it. 
00:36:13.188 [2024-12-13 05:52:13.022105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.188 [2024-12-13 05:52:13.022137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.188 qpair failed and we were unable to recover it. 00:36:13.188 [2024-12-13 05:52:13.022384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.188 [2024-12-13 05:52:13.022417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.188 qpair failed and we were unable to recover it. 00:36:13.188 [2024-12-13 05:52:13.022699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.188 [2024-12-13 05:52:13.022738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.188 qpair failed and we were unable to recover it. 00:36:13.188 [2024-12-13 05:52:13.022947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.188 [2024-12-13 05:52:13.022980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.188 qpair failed and we were unable to recover it. 00:36:13.188 [2024-12-13 05:52:13.023223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.188 [2024-12-13 05:52:13.023255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.188 qpair failed and we were unable to recover it. 00:36:13.188 [2024-12-13 05:52:13.023458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.188 [2024-12-13 05:52:13.023492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.188 qpair failed and we were unable to recover it. 00:36:13.188 [2024-12-13 05:52:13.023764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.188 [2024-12-13 05:52:13.023797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.188 qpair failed and we were unable to recover it. 00:36:13.188 [2024-12-13 05:52:13.024071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.188 [2024-12-13 05:52:13.024104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.188 qpair failed and we were unable to recover it. 00:36:13.188 [2024-12-13 05:52:13.024324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.188 [2024-12-13 05:52:13.024356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.188 qpair failed and we were unable to recover it. 00:36:13.188 [2024-12-13 05:52:13.024632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.188 [2024-12-13 05:52:13.024666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.188 qpair failed and we were unable to recover it. 
00:36:13.188 [2024-12-13 05:52:13.024889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.188 [2024-12-13 05:52:13.024922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.188 qpair failed and we were unable to recover it. 
00:36:13.188-00:36:13.194 [repeated: the connect()/qpair error triplet above recurs verbatim, with timestamps advancing from 2024-12-13 05:52:13.024889 through 05:52:13.083287, for the same tqpair=0x7f8618000b90 at addr=10.0.0.2, port=4420; every retry fails with errno = 111 and ends with "qpair failed and we were unable to recover it."]
00:36:13.194 [2024-12-13 05:52:13.083566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.194 [2024-12-13 05:52:13.083600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.194 qpair failed and we were unable to recover it. 00:36:13.194 [2024-12-13 05:52:13.083742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.194 [2024-12-13 05:52:13.083774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.194 qpair failed and we were unable to recover it. 00:36:13.194 [2024-12-13 05:52:13.083972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.194 [2024-12-13 05:52:13.084005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.194 qpair failed and we were unable to recover it. 00:36:13.194 [2024-12-13 05:52:13.084150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.194 [2024-12-13 05:52:13.084182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.194 qpair failed and we were unable to recover it. 00:36:13.194 [2024-12-13 05:52:13.084480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.194 [2024-12-13 05:52:13.084515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.194 qpair failed and we were unable to recover it. 00:36:13.194 [2024-12-13 05:52:13.084738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.194 [2024-12-13 05:52:13.084771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.194 qpair failed and we were unable to recover it. 00:36:13.194 [2024-12-13 05:52:13.085021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.194 [2024-12-13 05:52:13.085053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.194 qpair failed and we were unable to recover it. 00:36:13.194 [2024-12-13 05:52:13.085266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.194 [2024-12-13 05:52:13.085299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.194 qpair failed and we were unable to recover it. 00:36:13.194 [2024-12-13 05:52:13.085504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.194 [2024-12-13 05:52:13.085538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.194 qpair failed and we were unable to recover it. 00:36:13.194 [2024-12-13 05:52:13.085846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.194 [2024-12-13 05:52:13.085880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.194 qpair failed and we were unable to recover it. 
00:36:13.194 [2024-12-13 05:52:13.086074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.194 [2024-12-13 05:52:13.086105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.194 qpair failed and we were unable to recover it. 00:36:13.194 [2024-12-13 05:52:13.086357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.194 [2024-12-13 05:52:13.086390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.194 qpair failed and we were unable to recover it. 00:36:13.194 [2024-12-13 05:52:13.086677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.194 [2024-12-13 05:52:13.086711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.194 qpair failed and we were unable to recover it. 00:36:13.194 [2024-12-13 05:52:13.086986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.194 [2024-12-13 05:52:13.087018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.194 qpair failed and we were unable to recover it. 00:36:13.194 [2024-12-13 05:52:13.087296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.194 [2024-12-13 05:52:13.087328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.194 qpair failed and we were unable to recover it. 00:36:13.194 [2024-12-13 05:52:13.087530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.194 [2024-12-13 05:52:13.087565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.194 qpair failed and we were unable to recover it. 00:36:13.194 [2024-12-13 05:52:13.087706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.194 [2024-12-13 05:52:13.087738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.194 qpair failed and we were unable to recover it. 00:36:13.194 [2024-12-13 05:52:13.087920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.194 [2024-12-13 05:52:13.087952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.194 qpair failed and we were unable to recover it. 00:36:13.194 [2024-12-13 05:52:13.088225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.194 [2024-12-13 05:52:13.088258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.194 qpair failed and we were unable to recover it. 00:36:13.194 [2024-12-13 05:52:13.088531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.194 [2024-12-13 05:52:13.088565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.194 qpair failed and we were unable to recover it. 
00:36:13.194 [2024-12-13 05:52:13.088857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.194 [2024-12-13 05:52:13.088889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.194 qpair failed and we were unable to recover it. 00:36:13.194 [2024-12-13 05:52:13.089157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.194 [2024-12-13 05:52:13.089190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.194 qpair failed and we were unable to recover it. 00:36:13.194 [2024-12-13 05:52:13.089487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.194 [2024-12-13 05:52:13.089527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.194 qpair failed and we were unable to recover it. 00:36:13.194 [2024-12-13 05:52:13.089777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.194 [2024-12-13 05:52:13.089810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.194 qpair failed and we were unable to recover it. 00:36:13.194 [2024-12-13 05:52:13.090061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.194 [2024-12-13 05:52:13.090093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.194 qpair failed and we were unable to recover it. 00:36:13.194 [2024-12-13 05:52:13.090372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.194 [2024-12-13 05:52:13.090404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.194 qpair failed and we were unable to recover it. 00:36:13.195 [2024-12-13 05:52:13.090676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.195 [2024-12-13 05:52:13.090710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.195 qpair failed and we were unable to recover it. 00:36:13.195 [2024-12-13 05:52:13.091007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.195 [2024-12-13 05:52:13.091040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.195 qpair failed and we were unable to recover it. 00:36:13.195 [2024-12-13 05:52:13.091310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.195 [2024-12-13 05:52:13.091342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.195 qpair failed and we were unable to recover it. 00:36:13.195 [2024-12-13 05:52:13.091635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.195 [2024-12-13 05:52:13.091669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.195 qpair failed and we were unable to recover it. 
00:36:13.195 [2024-12-13 05:52:13.091820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.195 [2024-12-13 05:52:13.091852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.195 qpair failed and we were unable to recover it. 00:36:13.195 [2024-12-13 05:52:13.092072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.195 [2024-12-13 05:52:13.092104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.195 qpair failed and we were unable to recover it. 00:36:13.195 [2024-12-13 05:52:13.092308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.195 [2024-12-13 05:52:13.092340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.195 qpair failed and we were unable to recover it. 00:36:13.195 [2024-12-13 05:52:13.092542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.195 [2024-12-13 05:52:13.092576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.195 qpair failed and we were unable to recover it. 00:36:13.195 [2024-12-13 05:52:13.092845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.195 [2024-12-13 05:52:13.092878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.195 qpair failed and we were unable to recover it. 00:36:13.195 [2024-12-13 05:52:13.093079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.195 [2024-12-13 05:52:13.093111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.195 qpair failed and we were unable to recover it. 00:36:13.195 [2024-12-13 05:52:13.093388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.195 [2024-12-13 05:52:13.093421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.195 qpair failed and we were unable to recover it. 00:36:13.195 [2024-12-13 05:52:13.093706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.195 [2024-12-13 05:52:13.093740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.195 qpair failed and we were unable to recover it. 00:36:13.195 [2024-12-13 05:52:13.093948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.195 [2024-12-13 05:52:13.093980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.195 qpair failed and we were unable to recover it. 00:36:13.195 [2024-12-13 05:52:13.094261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.195 [2024-12-13 05:52:13.094293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.195 qpair failed and we were unable to recover it. 
00:36:13.195 [2024-12-13 05:52:13.094522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.195 [2024-12-13 05:52:13.094556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.195 qpair failed and we were unable to recover it. 00:36:13.195 [2024-12-13 05:52:13.094809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.195 [2024-12-13 05:52:13.094841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.195 qpair failed and we were unable to recover it. 00:36:13.195 [2024-12-13 05:52:13.095147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.195 [2024-12-13 05:52:13.095179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.195 qpair failed and we were unable to recover it. 00:36:13.195 [2024-12-13 05:52:13.095394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.195 [2024-12-13 05:52:13.095426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.195 qpair failed and we were unable to recover it. 00:36:13.195 [2024-12-13 05:52:13.095718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.195 [2024-12-13 05:52:13.095751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.195 qpair failed and we were unable to recover it. 00:36:13.195 [2024-12-13 05:52:13.096027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.195 [2024-12-13 05:52:13.096060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.195 qpair failed and we were unable to recover it. 00:36:13.195 [2024-12-13 05:52:13.096279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.195 [2024-12-13 05:52:13.096311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.195 qpair failed and we were unable to recover it. 00:36:13.195 [2024-12-13 05:52:13.096506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.195 [2024-12-13 05:52:13.096540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.195 qpair failed and we were unable to recover it. 00:36:13.195 [2024-12-13 05:52:13.096730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.195 [2024-12-13 05:52:13.096762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.195 qpair failed and we were unable to recover it. 00:36:13.195 [2024-12-13 05:52:13.096956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.195 [2024-12-13 05:52:13.096994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.195 qpair failed and we were unable to recover it. 
00:36:13.195 [2024-12-13 05:52:13.097267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.195 [2024-12-13 05:52:13.097300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.195 qpair failed and we were unable to recover it. 00:36:13.195 [2024-12-13 05:52:13.097583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.195 [2024-12-13 05:52:13.097617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.195 qpair failed and we were unable to recover it. 00:36:13.195 [2024-12-13 05:52:13.097817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.195 [2024-12-13 05:52:13.097849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.195 qpair failed and we were unable to recover it. 00:36:13.195 [2024-12-13 05:52:13.098100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.195 [2024-12-13 05:52:13.098133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.195 qpair failed and we were unable to recover it. 00:36:13.195 [2024-12-13 05:52:13.098433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.195 [2024-12-13 05:52:13.098489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.195 qpair failed and we were unable to recover it. 00:36:13.195 [2024-12-13 05:52:13.098738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.195 [2024-12-13 05:52:13.098770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.195 qpair failed and we were unable to recover it. 00:36:13.195 [2024-12-13 05:52:13.098958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.195 [2024-12-13 05:52:13.098990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.195 qpair failed and we were unable to recover it. 00:36:13.195 [2024-12-13 05:52:13.099181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.195 [2024-12-13 05:52:13.099214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.195 qpair failed and we were unable to recover it. 00:36:13.195 [2024-12-13 05:52:13.099487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.195 [2024-12-13 05:52:13.099521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.195 qpair failed and we were unable to recover it. 00:36:13.196 [2024-12-13 05:52:13.099725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.196 [2024-12-13 05:52:13.099757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.196 qpair failed and we were unable to recover it. 
00:36:13.196 [2024-12-13 05:52:13.100059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.196 [2024-12-13 05:52:13.100091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.196 qpair failed and we were unable to recover it. 00:36:13.196 [2024-12-13 05:52:13.100354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.196 [2024-12-13 05:52:13.100386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.196 qpair failed and we were unable to recover it. 00:36:13.196 [2024-12-13 05:52:13.100640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.196 [2024-12-13 05:52:13.100674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.196 qpair failed and we were unable to recover it. 00:36:13.196 [2024-12-13 05:52:13.100919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.196 [2024-12-13 05:52:13.100952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.196 qpair failed and we were unable to recover it. 00:36:13.196 [2024-12-13 05:52:13.101102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.196 [2024-12-13 05:52:13.101134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.196 qpair failed and we were unable to recover it. 00:36:13.196 [2024-12-13 05:52:13.101436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.196 [2024-12-13 05:52:13.101481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.196 qpair failed and we were unable to recover it. 00:36:13.196 [2024-12-13 05:52:13.101758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.196 [2024-12-13 05:52:13.101791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.196 qpair failed and we were unable to recover it. 00:36:13.196 [2024-12-13 05:52:13.102064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.196 [2024-12-13 05:52:13.102097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.196 qpair failed and we were unable to recover it. 00:36:13.196 [2024-12-13 05:52:13.102388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.196 [2024-12-13 05:52:13.102420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.196 qpair failed and we were unable to recover it. 00:36:13.196 [2024-12-13 05:52:13.102639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.196 [2024-12-13 05:52:13.102672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.196 qpair failed and we were unable to recover it. 
00:36:13.196 [2024-12-13 05:52:13.102795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.196 [2024-12-13 05:52:13.102828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.196 qpair failed and we were unable to recover it. 00:36:13.196 [2024-12-13 05:52:13.103103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.196 [2024-12-13 05:52:13.103135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.196 qpair failed and we were unable to recover it. 00:36:13.196 [2024-12-13 05:52:13.103317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.196 [2024-12-13 05:52:13.103350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.196 qpair failed and we were unable to recover it. 00:36:13.196 [2024-12-13 05:52:13.103621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.196 [2024-12-13 05:52:13.103655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.196 qpair failed and we were unable to recover it. 00:36:13.196 [2024-12-13 05:52:13.103935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.196 [2024-12-13 05:52:13.103967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.196 qpair failed and we were unable to recover it. 00:36:13.196 [2024-12-13 05:52:13.104117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.196 [2024-12-13 05:52:13.104150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.196 qpair failed and we were unable to recover it. 00:36:13.196 [2024-12-13 05:52:13.104435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.196 [2024-12-13 05:52:13.104477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.196 qpair failed and we were unable to recover it. 00:36:13.196 [2024-12-13 05:52:13.104664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.196 [2024-12-13 05:52:13.104697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.196 qpair failed and we were unable to recover it. 00:36:13.196 [2024-12-13 05:52:13.104978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.196 [2024-12-13 05:52:13.105010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.196 qpair failed and we were unable to recover it. 00:36:13.196 [2024-12-13 05:52:13.105209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.196 [2024-12-13 05:52:13.105241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.196 qpair failed and we were unable to recover it. 
00:36:13.196 [2024-12-13 05:52:13.105435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.196 [2024-12-13 05:52:13.105486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.196 qpair failed and we were unable to recover it. 00:36:13.196 [2024-12-13 05:52:13.105671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.196 [2024-12-13 05:52:13.105702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.196 qpair failed and we were unable to recover it. 00:36:13.196 [2024-12-13 05:52:13.105897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.196 [2024-12-13 05:52:13.105929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.196 qpair failed and we were unable to recover it. 00:36:13.196 [2024-12-13 05:52:13.106130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.196 [2024-12-13 05:52:13.106163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.196 qpair failed and we were unable to recover it. 00:36:13.196 [2024-12-13 05:52:13.106437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.196 [2024-12-13 05:52:13.106482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.196 qpair failed and we were unable to recover it. 00:36:13.196 [2024-12-13 05:52:13.106785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.196 [2024-12-13 05:52:13.106818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.196 qpair failed and we were unable to recover it. 00:36:13.196 [2024-12-13 05:52:13.107033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.196 [2024-12-13 05:52:13.107065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.196 qpair failed and we were unable to recover it. 00:36:13.196 [2024-12-13 05:52:13.107316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.196 [2024-12-13 05:52:13.107348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.196 qpair failed and we were unable to recover it. 00:36:13.196 [2024-12-13 05:52:13.107616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.196 [2024-12-13 05:52:13.107650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.196 qpair failed and we were unable to recover it. 00:36:13.196 [2024-12-13 05:52:13.107932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.196 [2024-12-13 05:52:13.107971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.196 qpair failed and we were unable to recover it. 
00:36:13.196 [2024-12-13 05:52:13.108122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.196 [2024-12-13 05:52:13.108155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.196 qpair failed and we were unable to recover it. 00:36:13.196 [2024-12-13 05:52:13.108359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.196 [2024-12-13 05:52:13.108391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.196 qpair failed and we were unable to recover it. 00:36:13.196 [2024-12-13 05:52:13.108622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.196 [2024-12-13 05:52:13.108656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.196 qpair failed and we were unable to recover it. 00:36:13.196 [2024-12-13 05:52:13.108937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.196 [2024-12-13 05:52:13.108969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.196 qpair failed and we were unable to recover it. 00:36:13.196 [2024-12-13 05:52:13.109253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.196 [2024-12-13 05:52:13.109285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.196 qpair failed and we were unable to recover it. 00:36:13.196 [2024-12-13 05:52:13.109561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.196 [2024-12-13 05:52:13.109595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.196 qpair failed and we were unable to recover it. 00:36:13.196 [2024-12-13 05:52:13.109781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.196 [2024-12-13 05:52:13.109813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.196 qpair failed and we were unable to recover it. 00:36:13.196 [2024-12-13 05:52:13.110081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.197 [2024-12-13 05:52:13.110113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.197 qpair failed and we were unable to recover it. 00:36:13.197 [2024-12-13 05:52:13.110413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.197 [2024-12-13 05:52:13.110445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.197 qpair failed and we were unable to recover it. 00:36:13.197 [2024-12-13 05:52:13.110731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.197 [2024-12-13 05:52:13.110764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.197 qpair failed and we were unable to recover it. 
00:36:13.197 [2024-12-13 05:52:13.110970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.197 [2024-12-13 05:52:13.111003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.197 qpair failed and we were unable to recover it. 00:36:13.197 [2024-12-13 05:52:13.111220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.197 [2024-12-13 05:52:13.111252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.197 qpair failed and we were unable to recover it. 00:36:13.197 [2024-12-13 05:52:13.111506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.197 [2024-12-13 05:52:13.111540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.197 qpair failed and we were unable to recover it. 00:36:13.197 [2024-12-13 05:52:13.111759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.197 [2024-12-13 05:52:13.111792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.197 qpair failed and we were unable to recover it. 00:36:13.197 [2024-12-13 05:52:13.112081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.197 [2024-12-13 05:52:13.112113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.197 qpair failed and we were unable to recover it. 00:36:13.197 [2024-12-13 05:52:13.112389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.197 [2024-12-13 05:52:13.112422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.197 qpair failed and we were unable to recover it. 00:36:13.197 [2024-12-13 05:52:13.112734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.197 [2024-12-13 05:52:13.112767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.197 qpair failed and we were unable to recover it. 00:36:13.197 [2024-12-13 05:52:13.112948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.197 [2024-12-13 05:52:13.112979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.197 qpair failed and we were unable to recover it. 00:36:13.197 [2024-12-13 05:52:13.113204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.197 [2024-12-13 05:52:13.113236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.197 qpair failed and we were unable to recover it. 00:36:13.197 [2024-12-13 05:52:13.113373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.197 [2024-12-13 05:52:13.113405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.197 qpair failed and we were unable to recover it. 
00:36:13.197 [2024-12-13 05:52:13.113686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.197 [2024-12-13 05:52:13.113719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.197 qpair failed and we were unable to recover it. 00:36:13.197 [2024-12-13 05:52:13.113982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.197 [2024-12-13 05:52:13.114016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.197 qpair failed and we were unable to recover it. 00:36:13.197 [2024-12-13 05:52:13.114273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.197 [2024-12-13 05:52:13.114305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.197 qpair failed and we were unable to recover it. 00:36:13.197 [2024-12-13 05:52:13.114576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.197 [2024-12-13 05:52:13.114610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.197 qpair failed and we were unable to recover it. 00:36:13.197 [2024-12-13 05:52:13.114890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.197 [2024-12-13 05:52:13.114922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.197 qpair failed and we were unable to recover it. 00:36:13.197 [2024-12-13 05:52:13.115072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.197 [2024-12-13 05:52:13.115104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.197 qpair failed and we were unable to recover it. 00:36:13.197 [2024-12-13 05:52:13.115308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.197 [2024-12-13 05:52:13.115341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.197 qpair failed and we were unable to recover it. 00:36:13.197 [2024-12-13 05:52:13.115633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.197 [2024-12-13 05:52:13.115667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.197 qpair failed and we were unable to recover it. 00:36:13.197 [2024-12-13 05:52:13.115779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.197 [2024-12-13 05:52:13.115811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.197 qpair failed and we were unable to recover it. 00:36:13.197 [2024-12-13 05:52:13.116084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.197 [2024-12-13 05:52:13.116117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.197 qpair failed and we were unable to recover it. 
00:36:13.197 [2024-12-13 05:52:13.116337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.197 [2024-12-13 05:52:13.116369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.197 qpair failed and we were unable to recover it. 00:36:13.197 [2024-12-13 05:52:13.116595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.197 [2024-12-13 05:52:13.116629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.197 qpair failed and we were unable to recover it. 00:36:13.197 [2024-12-13 05:52:13.116851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.197 [2024-12-13 05:52:13.116883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.197 qpair failed and we were unable to recover it. 00:36:13.197 [2024-12-13 05:52:13.117101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.197 [2024-12-13 05:52:13.117133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.197 qpair failed and we were unable to recover it. 00:36:13.197 [2024-12-13 05:52:13.117386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.197 [2024-12-13 05:52:13.117419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.197 qpair failed and we were unable to recover it. 00:36:13.197 [2024-12-13 05:52:13.117731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.197 [2024-12-13 05:52:13.117764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.197 qpair failed and we were unable to recover it. 00:36:13.197 [2024-12-13 05:52:13.118041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.197 [2024-12-13 05:52:13.118073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.197 qpair failed and we were unable to recover it. 00:36:13.197 [2024-12-13 05:52:13.118358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.197 [2024-12-13 05:52:13.118391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.197 qpair failed and we were unable to recover it. 00:36:13.197 [2024-12-13 05:52:13.118598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.197 [2024-12-13 05:52:13.118633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.197 qpair failed and we were unable to recover it. 00:36:13.197 [2024-12-13 05:52:13.118884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.197 [2024-12-13 05:52:13.118922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.197 qpair failed and we were unable to recover it. 
00:36:13.197 [2024-12-13 05:52:13.119127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.197 [2024-12-13 05:52:13.119159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420
00:36:13.197 qpair failed and we were unable to recover it.
[... the same three-line connect()/qpair-failure sequence repeats for every reconnect attempt from 05:52:13.119 through 05:52:13.152; duplicates elided ...]
00:36:13.201 [2024-12-13 05:52:13.152135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.201 [2024-12-13 05:52:13.152173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420
00:36:13.201 qpair failed and we were unable to recover it.
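For context on the repeated error: errno 111 on Linux is ECONNREFUSED, meaning nothing is accepting TCP connections on 10.0.0.2:4420 while the target side is down, so each reconnect attempt fails immediately and the qpair cannot recover. A minimal shell sketch (not harness code; only the address and port are taken from the log) that reproduces the same class of failure:

    #!/usr/bin/env bash
    # Hypothetical repro sketch, not part of the SPDK test harness:
    # try a TCP connect to an address with no listener; on Linux the
    # failure is ECONNREFUSED (errno 111), matching the log lines above.
    addr=10.0.0.2 port=4420   # values taken from the log
    if ! timeout 1 bash -c "exec 3<>/dev/tcp/${addr}/${port}" 2>/dev/null; then
        echo "connect() to ${addr}:${port} failed (connection refused or timed out)"
    fi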
00:36:13.201 [2024-12-13 05:52:13.152443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.201 [2024-12-13 05:52:13.152485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420
00:36:13.201 qpair failed and we were unable to recover it.
00:36:13.201 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 541971 Killed "${NVMF_APP[@]}" "$@"
00:36:13.201 05:52:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:36:13.201 05:52:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:36:13.201 05:52:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:36:13.201 05:52:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable
00:36:13.201 05:52:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... connect()/qpair-failure retries continue in parallel; repeated sequences from 05:52:13.153 through 05:52:13.162 elided ...]
00:36:13.202 05:52:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=542665
00:36:13.202 05:52:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 542665
00:36:13.202 05:52:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:36:13.202 05:52:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 542665 ']'
00:36:13.202 05:52:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:36:13.202 05:52:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100
00:36:13.202 05:52:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:36:13.202 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:36:13.202 05:52:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable
00:36:13.202 05:52:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... connect()/qpair-failure retries continue; repeated sequences from 05:52:13.162 through 05:52:13.166 elided ...]
00:36:13.515 [2024-12-13 05:52:13.166524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.515 [2024-12-13 05:52:13.166575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420
00:36:13.515 qpair failed and we were unable to recover it.
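The trace above shows the harness restarting the target after the deliberate kill: nvmf_tgt is relaunched inside the cvl_0_0_ns_spdk network namespace (-m 0xF0 pins it to a core mask), and waitforlisten then blocks until the new process (pid 542665) exposes its RPC socket. A rough sketch of what such a wait loop does, reusing the rpc_addr and max_retries values visible in the trace (the loop body itself is an assumption, not SPDK's actual helper):

    # Illustrative sketch only -- not SPDK's waitforlisten implementation.
    # Polls until the given pid has created its UNIX-domain RPC socket,
    # failing early if the process dies and giving up after max_retries.
    waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100
        echo "Waiting for process to start up and listen on UNIX domain socket ${rpc_addr}..."
        while (( max_retries-- > 0 )); do
            kill -0 "$pid" 2>/dev/null || return 1   # target process died
            [ -S "$rpc_addr" ] && return 0           # RPC socket is present
            sleep 0.1
        done
        return 1                                     # timed out waiting
    }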
00:36:13.515 [2024-12-13 05:52:13.166765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.515 [2024-12-13 05:52:13.166826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420
00:36:13.515 qpair failed and we were unable to recover it.
[... the same connect()/qpair-failure sequence repeats from 05:52:13.166 through 05:52:13.174; duplicates elided ...]
00:36:13.516 [2024-12-13 05:52:13.174916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.516 [2024-12-13 05:52:13.174948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420
00:36:13.516 qpair failed and we were unable to recover it.
00:36:13.516 [2024-12-13 05:52:13.175159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.516 [2024-12-13 05:52:13.175192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.516 qpair failed and we were unable to recover it. 00:36:13.516 [2024-12-13 05:52:13.175485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.516 [2024-12-13 05:52:13.175521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.516 qpair failed and we were unable to recover it. 00:36:13.516 [2024-12-13 05:52:13.175729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.516 [2024-12-13 05:52:13.175764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.516 qpair failed and we were unable to recover it. 00:36:13.516 [2024-12-13 05:52:13.176019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.516 [2024-12-13 05:52:13.176053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.516 qpair failed and we were unable to recover it. 00:36:13.516 [2024-12-13 05:52:13.176236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.516 [2024-12-13 05:52:13.176279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.516 qpair failed and we were unable to recover it. 00:36:13.516 [2024-12-13 05:52:13.176474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.516 [2024-12-13 05:52:13.176511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.516 qpair failed and we were unable to recover it. 00:36:13.516 [2024-12-13 05:52:13.176704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.516 [2024-12-13 05:52:13.176738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.516 qpair failed and we were unable to recover it. 00:36:13.516 [2024-12-13 05:52:13.177020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.516 [2024-12-13 05:52:13.177052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.516 qpair failed and we were unable to recover it. 00:36:13.516 [2024-12-13 05:52:13.177305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.516 [2024-12-13 05:52:13.177337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.516 qpair failed and we were unable to recover it. 00:36:13.516 [2024-12-13 05:52:13.177600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.516 [2024-12-13 05:52:13.177633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.516 qpair failed and we were unable to recover it. 
00:36:13.516 [2024-12-13 05:52:13.177888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.516 [2024-12-13 05:52:13.177920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.516 qpair failed and we were unable to recover it. 00:36:13.516 [2024-12-13 05:52:13.178224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.516 [2024-12-13 05:52:13.178257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.516 qpair failed and we were unable to recover it. 00:36:13.516 [2024-12-13 05:52:13.178399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.516 [2024-12-13 05:52:13.178431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.516 qpair failed and we were unable to recover it. 00:36:13.516 [2024-12-13 05:52:13.178647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.516 [2024-12-13 05:52:13.178684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.516 qpair failed and we were unable to recover it. 00:36:13.516 [2024-12-13 05:52:13.178889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.516 [2024-12-13 05:52:13.178922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.516 qpair failed and we were unable to recover it. 00:36:13.516 [2024-12-13 05:52:13.179144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.516 [2024-12-13 05:52:13.179177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.516 qpair failed and we were unable to recover it. 00:36:13.516 [2024-12-13 05:52:13.179476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.516 [2024-12-13 05:52:13.179511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.516 qpair failed and we were unable to recover it. 00:36:13.516 [2024-12-13 05:52:13.179660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.516 [2024-12-13 05:52:13.179692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.516 qpair failed and we were unable to recover it. 00:36:13.516 [2024-12-13 05:52:13.179833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.517 [2024-12-13 05:52:13.179866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.517 qpair failed and we were unable to recover it. 00:36:13.517 [2024-12-13 05:52:13.180144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.517 [2024-12-13 05:52:13.180177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.517 qpair failed and we were unable to recover it. 
00:36:13.517 [2024-12-13 05:52:13.180502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.517 [2024-12-13 05:52:13.180536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.517 qpair failed and we were unable to recover it. 00:36:13.517 [2024-12-13 05:52:13.180681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.517 [2024-12-13 05:52:13.180714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.517 qpair failed and we were unable to recover it. 00:36:13.517 [2024-12-13 05:52:13.180929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.517 [2024-12-13 05:52:13.180962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.517 qpair failed and we were unable to recover it. 00:36:13.517 [2024-12-13 05:52:13.181272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.517 [2024-12-13 05:52:13.181303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.517 qpair failed and we were unable to recover it. 00:36:13.517 [2024-12-13 05:52:13.181575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.517 [2024-12-13 05:52:13.181609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.517 qpair failed and we were unable to recover it. 00:36:13.517 [2024-12-13 05:52:13.181886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.517 [2024-12-13 05:52:13.181924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.517 qpair failed and we were unable to recover it. 00:36:13.517 [2024-12-13 05:52:13.182133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.517 [2024-12-13 05:52:13.182166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.517 qpair failed and we were unable to recover it. 00:36:13.517 [2024-12-13 05:52:13.182440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.517 [2024-12-13 05:52:13.182485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.517 qpair failed and we were unable to recover it. 00:36:13.517 [2024-12-13 05:52:13.182777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.517 [2024-12-13 05:52:13.182811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.517 qpair failed and we were unable to recover it. 00:36:13.517 [2024-12-13 05:52:13.182993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.517 [2024-12-13 05:52:13.183024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.517 qpair failed and we were unable to recover it. 
00:36:13.517 [2024-12-13 05:52:13.183161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.517 [2024-12-13 05:52:13.183194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.517 qpair failed and we were unable to recover it. 00:36:13.517 [2024-12-13 05:52:13.183488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.517 [2024-12-13 05:52:13.183522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.517 qpair failed and we were unable to recover it. 00:36:13.517 [2024-12-13 05:52:13.183725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.517 [2024-12-13 05:52:13.183757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.517 qpair failed and we were unable to recover it. 00:36:13.517 [2024-12-13 05:52:13.184082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.517 [2024-12-13 05:52:13.184115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.517 qpair failed and we were unable to recover it. 00:36:13.517 [2024-12-13 05:52:13.184342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.517 [2024-12-13 05:52:13.184375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.517 qpair failed and we were unable to recover it. 00:36:13.517 [2024-12-13 05:52:13.184631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.517 [2024-12-13 05:52:13.184665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.517 qpair failed and we were unable to recover it. 00:36:13.517 [2024-12-13 05:52:13.184974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.517 [2024-12-13 05:52:13.185006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.517 qpair failed and we were unable to recover it. 00:36:13.517 [2024-12-13 05:52:13.185303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.517 [2024-12-13 05:52:13.185336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.517 qpair failed and we were unable to recover it. 00:36:13.517 [2024-12-13 05:52:13.185540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.517 [2024-12-13 05:52:13.185574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.517 qpair failed and we were unable to recover it. 00:36:13.517 [2024-12-13 05:52:13.185779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.517 [2024-12-13 05:52:13.185810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.517 qpair failed and we were unable to recover it. 
00:36:13.517 [2024-12-13 05:52:13.186007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.517 [2024-12-13 05:52:13.186040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.517 qpair failed and we were unable to recover it. 00:36:13.517 [2024-12-13 05:52:13.186232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.517 [2024-12-13 05:52:13.186263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.517 qpair failed and we were unable to recover it. 00:36:13.517 [2024-12-13 05:52:13.186462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.517 [2024-12-13 05:52:13.186496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.517 qpair failed and we were unable to recover it. 00:36:13.517 [2024-12-13 05:52:13.186679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.517 [2024-12-13 05:52:13.186712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.517 qpair failed and we were unable to recover it. 00:36:13.517 [2024-12-13 05:52:13.186852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.517 [2024-12-13 05:52:13.186890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.517 qpair failed and we were unable to recover it. 00:36:13.517 [2024-12-13 05:52:13.187144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.517 [2024-12-13 05:52:13.187176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.517 qpair failed and we were unable to recover it. 00:36:13.517 [2024-12-13 05:52:13.187289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.517 [2024-12-13 05:52:13.187323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.517 qpair failed and we were unable to recover it. 00:36:13.517 [2024-12-13 05:52:13.187597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.517 [2024-12-13 05:52:13.187631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.517 qpair failed and we were unable to recover it. 00:36:13.517 [2024-12-13 05:52:13.187773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.517 [2024-12-13 05:52:13.187805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.517 qpair failed and we were unable to recover it. 00:36:13.517 [2024-12-13 05:52:13.187943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.517 [2024-12-13 05:52:13.187976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.517 qpair failed and we were unable to recover it. 
00:36:13.517 [2024-12-13 05:52:13.188094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.517 [2024-12-13 05:52:13.188125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.517 qpair failed and we were unable to recover it. 00:36:13.517 [2024-12-13 05:52:13.188340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.517 [2024-12-13 05:52:13.188373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.517 qpair failed and we were unable to recover it. 00:36:13.517 [2024-12-13 05:52:13.188627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.517 [2024-12-13 05:52:13.188662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.517 qpair failed and we were unable to recover it. 00:36:13.517 [2024-12-13 05:52:13.188777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.517 [2024-12-13 05:52:13.188810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.517 qpair failed and we were unable to recover it. 00:36:13.517 [2024-12-13 05:52:13.188937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.517 [2024-12-13 05:52:13.188970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.517 qpair failed and we were unable to recover it. 00:36:13.517 [2024-12-13 05:52:13.189245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.517 [2024-12-13 05:52:13.189277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.517 qpair failed and we were unable to recover it. 00:36:13.517 [2024-12-13 05:52:13.189473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.517 [2024-12-13 05:52:13.189509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.517 qpair failed and we were unable to recover it. 00:36:13.518 [2024-12-13 05:52:13.189693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.518 [2024-12-13 05:52:13.189726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.518 qpair failed and we were unable to recover it. 00:36:13.518 [2024-12-13 05:52:13.190006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.518 [2024-12-13 05:52:13.190039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.518 qpair failed and we were unable to recover it. 00:36:13.518 [2024-12-13 05:52:13.190168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.518 [2024-12-13 05:52:13.190201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.518 qpair failed and we were unable to recover it. 
00:36:13.518 [2024-12-13 05:52:13.190314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.518 [2024-12-13 05:52:13.190346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.518 qpair failed and we were unable to recover it. 00:36:13.518 [2024-12-13 05:52:13.190498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.518 [2024-12-13 05:52:13.190533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.518 qpair failed and we were unable to recover it. 00:36:13.518 [2024-12-13 05:52:13.190758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.518 [2024-12-13 05:52:13.190791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.518 qpair failed and we were unable to recover it. 00:36:13.518 [2024-12-13 05:52:13.190925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.518 [2024-12-13 05:52:13.190957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.518 qpair failed and we were unable to recover it. 00:36:13.518 [2024-12-13 05:52:13.191202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.518 [2024-12-13 05:52:13.191234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.518 qpair failed and we were unable to recover it. 00:36:13.518 [2024-12-13 05:52:13.191414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.518 [2024-12-13 05:52:13.191460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.518 qpair failed and we were unable to recover it. 00:36:13.518 [2024-12-13 05:52:13.191653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.518 [2024-12-13 05:52:13.191686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.518 qpair failed and we were unable to recover it. 00:36:13.518 [2024-12-13 05:52:13.191890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.518 [2024-12-13 05:52:13.191921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.518 qpair failed and we were unable to recover it. 00:36:13.518 [2024-12-13 05:52:13.192128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.518 [2024-12-13 05:52:13.192161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.518 qpair failed and we were unable to recover it. 00:36:13.518 [2024-12-13 05:52:13.192285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.518 [2024-12-13 05:52:13.192316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.518 qpair failed and we were unable to recover it. 
00:36:13.518 [2024-12-13 05:52:13.192444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.518 [2024-12-13 05:52:13.192491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.518 qpair failed and we were unable to recover it. 00:36:13.518 [2024-12-13 05:52:13.192756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.518 [2024-12-13 05:52:13.192789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.518 qpair failed and we were unable to recover it. 00:36:13.518 [2024-12-13 05:52:13.192970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.518 [2024-12-13 05:52:13.193003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.518 qpair failed and we were unable to recover it. 00:36:13.518 [2024-12-13 05:52:13.193226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.518 [2024-12-13 05:52:13.193256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.518 qpair failed and we were unable to recover it. 00:36:13.518 [2024-12-13 05:52:13.193391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.518 [2024-12-13 05:52:13.193421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.518 qpair failed and we were unable to recover it. 00:36:13.518 [2024-12-13 05:52:13.193617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.518 [2024-12-13 05:52:13.193648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.518 qpair failed and we were unable to recover it. 00:36:13.518 [2024-12-13 05:52:13.193800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.518 [2024-12-13 05:52:13.193830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.518 qpair failed and we were unable to recover it. 00:36:13.518 [2024-12-13 05:52:13.193954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.518 [2024-12-13 05:52:13.193984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.518 qpair failed and we were unable to recover it. 00:36:13.518 [2024-12-13 05:52:13.194192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.518 [2024-12-13 05:52:13.194222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.518 qpair failed and we were unable to recover it. 00:36:13.518 [2024-12-13 05:52:13.194414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.518 [2024-12-13 05:52:13.194445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.518 qpair failed and we were unable to recover it. 
00:36:13.518 [2024-12-13 05:52:13.194689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.518 [2024-12-13 05:52:13.194719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.518 qpair failed and we were unable to recover it. 00:36:13.518 [2024-12-13 05:52:13.194999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.518 [2024-12-13 05:52:13.195029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.518 qpair failed and we were unable to recover it. 00:36:13.518 [2024-12-13 05:52:13.195181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.518 [2024-12-13 05:52:13.195211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.518 qpair failed and we were unable to recover it. 00:36:13.518 [2024-12-13 05:52:13.195492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.518 [2024-12-13 05:52:13.195524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.518 qpair failed and we were unable to recover it. 00:36:13.518 [2024-12-13 05:52:13.195711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.518 [2024-12-13 05:52:13.195747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.518 qpair failed and we were unable to recover it. 00:36:13.518 [2024-12-13 05:52:13.195873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.518 [2024-12-13 05:52:13.195902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.518 qpair failed and we were unable to recover it. 00:36:13.518 [2024-12-13 05:52:13.196015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.518 [2024-12-13 05:52:13.196045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.518 qpair failed and we were unable to recover it. 00:36:13.518 [2024-12-13 05:52:13.196187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.518 [2024-12-13 05:52:13.196216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.518 qpair failed and we were unable to recover it. 00:36:13.518 [2024-12-13 05:52:13.196406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.518 [2024-12-13 05:52:13.196436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.518 qpair failed and we were unable to recover it. 00:36:13.518 [2024-12-13 05:52:13.196636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.518 [2024-12-13 05:52:13.196668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.518 qpair failed and we were unable to recover it. 
00:36:13.518 [2024-12-13 05:52:13.196940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.518 [2024-12-13 05:52:13.196970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.518 qpair failed and we were unable to recover it. 00:36:13.518 [2024-12-13 05:52:13.197097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.518 [2024-12-13 05:52:13.197128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.518 qpair failed and we were unable to recover it. 00:36:13.518 [2024-12-13 05:52:13.197324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.518 [2024-12-13 05:52:13.197354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.518 qpair failed and we were unable to recover it. 00:36:13.518 [2024-12-13 05:52:13.197575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.518 [2024-12-13 05:52:13.197607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.518 qpair failed and we were unable to recover it. 00:36:13.518 [2024-12-13 05:52:13.197790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.518 [2024-12-13 05:52:13.197819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.518 qpair failed and we were unable to recover it. 00:36:13.518 [2024-12-13 05:52:13.198100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.518 [2024-12-13 05:52:13.198130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.518 qpair failed and we were unable to recover it. 00:36:13.519 [2024-12-13 05:52:13.198342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.519 [2024-12-13 05:52:13.198372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.519 qpair failed and we were unable to recover it. 00:36:13.519 [2024-12-13 05:52:13.198591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.519 [2024-12-13 05:52:13.198622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.519 qpair failed and we were unable to recover it. 00:36:13.519 [2024-12-13 05:52:13.198821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.519 [2024-12-13 05:52:13.198864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.519 qpair failed and we were unable to recover it. 00:36:13.519 [2024-12-13 05:52:13.199010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.519 [2024-12-13 05:52:13.199042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.519 qpair failed and we were unable to recover it. 
00:36:13.519 [2024-12-13 05:52:13.199233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.519 [2024-12-13 05:52:13.199263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.519 qpair failed and we were unable to recover it. 00:36:13.519 [2024-12-13 05:52:13.199468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.519 [2024-12-13 05:52:13.199499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.519 qpair failed and we were unable to recover it. 00:36:13.519 [2024-12-13 05:52:13.199709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.519 [2024-12-13 05:52:13.199739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.519 qpair failed and we were unable to recover it. 00:36:13.519 [2024-12-13 05:52:13.199892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.519 [2024-12-13 05:52:13.199923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.519 qpair failed and we were unable to recover it. 00:36:13.519 [2024-12-13 05:52:13.200123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.519 [2024-12-13 05:52:13.200152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.519 qpair failed and we were unable to recover it. 00:36:13.519 [2024-12-13 05:52:13.200365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.519 [2024-12-13 05:52:13.200396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.519 qpair failed and we were unable to recover it. 00:36:13.519 [2024-12-13 05:52:13.200713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.519 [2024-12-13 05:52:13.200745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.519 qpair failed and we were unable to recover it. 00:36:13.519 [2024-12-13 05:52:13.201006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.519 [2024-12-13 05:52:13.201036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.519 qpair failed and we were unable to recover it. 00:36:13.519 [2024-12-13 05:52:13.201308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.519 [2024-12-13 05:52:13.201337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.519 qpair failed and we were unable to recover it. 00:36:13.519 [2024-12-13 05:52:13.201637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.519 [2024-12-13 05:52:13.201669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420 00:36:13.519 qpair failed and we were unable to recover it. 
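Context for the repeated failure above: errno = 111 is ECONNREFUSED on Linux, and port 4420 is the standard NVMe/TCP port, so each triplet records one reconnect attempt refused while the target is down, which is the condition this disconnect test deliberately provokes. The following standalone sketch is not SPDK code; it only reproduces the same connect() error against an address with no listener (the target address from the log is reused for illustration).

/* Minimal sketch, assuming no listener on 10.0.0.2:4420 (as in the test):
 * a plain TCP connect() that fails with errno 111 (ECONNREFUSED). */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    struct sockaddr_in addr = {
        .sin_family = AF_INET,
        .sin_port = htons(4420),        /* NVMe/TCP well-known port */
    };
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
        /* With the NVMe-oF target stopped, this prints errno = 111. */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));

    close(fd);
    return 0;
}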
00:36:13.519 [2024-12-13 05:52:13.202083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.519 [2024-12-13 05:52:13.202160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420
00:36:13.519 qpair failed and we were unable to recover it.
00:36:13.520 [2024-12-13 05:52:13.208300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.520 [2024-12-13 05:52:13.208331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c76a0 with addr=10.0.0.2, port=4420
00:36:13.520 qpair failed and we were unable to recover it.
00:36:13.520 [2024-12-13 05:52:13.208550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.520 [2024-12-13 05:52:13.208586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420
00:36:13.520 qpair failed and we were unable to recover it.
00:36:13.520 [2024-12-13 05:52:13.213607] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization...
00:36:13.520 [2024-12-13 05:52:13.213669] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
[... the same three-line connect() failed / sock connection error / qpair failed sequence for tqpair=0x7f8618000b90 continues for 52 more occurrences (05:52:13.213702 through 05:52:13.225809), then one final unpaired "connect() failed, errno = 111" at 05:52:13.226113 ...]
00:36:13.522 [2024-12-13 05:52:13.294828] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:36:13.522 [2024-12-13 05:52:13.317803] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:36:13.522 [2024-12-13 05:52:13.317840] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:36:13.522 [2024-12-13 05:52:13.317848] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:36:13.522 [2024-12-13 05:52:13.317855] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:36:13.522 [2024-12-13 05:52:13.317860] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
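The trace-capture hint in the notices above can be exercised as-is; a minimal sketch (the instance ID -i 0 and the shm file name /dev/shm/nvmf_trace.0 are taken from the log, the copy destination is illustrative):

  # snapshot the live nvmf tracepoints of shm instance 0
  spdk_trace -s nvmf -i 0
  # or keep the raw shared-memory trace file for offline analysis/debug
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0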
00:36:13.522 [2024-12-13 05:52:13.319190] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5
00:36:13.522 [2024-12-13 05:52:13.319210] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6
00:36:13.522 [2024-12-13 05:52:13.319321] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4
00:36:13.522 [2024-12-13 05:52:13.319322] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 7
00:36:13.522 05:52:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:36:13.522 05:52:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0
00:36:13.522 05:52:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:36:13.522 05:52:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable
00:36:13.522 05:52:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:36:13.522 05:52:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:36:13.522 05:52:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:36:13.522 05:52:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:13.522 05:52:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:36:13.522 Malloc0
00:36:13.522 05:52:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:13.522 05:52:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:36:13.522 05:52:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:13.522 05:52:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:36:13.522 [2024-12-13 05:52:13.489081] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:36:13.522 05:52:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:13.522 05:52:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:36:13.522 05:52:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:13.522 05:52:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:36:13.522 05:52:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:13.522 05:52:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:36:13.522 05:52:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:13.522 05:52:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:36:13.522 05:52:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:13.522 05:52:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:36:13.522 05:52:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:13.522 05:52:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:36:13.522 [2024-12-13 05:52:13.517318] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:36:13.522 05:52:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:13.522 05:52:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:36:13.522 05:52:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:13.522 05:52:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:36:13.782 05:52:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:13.782 05:52:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 541994
00:36:13.782 [2024-12-13 05:52:13.632106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8618000b90 with addr=10.0.0.2, port=4420
00:36:13.782 qpair failed and we were unable to recover it.
00:36:13.782 [2024-12-13 05:52:13.640144] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:13.782 [2024-12-13 05:52:13.640274] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:13.782 [2024-12-13 05:52:13.640323] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:13.782 [2024-12-13 05:52:13.640346] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:13.782 [2024-12-13 05:52:13.640368] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90
00:36:13.782 [2024-12-13 05:52:13.640420] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:13.782 qpair failed and we were unable to recover it.
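For reference, the rpc_cmd calls traced above reduce to the following target-side sequence; a sketch assuming rpc_cmd forwards its arguments to SPDK's scripts/rpc.py (as SPDK's autotest_common.sh does) over the default RPC socket:

  # 64 MB malloc bdev with 512-byte blocks to back the namespace
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  # TCP transport, then a subsystem that any host may connect to
  scripts/rpc.py nvmf_create_transport -t tcp -o
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  # subsystem and discovery listeners on 10.0.0.2:4420
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420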
00:36:13.782 [2024-12-13 05:52:13.649944] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.782 [2024-12-13 05:52:13.650042] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.782 [2024-12-13 05:52:13.650070] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.782 [2024-12-13 05:52:13.650085] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.782 [2024-12-13 05:52:13.650098] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:13.782 [2024-12-13 05:52:13.650131] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.782 qpair failed and we were unable to recover it. 00:36:13.782 [2024-12-13 05:52:13.659943] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.782 [2024-12-13 05:52:13.660009] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.782 [2024-12-13 05:52:13.660028] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.782 [2024-12-13 05:52:13.660037] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.782 [2024-12-13 05:52:13.660046] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:13.783 [2024-12-13 05:52:13.660067] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.783 qpair failed and we were unable to recover it. 00:36:13.783 [2024-12-13 05:52:13.669943] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.783 [2024-12-13 05:52:13.670003] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.783 [2024-12-13 05:52:13.670016] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.783 [2024-12-13 05:52:13.670023] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.783 [2024-12-13 05:52:13.670030] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:13.783 [2024-12-13 05:52:13.670045] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.783 qpair failed and we were unable to recover it. 
00:36:13.783 [2024-12-13 05:52:13.679981] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.783 [2024-12-13 05:52:13.680043] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.783 [2024-12-13 05:52:13.680057] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.783 [2024-12-13 05:52:13.680063] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.783 [2024-12-13 05:52:13.680069] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:13.783 [2024-12-13 05:52:13.680084] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.783 qpair failed and we were unable to recover it. 00:36:13.783 [2024-12-13 05:52:13.689963] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.783 [2024-12-13 05:52:13.690020] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.783 [2024-12-13 05:52:13.690033] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.783 [2024-12-13 05:52:13.690039] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.783 [2024-12-13 05:52:13.690045] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:13.783 [2024-12-13 05:52:13.690059] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.783 qpair failed and we were unable to recover it. 00:36:13.783 [2024-12-13 05:52:13.700023] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.783 [2024-12-13 05:52:13.700075] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.783 [2024-12-13 05:52:13.700087] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.783 [2024-12-13 05:52:13.700094] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.783 [2024-12-13 05:52:13.700100] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:13.783 [2024-12-13 05:52:13.700114] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.783 qpair failed and we were unable to recover it. 
00:36:13.783 [2024-12-13 05:52:13.710068] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.783 [2024-12-13 05:52:13.710140] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.783 [2024-12-13 05:52:13.710153] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.783 [2024-12-13 05:52:13.710159] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.783 [2024-12-13 05:52:13.710165] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:13.783 [2024-12-13 05:52:13.710179] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.783 qpair failed and we were unable to recover it. 00:36:13.783 [2024-12-13 05:52:13.720116] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.783 [2024-12-13 05:52:13.720169] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.783 [2024-12-13 05:52:13.720186] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.783 [2024-12-13 05:52:13.720192] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.783 [2024-12-13 05:52:13.720198] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:13.783 [2024-12-13 05:52:13.720212] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.783 qpair failed and we were unable to recover it. 00:36:13.783 [2024-12-13 05:52:13.730126] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.783 [2024-12-13 05:52:13.730182] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.783 [2024-12-13 05:52:13.730195] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.783 [2024-12-13 05:52:13.730202] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.783 [2024-12-13 05:52:13.730208] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:13.783 [2024-12-13 05:52:13.730222] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.783 qpair failed and we were unable to recover it. 
00:36:13.783 [2024-12-13 05:52:13.740074] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.783 [2024-12-13 05:52:13.740122] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.783 [2024-12-13 05:52:13.740135] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.783 [2024-12-13 05:52:13.740141] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.783 [2024-12-13 05:52:13.740147] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:13.783 [2024-12-13 05:52:13.740161] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.783 qpair failed and we were unable to recover it. 00:36:13.783 [2024-12-13 05:52:13.750101] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.783 [2024-12-13 05:52:13.750157] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.783 [2024-12-13 05:52:13.750169] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.783 [2024-12-13 05:52:13.750175] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.783 [2024-12-13 05:52:13.750181] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:13.783 [2024-12-13 05:52:13.750195] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.783 qpair failed and we were unable to recover it. 00:36:13.783 [2024-12-13 05:52:13.760121] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.783 [2024-12-13 05:52:13.760175] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.783 [2024-12-13 05:52:13.760188] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.783 [2024-12-13 05:52:13.760194] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.783 [2024-12-13 05:52:13.760203] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:13.783 [2024-12-13 05:52:13.760217] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.783 qpair failed and we were unable to recover it. 
00:36:13.783 [2024-12-13 05:52:13.770206] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.783 [2024-12-13 05:52:13.770261] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.783 [2024-12-13 05:52:13.770273] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.783 [2024-12-13 05:52:13.770279] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.783 [2024-12-13 05:52:13.770285] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:13.783 [2024-12-13 05:52:13.770299] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.783 qpair failed and we were unable to recover it. 00:36:13.783 [2024-12-13 05:52:13.780270] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.783 [2024-12-13 05:52:13.780326] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.783 [2024-12-13 05:52:13.780339] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.783 [2024-12-13 05:52:13.780345] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.783 [2024-12-13 05:52:13.780351] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:13.783 [2024-12-13 05:52:13.780365] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.783 qpair failed and we were unable to recover it. 00:36:13.783 [2024-12-13 05:52:13.790265] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.783 [2024-12-13 05:52:13.790323] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.783 [2024-12-13 05:52:13.790335] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.783 [2024-12-13 05:52:13.790342] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.783 [2024-12-13 05:52:13.790347] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:13.783 [2024-12-13 05:52:13.790361] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.783 qpair failed and we were unable to recover it. 
00:36:14.042 [2024-12-13 05:52:13.800323] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.042 [2024-12-13 05:52:13.800382] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.042 [2024-12-13 05:52:13.800400] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.042 [2024-12-13 05:52:13.800407] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.042 [2024-12-13 05:52:13.800413] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:14.042 [2024-12-13 05:52:13.800430] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:14.042 qpair failed and we were unable to recover it. 00:36:14.042 [2024-12-13 05:52:13.810306] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.042 [2024-12-13 05:52:13.810361] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.042 [2024-12-13 05:52:13.810378] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.042 [2024-12-13 05:52:13.810385] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.042 [2024-12-13 05:52:13.810391] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:14.042 [2024-12-13 05:52:13.810408] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:14.042 qpair failed and we were unable to recover it. 00:36:14.042 [2024-12-13 05:52:13.820383] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.042 [2024-12-13 05:52:13.820440] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.042 [2024-12-13 05:52:13.820460] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.042 [2024-12-13 05:52:13.820466] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.042 [2024-12-13 05:52:13.820472] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:14.042 [2024-12-13 05:52:13.820487] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:14.042 qpair failed and we were unable to recover it. 
00:36:14.042 [2024-12-13 05:52:13.830379] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.042 [2024-12-13 05:52:13.830482] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.042 [2024-12-13 05:52:13.830495] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.042 [2024-12-13 05:52:13.830501] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.042 [2024-12-13 05:52:13.830507] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:14.042 [2024-12-13 05:52:13.830522] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:14.042 qpair failed and we were unable to recover it. 00:36:14.042 [2024-12-13 05:52:13.840426] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.042 [2024-12-13 05:52:13.840488] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.042 [2024-12-13 05:52:13.840500] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.042 [2024-12-13 05:52:13.840507] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.042 [2024-12-13 05:52:13.840513] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:14.042 [2024-12-13 05:52:13.840528] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:14.042 qpair failed and we were unable to recover it. 00:36:14.042 [2024-12-13 05:52:13.850405] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.042 [2024-12-13 05:52:13.850478] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.042 [2024-12-13 05:52:13.850495] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.042 [2024-12-13 05:52:13.850501] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.042 [2024-12-13 05:52:13.850507] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:14.042 [2024-12-13 05:52:13.850523] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:14.042 qpair failed and we were unable to recover it. 
00:36:14.042 [2024-12-13 05:52:13.860462] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.042 [2024-12-13 05:52:13.860513] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.042 [2024-12-13 05:52:13.860526] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.042 [2024-12-13 05:52:13.860532] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.042 [2024-12-13 05:52:13.860539] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:14.042 [2024-12-13 05:52:13.860553] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:14.042 qpair failed and we were unable to recover it. 00:36:14.042 [2024-12-13 05:52:13.870431] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.042 [2024-12-13 05:52:13.870498] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.042 [2024-12-13 05:52:13.870512] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.042 [2024-12-13 05:52:13.870518] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.042 [2024-12-13 05:52:13.870524] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:14.042 [2024-12-13 05:52:13.870538] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:14.042 qpair failed and we were unable to recover it. 00:36:14.042 [2024-12-13 05:52:13.880532] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.042 [2024-12-13 05:52:13.880584] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.042 [2024-12-13 05:52:13.880597] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.042 [2024-12-13 05:52:13.880604] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.042 [2024-12-13 05:52:13.880610] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:14.042 [2024-12-13 05:52:13.880624] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:14.042 qpair failed and we were unable to recover it. 
00:36:14.565 [2024-12-13 05:52:14.522345] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.565 [2024-12-13 05:52:14.522401] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.565 [2024-12-13 05:52:14.522414] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.565 [2024-12-13 05:52:14.522420] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.565 [2024-12-13 05:52:14.522425] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:14.565 [2024-12-13 05:52:14.522440] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:14.565 qpair failed and we were unable to recover it. 00:36:14.565 [2024-12-13 05:52:14.532380] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.565 [2024-12-13 05:52:14.532430] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.565 [2024-12-13 05:52:14.532443] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.565 [2024-12-13 05:52:14.532452] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.565 [2024-12-13 05:52:14.532458] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:14.565 [2024-12-13 05:52:14.532472] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:14.565 qpair failed and we were unable to recover it. 00:36:14.565 [2024-12-13 05:52:14.542343] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.565 [2024-12-13 05:52:14.542396] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.565 [2024-12-13 05:52:14.542409] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.565 [2024-12-13 05:52:14.542415] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.565 [2024-12-13 05:52:14.542421] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:14.565 [2024-12-13 05:52:14.542434] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:14.565 qpair failed and we were unable to recover it. 
00:36:14.565 [2024-12-13 05:52:14.552444] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.565 [2024-12-13 05:52:14.552504] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.565 [2024-12-13 05:52:14.552516] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.565 [2024-12-13 05:52:14.552522] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.565 [2024-12-13 05:52:14.552528] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:14.565 [2024-12-13 05:52:14.552542] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:14.565 qpair failed and we were unable to recover it. 00:36:14.565 [2024-12-13 05:52:14.562480] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.565 [2024-12-13 05:52:14.562547] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.565 [2024-12-13 05:52:14.562559] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.565 [2024-12-13 05:52:14.562566] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.565 [2024-12-13 05:52:14.562571] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:14.565 [2024-12-13 05:52:14.562585] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:14.565 qpair failed and we were unable to recover it. 00:36:14.565 [2024-12-13 05:52:14.572488] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.565 [2024-12-13 05:52:14.572545] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.565 [2024-12-13 05:52:14.572557] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.565 [2024-12-13 05:52:14.572564] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.565 [2024-12-13 05:52:14.572570] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:14.565 [2024-12-13 05:52:14.572584] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:14.565 qpair failed and we were unable to recover it. 
00:36:14.823 [2024-12-13 05:52:14.582568] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.823 [2024-12-13 05:52:14.582670] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.823 [2024-12-13 05:52:14.582688] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.823 [2024-12-13 05:52:14.582697] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.823 [2024-12-13 05:52:14.582703] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:14.823 [2024-12-13 05:52:14.582720] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:14.823 qpair failed and we were unable to recover it. 00:36:14.823 [2024-12-13 05:52:14.592564] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.823 [2024-12-13 05:52:14.592622] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.823 [2024-12-13 05:52:14.592638] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.823 [2024-12-13 05:52:14.592644] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.823 [2024-12-13 05:52:14.592650] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:14.823 [2024-12-13 05:52:14.592666] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:14.823 qpair failed and we were unable to recover it. 00:36:14.823 [2024-12-13 05:52:14.602599] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.823 [2024-12-13 05:52:14.602654] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.823 [2024-12-13 05:52:14.602667] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.823 [2024-12-13 05:52:14.602673] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.823 [2024-12-13 05:52:14.602679] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:14.824 [2024-12-13 05:52:14.602693] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:14.824 qpair failed and we were unable to recover it. 
00:36:14.824 [2024-12-13 05:52:14.612579] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.824 [2024-12-13 05:52:14.612672] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.824 [2024-12-13 05:52:14.612685] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.824 [2024-12-13 05:52:14.612691] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.824 [2024-12-13 05:52:14.612697] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:14.824 [2024-12-13 05:52:14.612711] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:14.824 qpair failed and we were unable to recover it. 00:36:14.824 [2024-12-13 05:52:14.622643] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.824 [2024-12-13 05:52:14.622698] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.824 [2024-12-13 05:52:14.622712] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.824 [2024-12-13 05:52:14.622718] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.824 [2024-12-13 05:52:14.622724] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:14.824 [2024-12-13 05:52:14.622744] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:14.824 qpair failed and we were unable to recover it. 00:36:14.824 [2024-12-13 05:52:14.632697] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.824 [2024-12-13 05:52:14.632753] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.824 [2024-12-13 05:52:14.632766] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.824 [2024-12-13 05:52:14.632772] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.824 [2024-12-13 05:52:14.632778] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:14.824 [2024-12-13 05:52:14.632793] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:14.824 qpair failed and we were unable to recover it. 
00:36:14.824 [2024-12-13 05:52:14.642714] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.824 [2024-12-13 05:52:14.642769] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.824 [2024-12-13 05:52:14.642782] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.824 [2024-12-13 05:52:14.642788] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.824 [2024-12-13 05:52:14.642795] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:14.824 [2024-12-13 05:52:14.642810] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:14.824 qpair failed and we were unable to recover it. 00:36:14.824 [2024-12-13 05:52:14.652748] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.824 [2024-12-13 05:52:14.652808] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.824 [2024-12-13 05:52:14.652821] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.824 [2024-12-13 05:52:14.652828] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.824 [2024-12-13 05:52:14.652834] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:14.824 [2024-12-13 05:52:14.652849] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:14.824 qpair failed and we were unable to recover it. 00:36:14.824 [2024-12-13 05:52:14.662762] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.824 [2024-12-13 05:52:14.662814] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.824 [2024-12-13 05:52:14.662826] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.824 [2024-12-13 05:52:14.662832] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.824 [2024-12-13 05:52:14.662838] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:14.824 [2024-12-13 05:52:14.662853] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:14.824 qpair failed and we were unable to recover it. 
00:36:14.824 [2024-12-13 05:52:14.672806] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.824 [2024-12-13 05:52:14.672861] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.824 [2024-12-13 05:52:14.672874] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.824 [2024-12-13 05:52:14.672880] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.824 [2024-12-13 05:52:14.672886] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:14.824 [2024-12-13 05:52:14.672900] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:14.824 qpair failed and we were unable to recover it. 00:36:14.824 [2024-12-13 05:52:14.682829] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.824 [2024-12-13 05:52:14.682886] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.824 [2024-12-13 05:52:14.682899] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.824 [2024-12-13 05:52:14.682905] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.824 [2024-12-13 05:52:14.682911] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:14.824 [2024-12-13 05:52:14.682925] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:14.824 qpair failed and we were unable to recover it. 00:36:14.824 [2024-12-13 05:52:14.692871] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.824 [2024-12-13 05:52:14.692952] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.824 [2024-12-13 05:52:14.692965] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.824 [2024-12-13 05:52:14.692972] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.824 [2024-12-13 05:52:14.692977] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:14.824 [2024-12-13 05:52:14.692992] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:14.824 qpair failed and we were unable to recover it. 
00:36:14.824 [2024-12-13 05:52:14.702892] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.824 [2024-12-13 05:52:14.702941] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.824 [2024-12-13 05:52:14.702952] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.824 [2024-12-13 05:52:14.702959] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.824 [2024-12-13 05:52:14.702964] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:14.824 [2024-12-13 05:52:14.702978] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:14.824 qpair failed and we were unable to recover it. 00:36:14.824 [2024-12-13 05:52:14.712976] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.824 [2024-12-13 05:52:14.713082] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.824 [2024-12-13 05:52:14.713097] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.824 [2024-12-13 05:52:14.713103] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.824 [2024-12-13 05:52:14.713109] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:14.824 [2024-12-13 05:52:14.713123] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:14.824 qpair failed and we were unable to recover it. 00:36:14.824 [2024-12-13 05:52:14.722947] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.824 [2024-12-13 05:52:14.723002] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.824 [2024-12-13 05:52:14.723014] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.824 [2024-12-13 05:52:14.723020] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.824 [2024-12-13 05:52:14.723026] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:14.824 [2024-12-13 05:52:14.723040] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:14.824 qpair failed and we were unable to recover it. 
00:36:14.824 [2024-12-13 05:52:14.732962] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.824 [2024-12-13 05:52:14.733014] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.824 [2024-12-13 05:52:14.733025] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.824 [2024-12-13 05:52:14.733031] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.824 [2024-12-13 05:52:14.733037] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:14.824 [2024-12-13 05:52:14.733051] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:14.824 qpair failed and we were unable to recover it. 00:36:14.825 [2024-12-13 05:52:14.742990] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.825 [2024-12-13 05:52:14.743038] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.825 [2024-12-13 05:52:14.743050] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.825 [2024-12-13 05:52:14.743056] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.825 [2024-12-13 05:52:14.743062] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:14.825 [2024-12-13 05:52:14.743076] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:14.825 qpair failed and we were unable to recover it. 00:36:14.825 [2024-12-13 05:52:14.753035] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.825 [2024-12-13 05:52:14.753091] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.825 [2024-12-13 05:52:14.753104] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.825 [2024-12-13 05:52:14.753110] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.825 [2024-12-13 05:52:14.753118] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:14.825 [2024-12-13 05:52:14.753132] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:14.825 qpair failed and we were unable to recover it. 
00:36:14.825 [2024-12-13 05:52:14.763055] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.825 [2024-12-13 05:52:14.763109] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.825 [2024-12-13 05:52:14.763121] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.825 [2024-12-13 05:52:14.763128] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.825 [2024-12-13 05:52:14.763134] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:14.825 [2024-12-13 05:52:14.763147] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:14.825 qpair failed and we were unable to recover it. 00:36:14.825 [2024-12-13 05:52:14.773083] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.825 [2024-12-13 05:52:14.773139] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.825 [2024-12-13 05:52:14.773152] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.825 [2024-12-13 05:52:14.773158] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.825 [2024-12-13 05:52:14.773164] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:14.825 [2024-12-13 05:52:14.773178] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:14.825 qpair failed and we were unable to recover it. 00:36:14.825 [2024-12-13 05:52:14.783109] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.825 [2024-12-13 05:52:14.783163] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.825 [2024-12-13 05:52:14.783175] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.825 [2024-12-13 05:52:14.783181] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.825 [2024-12-13 05:52:14.783187] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:14.825 [2024-12-13 05:52:14.783201] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:14.825 qpair failed and we were unable to recover it. 
00:36:14.825 [2024-12-13 05:52:14.793144] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.825 [2024-12-13 05:52:14.793198] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.825 [2024-12-13 05:52:14.793210] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.825 [2024-12-13 05:52:14.793216] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.825 [2024-12-13 05:52:14.793222] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:14.825 [2024-12-13 05:52:14.793236] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:14.825 qpair failed and we were unable to recover it. 00:36:14.825 [2024-12-13 05:52:14.803168] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.825 [2024-12-13 05:52:14.803219] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.825 [2024-12-13 05:52:14.803232] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.825 [2024-12-13 05:52:14.803238] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.825 [2024-12-13 05:52:14.803244] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:14.825 [2024-12-13 05:52:14.803258] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:14.825 qpair failed and we were unable to recover it. 00:36:14.825 [2024-12-13 05:52:14.813187] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.825 [2024-12-13 05:52:14.813256] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.825 [2024-12-13 05:52:14.813268] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.825 [2024-12-13 05:52:14.813275] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.825 [2024-12-13 05:52:14.813280] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:14.825 [2024-12-13 05:52:14.813294] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:14.825 qpair failed and we were unable to recover it. 
00:36:14.825 [2024-12-13 05:52:14.823211] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.825 [2024-12-13 05:52:14.823264] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.825 [2024-12-13 05:52:14.823277] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.825 [2024-12-13 05:52:14.823283] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.825 [2024-12-13 05:52:14.823289] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:14.825 [2024-12-13 05:52:14.823303] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:14.825 qpair failed and we were unable to recover it. 00:36:14.825 [2024-12-13 05:52:14.833250] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.825 [2024-12-13 05:52:14.833307] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.825 [2024-12-13 05:52:14.833319] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.825 [2024-12-13 05:52:14.833325] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.825 [2024-12-13 05:52:14.833331] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:14.825 [2024-12-13 05:52:14.833345] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:14.825 qpair failed and we were unable to recover it. 00:36:15.084 [2024-12-13 05:52:14.843322] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.084 [2024-12-13 05:52:14.843379] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.084 [2024-12-13 05:52:14.843401] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.084 [2024-12-13 05:52:14.843409] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.084 [2024-12-13 05:52:14.843416] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:15.084 [2024-12-13 05:52:14.843434] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.084 qpair failed and we were unable to recover it. 
00:36:15.084 [2024-12-13 05:52:14.853302] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.084 [2024-12-13 05:52:14.853354] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.084 [2024-12-13 05:52:14.853367] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.084 [2024-12-13 05:52:14.853373] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.084 [2024-12-13 05:52:14.853380] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:15.084 [2024-12-13 05:52:14.853394] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.084 qpair failed and we were unable to recover it. 00:36:15.084 [2024-12-13 05:52:14.863372] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.084 [2024-12-13 05:52:14.863425] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.084 [2024-12-13 05:52:14.863438] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.084 [2024-12-13 05:52:14.863444] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.084 [2024-12-13 05:52:14.863454] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:15.084 [2024-12-13 05:52:14.863469] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.084 qpair failed and we were unable to recover it. 00:36:15.084 [2024-12-13 05:52:14.873360] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.084 [2024-12-13 05:52:14.873413] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.084 [2024-12-13 05:52:14.873427] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.084 [2024-12-13 05:52:14.873433] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.084 [2024-12-13 05:52:14.873439] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:15.084 [2024-12-13 05:52:14.873457] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.084 qpair failed and we were unable to recover it. 
00:36:15.084 [2024-12-13 05:52:14.883394] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.084 [2024-12-13 05:52:14.883451] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.084 [2024-12-13 05:52:14.883464] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.084 [2024-12-13 05:52:14.883470] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.084 [2024-12-13 05:52:14.883479] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:15.084 [2024-12-13 05:52:14.883492] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.084 qpair failed and we were unable to recover it. 00:36:15.084 [2024-12-13 05:52:14.893451] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.084 [2024-12-13 05:52:14.893516] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.084 [2024-12-13 05:52:14.893529] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.084 [2024-12-13 05:52:14.893535] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.084 [2024-12-13 05:52:14.893541] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:15.084 [2024-12-13 05:52:14.893555] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.084 qpair failed and we were unable to recover it. 00:36:15.084 [2024-12-13 05:52:14.903484] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.084 [2024-12-13 05:52:14.903533] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.084 [2024-12-13 05:52:14.903546] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.084 [2024-12-13 05:52:14.903552] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.084 [2024-12-13 05:52:14.903557] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:15.084 [2024-12-13 05:52:14.903572] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.084 qpair failed and we were unable to recover it. 
00:36:15.084 [2024-12-13 05:52:14.913485] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.084 [2024-12-13 05:52:14.913544] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.084 [2024-12-13 05:52:14.913557] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.084 [2024-12-13 05:52:14.913563] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.084 [2024-12-13 05:52:14.913568] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:15.084 [2024-12-13 05:52:14.913582] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.084 qpair failed and we were unable to recover it. 00:36:15.084 [2024-12-13 05:52:14.923513] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.084 [2024-12-13 05:52:14.923610] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.084 [2024-12-13 05:52:14.923623] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.084 [2024-12-13 05:52:14.923629] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.084 [2024-12-13 05:52:14.923635] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:15.084 [2024-12-13 05:52:14.923649] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.084 qpair failed and we were unable to recover it. 00:36:15.085 [2024-12-13 05:52:14.933560] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.085 [2024-12-13 05:52:14.933615] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.085 [2024-12-13 05:52:14.933628] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.085 [2024-12-13 05:52:14.933634] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.085 [2024-12-13 05:52:14.933640] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:15.085 [2024-12-13 05:52:14.933655] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.085 qpair failed and we were unable to recover it. 
00:36:15.085 [2024-12-13 05:52:14.943496] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.085 [2024-12-13 05:52:14.943548] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.085 [2024-12-13 05:52:14.943561] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.085 [2024-12-13 05:52:14.943567] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.085 [2024-12-13 05:52:14.943574] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:15.085 [2024-12-13 05:52:14.943588] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.085 qpair failed and we were unable to recover it. 00:36:15.085 [2024-12-13 05:52:14.953615] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.085 [2024-12-13 05:52:14.953679] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.085 [2024-12-13 05:52:14.953693] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.085 [2024-12-13 05:52:14.953700] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.085 [2024-12-13 05:52:14.953706] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:15.085 [2024-12-13 05:52:14.953720] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.085 qpair failed and we were unable to recover it. 00:36:15.085 [2024-12-13 05:52:14.963631] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.085 [2024-12-13 05:52:14.963705] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.085 [2024-12-13 05:52:14.963718] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.085 [2024-12-13 05:52:14.963724] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.085 [2024-12-13 05:52:14.963730] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:15.085 [2024-12-13 05:52:14.963744] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.085 qpair failed and we were unable to recover it. 
00:36:15.085 [2024-12-13 05:52:14.973664] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.085 [2024-12-13 05:52:14.973715] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.085 [2024-12-13 05:52:14.973731] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.085 [2024-12-13 05:52:14.973737] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.085 [2024-12-13 05:52:14.973743] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:15.085 [2024-12-13 05:52:14.973757] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.085 qpair failed and we were unable to recover it. 00:36:15.085 [2024-12-13 05:52:14.983627] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.085 [2024-12-13 05:52:14.983682] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.085 [2024-12-13 05:52:14.983694] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.085 [2024-12-13 05:52:14.983700] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.085 [2024-12-13 05:52:14.983706] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:15.085 [2024-12-13 05:52:14.983721] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.085 qpair failed and we were unable to recover it. 00:36:15.085 [2024-12-13 05:52:14.993724] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.085 [2024-12-13 05:52:14.993781] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.085 [2024-12-13 05:52:14.993794] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.085 [2024-12-13 05:52:14.993800] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.085 [2024-12-13 05:52:14.993806] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:15.085 [2024-12-13 05:52:14.993820] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.085 qpair failed and we were unable to recover it. 
00:36:15.085 [2024-12-13 05:52:15.003769] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.085 [2024-12-13 05:52:15.003825] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.085 [2024-12-13 05:52:15.003837] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.085 [2024-12-13 05:52:15.003843] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.085 [2024-12-13 05:52:15.003849] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:15.085 [2024-12-13 05:52:15.003863] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.085 qpair failed and we were unable to recover it. 00:36:15.085 [2024-12-13 05:52:15.013752] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.085 [2024-12-13 05:52:15.013808] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.085 [2024-12-13 05:52:15.013820] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.085 [2024-12-13 05:52:15.013829] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.085 [2024-12-13 05:52:15.013835] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:15.085 [2024-12-13 05:52:15.013849] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.085 qpair failed and we were unable to recover it. 00:36:15.085 [2024-12-13 05:52:15.023723] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.085 [2024-12-13 05:52:15.023778] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.085 [2024-12-13 05:52:15.023790] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.085 [2024-12-13 05:52:15.023796] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.085 [2024-12-13 05:52:15.023802] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:15.085 [2024-12-13 05:52:15.023816] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.085 qpair failed and we were unable to recover it. 
00:36:15.085 [2024-12-13 05:52:15.033813] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.085 [2024-12-13 05:52:15.033867] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.085 [2024-12-13 05:52:15.033880] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.085 [2024-12-13 05:52:15.033886] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.085 [2024-12-13 05:52:15.033891] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:15.085 [2024-12-13 05:52:15.033905] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.085 qpair failed and we were unable to recover it. 00:36:15.085 [2024-12-13 05:52:15.043842] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.085 [2024-12-13 05:52:15.043899] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.085 [2024-12-13 05:52:15.043911] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.085 [2024-12-13 05:52:15.043918] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.085 [2024-12-13 05:52:15.043924] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:15.085 [2024-12-13 05:52:15.043937] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.085 qpair failed and we were unable to recover it. 00:36:15.085 [2024-12-13 05:52:15.053815] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.085 [2024-12-13 05:52:15.053868] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.085 [2024-12-13 05:52:15.053880] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.085 [2024-12-13 05:52:15.053886] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.085 [2024-12-13 05:52:15.053892] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:15.085 [2024-12-13 05:52:15.053909] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.085 qpair failed and we were unable to recover it. 
00:36:15.085 [2024-12-13 05:52:15.063884] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:15.085 [2024-12-13 05:52:15.063942] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:15.086 [2024-12-13 05:52:15.063955] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:15.086 [2024-12-13 05:52:15.063962] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:15.086 [2024-12-13 05:52:15.063968] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90
00:36:15.086 [2024-12-13 05:52:15.063982] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:15.086 qpair failed and we were unable to recover it.
00:36:15.086 [2024-12-13 05:52:15.073914] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:15.086 [2024-12-13 05:52:15.073969] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:15.086 [2024-12-13 05:52:15.073981] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:15.086 [2024-12-13 05:52:15.073988] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:15.086 [2024-12-13 05:52:15.073994] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90
00:36:15.086 [2024-12-13 05:52:15.074008] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:15.086 qpair failed and we were unable to recover it.
00:36:15.086 [2024-12-13 05:52:15.083957] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:15.086 [2024-12-13 05:52:15.084015] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:15.086 [2024-12-13 05:52:15.084028] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:15.086 [2024-12-13 05:52:15.084034] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:15.086 [2024-12-13 05:52:15.084040] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90
00:36:15.086 [2024-12-13 05:52:15.084054] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:15.086 qpair failed and we were unable to recover it.
00:36:15.086 [2024-12-13 05:52:15.093970] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:15.086 [2024-12-13 05:52:15.094056] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:15.086 [2024-12-13 05:52:15.094068] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:15.086 [2024-12-13 05:52:15.094074] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:15.086 [2024-12-13 05:52:15.094080] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90
00:36:15.086 [2024-12-13 05:52:15.094094] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:15.086 qpair failed and we were unable to recover it.
00:36:15.344 [2024-12-13 05:52:15.104019] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:15.344 [2024-12-13 05:52:15.104129] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:15.344 [2024-12-13 05:52:15.104146] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:15.344 [2024-12-13 05:52:15.104153] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:15.344 [2024-12-13 05:52:15.104159] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90
00:36:15.344 [2024-12-13 05:52:15.104176] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:15.344 qpair failed and we were unable to recover it.
00:36:15.344 [2024-12-13 05:52:15.113979] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:15.344 [2024-12-13 05:52:15.114037] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:15.344 [2024-12-13 05:52:15.114050] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:15.344 [2024-12-13 05:52:15.114057] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:15.344 [2024-12-13 05:52:15.114063] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90
00:36:15.344 [2024-12-13 05:52:15.114077] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:15.344 qpair failed and we were unable to recover it.
00:36:15.344 [2024-12-13 05:52:15.124074] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:15.344 [2024-12-13 05:52:15.124126] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:15.344 [2024-12-13 05:52:15.124139] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:15.344 [2024-12-13 05:52:15.124145] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:15.344 [2024-12-13 05:52:15.124151] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90
00:36:15.344 [2024-12-13 05:52:15.124165] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:15.344 qpair failed and we were unable to recover it.
00:36:15.344 [2024-12-13 05:52:15.134078] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:15.344 [2024-12-13 05:52:15.134130] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:15.344 [2024-12-13 05:52:15.134142] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:15.344 [2024-12-13 05:52:15.134148] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:15.344 [2024-12-13 05:52:15.134154] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90
00:36:15.344 [2024-12-13 05:52:15.134169] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:15.344 qpair failed and we were unable to recover it.
00:36:15.344 [2024-12-13 05:52:15.144059] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:15.344 [2024-12-13 05:52:15.144107] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:15.344 [2024-12-13 05:52:15.144119] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:15.344 [2024-12-13 05:52:15.144128] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:15.344 [2024-12-13 05:52:15.144134] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90
00:36:15.344 [2024-12-13 05:52:15.144149] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:15.344 qpair failed and we were unable to recover it.
00:36:15.344 [2024-12-13 05:52:15.154091] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:15.344 [2024-12-13 05:52:15.154187] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:15.344 [2024-12-13 05:52:15.154199] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:15.344 [2024-12-13 05:52:15.154205] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:15.344 [2024-12-13 05:52:15.154211] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90
00:36:15.344 [2024-12-13 05:52:15.154225] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:15.344 qpair failed and we were unable to recover it.
00:36:15.344 [2024-12-13 05:52:15.164210] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:15.344 [2024-12-13 05:52:15.164276] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:15.344 [2024-12-13 05:52:15.164288] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:15.344 [2024-12-13 05:52:15.164295] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:15.344 [2024-12-13 05:52:15.164301] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90
00:36:15.344 [2024-12-13 05:52:15.164314] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:15.344 qpair failed and we were unable to recover it.
00:36:15.344 [2024-12-13 05:52:15.174129] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:15.344 [2024-12-13 05:52:15.174183] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:15.344 [2024-12-13 05:52:15.174196] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:15.344 [2024-12-13 05:52:15.174202] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:15.344 [2024-12-13 05:52:15.174208] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90
00:36:15.344 [2024-12-13 05:52:15.174222] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:15.344 qpair failed and we were unable to recover it.
00:36:15.344 [2024-12-13 05:52:15.184264] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:15.344 [2024-12-13 05:52:15.184311] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:15.344 [2024-12-13 05:52:15.184323] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:15.344 [2024-12-13 05:52:15.184329] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:15.344 [2024-12-13 05:52:15.184335] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90
00:36:15.344 [2024-12-13 05:52:15.184354] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:15.344 qpair failed and we were unable to recover it.
00:36:15.344 [2024-12-13 05:52:15.194246] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:15.344 [2024-12-13 05:52:15.194303] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:15.344 [2024-12-13 05:52:15.194315] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:15.344 [2024-12-13 05:52:15.194321] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:15.344 [2024-12-13 05:52:15.194327] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90
00:36:15.344 [2024-12-13 05:52:15.194341] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:15.344 qpair failed and we were unable to recover it.
00:36:15.344 [2024-12-13 05:52:15.204292] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:15.344 [2024-12-13 05:52:15.204347] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:15.344 [2024-12-13 05:52:15.204360] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:15.344 [2024-12-13 05:52:15.204366] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:15.344 [2024-12-13 05:52:15.204372] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90
00:36:15.344 [2024-12-13 05:52:15.204386] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:15.344 qpair failed and we were unable to recover it.
00:36:15.344 [2024-12-13 05:52:15.214314] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:15.344 [2024-12-13 05:52:15.214364] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:15.344 [2024-12-13 05:52:15.214376] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:15.344 [2024-12-13 05:52:15.214383] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:15.344 [2024-12-13 05:52:15.214388] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90
00:36:15.344 [2024-12-13 05:52:15.214403] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:15.344 qpair failed and we were unable to recover it.
00:36:15.344 [2024-12-13 05:52:15.224315] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:15.344 [2024-12-13 05:52:15.224369] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:15.344 [2024-12-13 05:52:15.224382] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:15.344 [2024-12-13 05:52:15.224388] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:15.344 [2024-12-13 05:52:15.224394] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90
00:36:15.344 [2024-12-13 05:52:15.224409] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:15.344 qpair failed and we were unable to recover it.
00:36:15.344 [2024-12-13 05:52:15.234358] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:15.344 [2024-12-13 05:52:15.234411] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:15.344 [2024-12-13 05:52:15.234424] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:15.344 [2024-12-13 05:52:15.234430] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:15.344 [2024-12-13 05:52:15.234436] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90
00:36:15.344 [2024-12-13 05:52:15.234463] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:15.344 qpair failed and we were unable to recover it.
00:36:15.344 [2024-12-13 05:52:15.244415] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:15.344 [2024-12-13 05:52:15.244471] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:15.344 [2024-12-13 05:52:15.244484] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:15.344 [2024-12-13 05:52:15.244490] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:15.344 [2024-12-13 05:52:15.244496] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90
00:36:15.344 [2024-12-13 05:52:15.244510] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:15.344 qpair failed and we were unable to recover it.
00:36:15.344 [2024-12-13 05:52:15.254407] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:15.345 [2024-12-13 05:52:15.254462] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:15.345 [2024-12-13 05:52:15.254474] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:15.345 [2024-12-13 05:52:15.254480] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:15.345 [2024-12-13 05:52:15.254486] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90
00:36:15.345 [2024-12-13 05:52:15.254500] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:15.345 qpair failed and we were unable to recover it.
00:36:15.345 [2024-12-13 05:52:15.264476] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:15.345 [2024-12-13 05:52:15.264534] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:15.345 [2024-12-13 05:52:15.264547] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:15.345 [2024-12-13 05:52:15.264553] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:15.345 [2024-12-13 05:52:15.264559] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90
00:36:15.345 [2024-12-13 05:52:15.264573] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:15.345 qpair failed and we were unable to recover it.
00:36:15.345 [2024-12-13 05:52:15.274406] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:15.345 [2024-12-13 05:52:15.274465] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:15.345 [2024-12-13 05:52:15.274481] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:15.345 [2024-12-13 05:52:15.274487] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:15.345 [2024-12-13 05:52:15.274493] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90
00:36:15.345 [2024-12-13 05:52:15.274507] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:15.345 qpair failed and we were unable to recover it.
00:36:15.345 [2024-12-13 05:52:15.284499] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:15.345 [2024-12-13 05:52:15.284551] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:15.345 [2024-12-13 05:52:15.284564] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:15.345 [2024-12-13 05:52:15.284570] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:15.345 [2024-12-13 05:52:15.284576] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90
00:36:15.345 [2024-12-13 05:52:15.284590] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:15.345 qpair failed and we were unable to recover it.
00:36:15.345 [2024-12-13 05:52:15.294524] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:15.345 [2024-12-13 05:52:15.294595] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:15.345 [2024-12-13 05:52:15.294608] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:15.345 [2024-12-13 05:52:15.294614] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:15.345 [2024-12-13 05:52:15.294619] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90
00:36:15.345 [2024-12-13 05:52:15.294633] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:15.345 qpair failed and we were unable to recover it.
00:36:15.345 [2024-12-13 05:52:15.304545] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:15.345 [2024-12-13 05:52:15.304599] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:15.345 [2024-12-13 05:52:15.304611] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:15.345 [2024-12-13 05:52:15.304617] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:15.345 [2024-12-13 05:52:15.304623] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90
00:36:15.345 [2024-12-13 05:52:15.304637] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:15.345 qpair failed and we were unable to recover it.
00:36:15.345 [2024-12-13 05:52:15.314622] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:15.345 [2024-12-13 05:52:15.314678] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:15.345 [2024-12-13 05:52:15.314690] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:15.345 [2024-12-13 05:52:15.314696] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:15.345 [2024-12-13 05:52:15.314707] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90
00:36:15.345 [2024-12-13 05:52:15.314721] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:15.345 qpair failed and we were unable to recover it.
00:36:15.345 [2024-12-13 05:52:15.324589] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:15.345 [2024-12-13 05:52:15.324642] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:15.345 [2024-12-13 05:52:15.324655] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:15.345 [2024-12-13 05:52:15.324661] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:15.345 [2024-12-13 05:52:15.324666] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90
00:36:15.345 [2024-12-13 05:52:15.324681] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:15.345 qpair failed and we were unable to recover it.
00:36:15.345 [2024-12-13 05:52:15.334614] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:15.345 [2024-12-13 05:52:15.334669] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:15.345 [2024-12-13 05:52:15.334681] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:15.345 [2024-12-13 05:52:15.334688] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:15.345 [2024-12-13 05:52:15.334694] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90
00:36:15.345 [2024-12-13 05:52:15.334708] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:15.345 qpair failed and we were unable to recover it.
00:36:15.345 [2024-12-13 05:52:15.344665] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:15.345 [2024-12-13 05:52:15.344720] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:15.345 [2024-12-13 05:52:15.344733] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:15.345 [2024-12-13 05:52:15.344739] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:15.345 [2024-12-13 05:52:15.344745] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90
00:36:15.345 [2024-12-13 05:52:15.344759] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:15.345 qpair failed and we were unable to recover it.
00:36:15.345 [2024-12-13 05:52:15.354641] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:15.345 [2024-12-13 05:52:15.354705] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:15.345 [2024-12-13 05:52:15.354717] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:15.345 [2024-12-13 05:52:15.354723] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:15.345 [2024-12-13 05:52:15.354729] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90
00:36:15.345 [2024-12-13 05:52:15.354742] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:15.345 qpair failed and we were unable to recover it.
00:36:15.602 [2024-12-13 05:52:15.364759] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:15.602 [2024-12-13 05:52:15.364816] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:15.602 [2024-12-13 05:52:15.364834] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:15.602 [2024-12-13 05:52:15.364841] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:15.602 [2024-12-13 05:52:15.364847] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90
00:36:15.602 [2024-12-13 05:52:15.364864] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:15.602 qpair failed and we were unable to recover it.
00:36:15.602 [2024-12-13 05:52:15.374752] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:15.602 [2024-12-13 05:52:15.374806] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:15.602 [2024-12-13 05:52:15.374820] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:15.602 [2024-12-13 05:52:15.374826] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:15.602 [2024-12-13 05:52:15.374832] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90
00:36:15.602 [2024-12-13 05:52:15.374847] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:15.602 qpair failed and we were unable to recover it.
00:36:15.602 [2024-12-13 05:52:15.384774] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:15.602 [2024-12-13 05:52:15.384824] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:15.602 [2024-12-13 05:52:15.384836] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:15.602 [2024-12-13 05:52:15.384843] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:15.602 [2024-12-13 05:52:15.384849] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90
00:36:15.602 [2024-12-13 05:52:15.384864] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:15.602 qpair failed and we were unable to recover it.
00:36:15.602 [2024-12-13 05:52:15.394811] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:15.602 [2024-12-13 05:52:15.394867] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:15.602 [2024-12-13 05:52:15.394879] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:15.602 [2024-12-13 05:52:15.394885] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:15.602 [2024-12-13 05:52:15.394891] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90
00:36:15.602 [2024-12-13 05:52:15.394905] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:15.603 qpair failed and we were unable to recover it.
00:36:15.603 [2024-12-13 05:52:15.404843] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:15.603 [2024-12-13 05:52:15.404906] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:15.603 [2024-12-13 05:52:15.404922] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:15.603 [2024-12-13 05:52:15.404928] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:15.603 [2024-12-13 05:52:15.404934] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90
00:36:15.603 [2024-12-13 05:52:15.404948] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:15.603 qpair failed and we were unable to recover it.
00:36:15.603 [2024-12-13 05:52:15.414867] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:15.603 [2024-12-13 05:52:15.414921] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:15.603 [2024-12-13 05:52:15.414934] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:15.603 [2024-12-13 05:52:15.414941] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:15.603 [2024-12-13 05:52:15.414946] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90
00:36:15.603 [2024-12-13 05:52:15.414961] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:15.603 qpair failed and we were unable to recover it.
00:36:15.603 [2024-12-13 05:52:15.424893] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:15.603 [2024-12-13 05:52:15.424945] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:15.603 [2024-12-13 05:52:15.424957] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:15.603 [2024-12-13 05:52:15.424963] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:15.603 [2024-12-13 05:52:15.424969] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90
00:36:15.603 [2024-12-13 05:52:15.424984] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:15.603 qpair failed and we were unable to recover it.
00:36:15.603 [2024-12-13 05:52:15.434938] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:15.603 [2024-12-13 05:52:15.435012] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:15.603 [2024-12-13 05:52:15.435025] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:15.603 [2024-12-13 05:52:15.435031] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:15.603 [2024-12-13 05:52:15.435037] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90
00:36:15.603 [2024-12-13 05:52:15.435052] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:15.603 qpair failed and we were unable to recover it.
00:36:15.603 [2024-12-13 05:52:15.444949] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:15.603 [2024-12-13 05:52:15.445003] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:15.603 [2024-12-13 05:52:15.445015] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:15.603 [2024-12-13 05:52:15.445021] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:15.603 [2024-12-13 05:52:15.445030] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90
00:36:15.603 [2024-12-13 05:52:15.445044] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:15.603 qpair failed and we were unable to recover it.
00:36:15.603 [2024-12-13 05:52:15.455033] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:15.603 [2024-12-13 05:52:15.455126] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:15.603 [2024-12-13 05:52:15.455139] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:15.603 [2024-12-13 05:52:15.455147] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:15.603 [2024-12-13 05:52:15.455152] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90
00:36:15.603 [2024-12-13 05:52:15.455167] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:15.603 qpair failed and we were unable to recover it.
00:36:15.603 [2024-12-13 05:52:15.465002] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:15.603 [2024-12-13 05:52:15.465053] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:15.603 [2024-12-13 05:52:15.465065] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:15.603 [2024-12-13 05:52:15.465071] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:15.603 [2024-12-13 05:52:15.465077] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90
00:36:15.603 [2024-12-13 05:52:15.465091] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:15.603 qpair failed and we were unable to recover it.
00:36:15.603 [2024-12-13 05:52:15.475046] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:15.603 [2024-12-13 05:52:15.475099] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:15.603 [2024-12-13 05:52:15.475112] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:15.603 [2024-12-13 05:52:15.475118] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:15.603 [2024-12-13 05:52:15.475124] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90
00:36:15.603 [2024-12-13 05:52:15.475138] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:15.603 qpair failed and we were unable to recover it.
00:36:15.603 [2024-12-13 05:52:15.485063] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:15.603 [2024-12-13 05:52:15.485118] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:15.603 [2024-12-13 05:52:15.485131] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:15.603 [2024-12-13 05:52:15.485137] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:15.603 [2024-12-13 05:52:15.485143] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90
00:36:15.603 [2024-12-13 05:52:15.485157] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:15.603 qpair failed and we were unable to recover it.
00:36:15.603 [2024-12-13 05:52:15.495129] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:15.603 [2024-12-13 05:52:15.495188] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:15.603 [2024-12-13 05:52:15.495200] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:15.603 [2024-12-13 05:52:15.495207] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:15.603 [2024-12-13 05:52:15.495212] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90
00:36:15.603 [2024-12-13 05:52:15.495226] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:15.603 qpair failed and we were unable to recover it.
00:36:15.603 [2024-12-13 05:52:15.505101] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:15.603 [2024-12-13 05:52:15.505160] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:15.603 [2024-12-13 05:52:15.505172] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:15.603 [2024-12-13 05:52:15.505179] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:15.603 [2024-12-13 05:52:15.505185] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90
00:36:15.603 [2024-12-13 05:52:15.505199] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:15.603 qpair failed and we were unable to recover it.
00:36:15.603 [2024-12-13 05:52:15.515137] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:15.603 [2024-12-13 05:52:15.515194] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:15.603 [2024-12-13 05:52:15.515207] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:15.603 [2024-12-13 05:52:15.515214] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:15.603 [2024-12-13 05:52:15.515220] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90
00:36:15.603 [2024-12-13 05:52:15.515234] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:15.603 qpair failed and we were unable to recover it.
00:36:15.603 [2024-12-13 05:52:15.525176] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:15.603 [2024-12-13 05:52:15.525239] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:15.603 [2024-12-13 05:52:15.525252] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:15.603 [2024-12-13 05:52:15.525258] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:15.603 [2024-12-13 05:52:15.525264] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90
00:36:15.603 [2024-12-13 05:52:15.525278] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:15.603 qpair failed and we were unable to recover it.
00:36:15.603 [2024-12-13 05:52:15.535185] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:15.603 [2024-12-13 05:52:15.535240] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:15.604 [2024-12-13 05:52:15.535255] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:15.604 [2024-12-13 05:52:15.535261] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:15.604 [2024-12-13 05:52:15.535267] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90
00:36:15.604 [2024-12-13 05:52:15.535281] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:15.604 qpair failed and we were unable to recover it.
00:36:15.604 [2024-12-13 05:52:15.545263] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:15.604 [2024-12-13 05:52:15.545313] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:15.604 [2024-12-13 05:52:15.545326] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:15.604 [2024-12-13 05:52:15.545332] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:15.604 [2024-12-13 05:52:15.545338] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90
00:36:15.604 [2024-12-13 05:52:15.545352] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:15.604 qpair failed and we were unable to recover it.
00:36:15.604 [2024-12-13 05:52:15.555255] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:15.604 [2024-12-13 05:52:15.555323] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:15.604 [2024-12-13 05:52:15.555336] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:15.604 [2024-12-13 05:52:15.555342] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:15.604 [2024-12-13 05:52:15.555348] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90
00:36:15.604 [2024-12-13 05:52:15.555362] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:15.604 qpair failed and we were unable to recover it.
00:36:15.604 [2024-12-13 05:52:15.565303] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:15.604 [2024-12-13 05:52:15.565357] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:15.604 [2024-12-13 05:52:15.565370] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:15.604 [2024-12-13 05:52:15.565376] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:15.604 [2024-12-13 05:52:15.565382] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90
00:36:15.604 [2024-12-13 05:52:15.565395] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:15.604 qpair failed and we were unable to recover it.
00:36:15.604 [2024-12-13 05:52:15.575339] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:15.604 [2024-12-13 05:52:15.575386] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:15.604 [2024-12-13 05:52:15.575399] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:15.604 [2024-12-13 05:52:15.575408] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:15.604 [2024-12-13 05:52:15.575414] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90
00:36:15.604 [2024-12-13 05:52:15.575427] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:15.604 qpair failed and we were unable to recover it.
00:36:15.604 [2024-12-13 05:52:15.585332] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:15.604 [2024-12-13 05:52:15.585387] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:15.604 [2024-12-13 05:52:15.585399] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:15.604 [2024-12-13 05:52:15.585405] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:15.604 [2024-12-13 05:52:15.585411] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90
00:36:15.604 [2024-12-13 05:52:15.585425] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:15.604 qpair failed and we were unable to recover it.
00:36:15.604 [2024-12-13 05:52:15.595367] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:15.604 [2024-12-13 05:52:15.595420] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:15.604 [2024-12-13 05:52:15.595433] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:15.604 [2024-12-13 05:52:15.595439] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:15.604 [2024-12-13 05:52:15.595445] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90
00:36:15.604 [2024-12-13 05:52:15.595463] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:15.604 qpair failed and we were unable to recover it.
00:36:15.604 [2024-12-13 05:52:15.605402] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:15.604 [2024-12-13 05:52:15.605458] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:15.604 [2024-12-13 05:52:15.605470] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:15.604 [2024-12-13 05:52:15.605477] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:15.604 [2024-12-13 05:52:15.605482] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90
00:36:15.604 [2024-12-13 05:52:15.605496] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:15.604 qpair failed and we were unable to recover it.
00:36:15.604 [2024-12-13 05:52:15.615441] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:15.604 [2024-12-13 05:52:15.615498] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:15.604 [2024-12-13 05:52:15.615514] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:15.604 [2024-12-13 05:52:15.615521] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:15.604 [2024-12-13 05:52:15.615527] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90
00:36:15.604 [2024-12-13 05:52:15.615546] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:15.604 qpair failed and we were unable to recover it.
00:36:15.862 [2024-12-13 05:52:15.625441] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:15.862 [2024-12-13 05:52:15.625502] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:15.863 [2024-12-13 05:52:15.625520] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:15.863 [2024-12-13 05:52:15.625527] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:15.863 [2024-12-13 05:52:15.625533] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90
00:36:15.863 [2024-12-13 05:52:15.625550] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:15.863 qpair failed and we were unable to recover it.
00:36:15.863 [2024-12-13 05:52:15.635511] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.863 [2024-12-13 05:52:15.635569] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.863 [2024-12-13 05:52:15.635583] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.863 [2024-12-13 05:52:15.635589] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.863 [2024-12-13 05:52:15.635595] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:15.863 [2024-12-13 05:52:15.635610] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.863 qpair failed and we were unable to recover it. 00:36:15.863 [2024-12-13 05:52:15.645570] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.863 [2024-12-13 05:52:15.645628] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.863 [2024-12-13 05:52:15.645641] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.863 [2024-12-13 05:52:15.645648] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.863 [2024-12-13 05:52:15.645653] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:15.863 [2024-12-13 05:52:15.645668] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.863 qpair failed and we were unable to recover it. 00:36:15.863 [2024-12-13 05:52:15.655534] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.863 [2024-12-13 05:52:15.655591] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.863 [2024-12-13 05:52:15.655604] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.863 [2024-12-13 05:52:15.655611] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.863 [2024-12-13 05:52:15.655617] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:15.863 [2024-12-13 05:52:15.655631] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.863 qpair failed and we were unable to recover it. 
00:36:15.863 [2024-12-13 05:52:15.665551] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.863 [2024-12-13 05:52:15.665608] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.863 [2024-12-13 05:52:15.665621] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.863 [2024-12-13 05:52:15.665628] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.863 [2024-12-13 05:52:15.665633] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:15.863 [2024-12-13 05:52:15.665648] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.863 qpair failed and we were unable to recover it. 00:36:15.863 [2024-12-13 05:52:15.675586] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.863 [2024-12-13 05:52:15.675641] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.863 [2024-12-13 05:52:15.675654] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.863 [2024-12-13 05:52:15.675660] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.863 [2024-12-13 05:52:15.675666] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:15.863 [2024-12-13 05:52:15.675680] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.863 qpair failed and we were unable to recover it. 00:36:15.863 [2024-12-13 05:52:15.685621] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.863 [2024-12-13 05:52:15.685677] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.863 [2024-12-13 05:52:15.685689] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.863 [2024-12-13 05:52:15.685695] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.863 [2024-12-13 05:52:15.685701] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:15.863 [2024-12-13 05:52:15.685716] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.863 qpair failed and we were unable to recover it. 
00:36:15.863 [2024-12-13 05:52:15.695577] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.863 [2024-12-13 05:52:15.695634] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.863 [2024-12-13 05:52:15.695647] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.863 [2024-12-13 05:52:15.695653] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.863 [2024-12-13 05:52:15.695659] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:15.863 [2024-12-13 05:52:15.695673] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.863 qpair failed and we were unable to recover it. 00:36:15.863 [2024-12-13 05:52:15.705684] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.863 [2024-12-13 05:52:15.705740] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.863 [2024-12-13 05:52:15.705752] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.863 [2024-12-13 05:52:15.705761] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.863 [2024-12-13 05:52:15.705767] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:15.863 [2024-12-13 05:52:15.705781] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.863 qpair failed and we were unable to recover it. 00:36:15.863 [2024-12-13 05:52:15.715721] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.863 [2024-12-13 05:52:15.715786] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.863 [2024-12-13 05:52:15.715798] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.863 [2024-12-13 05:52:15.715805] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.863 [2024-12-13 05:52:15.715810] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:15.863 [2024-12-13 05:52:15.715824] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.863 qpair failed and we were unable to recover it. 
00:36:15.863 [2024-12-13 05:52:15.725743] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.863 [2024-12-13 05:52:15.725796] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.863 [2024-12-13 05:52:15.725808] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.863 [2024-12-13 05:52:15.725814] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.863 [2024-12-13 05:52:15.725821] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:15.863 [2024-12-13 05:52:15.725835] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.863 qpair failed and we were unable to recover it. 00:36:15.863 [2024-12-13 05:52:15.735734] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.863 [2024-12-13 05:52:15.735790] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.863 [2024-12-13 05:52:15.735802] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.863 [2024-12-13 05:52:15.735809] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.863 [2024-12-13 05:52:15.735815] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:15.863 [2024-12-13 05:52:15.735829] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.863 qpair failed and we were unable to recover it. 00:36:15.863 [2024-12-13 05:52:15.745785] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.863 [2024-12-13 05:52:15.745833] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.863 [2024-12-13 05:52:15.745845] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.863 [2024-12-13 05:52:15.745851] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.863 [2024-12-13 05:52:15.745857] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:15.863 [2024-12-13 05:52:15.745874] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.863 qpair failed and we were unable to recover it. 
00:36:15.863 [2024-12-13 05:52:15.755816] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.863 [2024-12-13 05:52:15.755874] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.863 [2024-12-13 05:52:15.755887] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.863 [2024-12-13 05:52:15.755893] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.863 [2024-12-13 05:52:15.755899] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:15.864 [2024-12-13 05:52:15.755913] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.864 qpair failed and we were unable to recover it. 00:36:15.864 [2024-12-13 05:52:15.765866] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.864 [2024-12-13 05:52:15.765932] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.864 [2024-12-13 05:52:15.765944] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.864 [2024-12-13 05:52:15.765950] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.864 [2024-12-13 05:52:15.765955] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:15.864 [2024-12-13 05:52:15.765969] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.864 qpair failed and we were unable to recover it. 00:36:15.864 [2024-12-13 05:52:15.775824] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.864 [2024-12-13 05:52:15.775903] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.864 [2024-12-13 05:52:15.775915] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.864 [2024-12-13 05:52:15.775921] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.864 [2024-12-13 05:52:15.775927] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:15.864 [2024-12-13 05:52:15.775941] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.864 qpair failed and we were unable to recover it. 
00:36:15.864 [2024-12-13 05:52:15.785879] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.864 [2024-12-13 05:52:15.785975] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.864 [2024-12-13 05:52:15.785988] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.864 [2024-12-13 05:52:15.785993] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.864 [2024-12-13 05:52:15.785999] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:15.864 [2024-12-13 05:52:15.786013] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.864 qpair failed and we were unable to recover it. 00:36:15.864 [2024-12-13 05:52:15.795946] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.864 [2024-12-13 05:52:15.796001] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.864 [2024-12-13 05:52:15.796014] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.864 [2024-12-13 05:52:15.796020] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.864 [2024-12-13 05:52:15.796026] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:15.864 [2024-12-13 05:52:15.796040] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.864 qpair failed and we were unable to recover it. 00:36:15.864 [2024-12-13 05:52:15.806006] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.864 [2024-12-13 05:52:15.806063] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.864 [2024-12-13 05:52:15.806075] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.864 [2024-12-13 05:52:15.806081] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.864 [2024-12-13 05:52:15.806087] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:15.864 [2024-12-13 05:52:15.806100] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.864 qpair failed and we were unable to recover it. 
00:36:15.864 [2024-12-13 05:52:15.816056] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.864 [2024-12-13 05:52:15.816108] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.864 [2024-12-13 05:52:15.816121] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.864 [2024-12-13 05:52:15.816127] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.864 [2024-12-13 05:52:15.816132] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:15.864 [2024-12-13 05:52:15.816147] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.864 qpair failed and we were unable to recover it. 00:36:15.864 [2024-12-13 05:52:15.826023] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.864 [2024-12-13 05:52:15.826077] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.864 [2024-12-13 05:52:15.826089] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.864 [2024-12-13 05:52:15.826095] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.864 [2024-12-13 05:52:15.826101] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:15.864 [2024-12-13 05:52:15.826115] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.864 qpair failed and we were unable to recover it. 00:36:15.864 [2024-12-13 05:52:15.836051] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.864 [2024-12-13 05:52:15.836105] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.864 [2024-12-13 05:52:15.836121] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.864 [2024-12-13 05:52:15.836127] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.864 [2024-12-13 05:52:15.836133] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:15.864 [2024-12-13 05:52:15.836147] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.864 qpair failed and we were unable to recover it. 
00:36:15.864 [2024-12-13 05:52:15.846075] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.864 [2024-12-13 05:52:15.846129] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.864 [2024-12-13 05:52:15.846141] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.864 [2024-12-13 05:52:15.846147] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.864 [2024-12-13 05:52:15.846153] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:15.864 [2024-12-13 05:52:15.846168] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.864 qpair failed and we were unable to recover it. 00:36:15.864 [2024-12-13 05:52:15.856110] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.864 [2024-12-13 05:52:15.856160] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.864 [2024-12-13 05:52:15.856178] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.864 [2024-12-13 05:52:15.856185] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.864 [2024-12-13 05:52:15.856191] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:15.864 [2024-12-13 05:52:15.856210] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.864 qpair failed and we were unable to recover it. 00:36:15.864 [2024-12-13 05:52:15.866199] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.864 [2024-12-13 05:52:15.866302] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.864 [2024-12-13 05:52:15.866315] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.864 [2024-12-13 05:52:15.866322] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.864 [2024-12-13 05:52:15.866328] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:15.864 [2024-12-13 05:52:15.866342] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.864 qpair failed and we were unable to recover it. 
00:36:15.864 [2024-12-13 05:52:15.876172] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.864 [2024-12-13 05:52:15.876227] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.864 [2024-12-13 05:52:15.876243] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.864 [2024-12-13 05:52:15.876250] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.864 [2024-12-13 05:52:15.876259] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:15.864 [2024-12-13 05:52:15.876276] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.864 qpair failed and we were unable to recover it. 00:36:16.123 [2024-12-13 05:52:15.886220] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.123 [2024-12-13 05:52:15.886274] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.123 [2024-12-13 05:52:15.886291] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.123 [2024-12-13 05:52:15.886298] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.123 [2024-12-13 05:52:15.886304] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:16.123 [2024-12-13 05:52:15.886321] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.123 qpair failed and we were unable to recover it. 00:36:16.123 [2024-12-13 05:52:15.896236] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.123 [2024-12-13 05:52:15.896308] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.123 [2024-12-13 05:52:15.896321] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.123 [2024-12-13 05:52:15.896328] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.123 [2024-12-13 05:52:15.896334] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:16.123 [2024-12-13 05:52:15.896349] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.123 qpair failed and we were unable to recover it. 
00:36:16.123 [2024-12-13 05:52:15.906312] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.123 [2024-12-13 05:52:15.906364] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.123 [2024-12-13 05:52:15.906377] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.123 [2024-12-13 05:52:15.906383] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.123 [2024-12-13 05:52:15.906389] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:16.123 [2024-12-13 05:52:15.906404] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.123 qpair failed and we were unable to recover it. 00:36:16.123 [2024-12-13 05:52:15.916308] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.123 [2024-12-13 05:52:15.916363] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.123 [2024-12-13 05:52:15.916376] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.123 [2024-12-13 05:52:15.916382] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.123 [2024-12-13 05:52:15.916388] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:16.123 [2024-12-13 05:52:15.916402] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.123 qpair failed and we were unable to recover it. 00:36:16.123 [2024-12-13 05:52:15.926240] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.123 [2024-12-13 05:52:15.926304] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.123 [2024-12-13 05:52:15.926318] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.123 [2024-12-13 05:52:15.926324] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.123 [2024-12-13 05:52:15.926331] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:16.123 [2024-12-13 05:52:15.926345] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.123 qpair failed and we were unable to recover it. 
00:36:16.123 [2024-12-13 05:52:15.936337] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.123 [2024-12-13 05:52:15.936386] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.123 [2024-12-13 05:52:15.936399] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.123 [2024-12-13 05:52:15.936405] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.123 [2024-12-13 05:52:15.936411] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:16.123 [2024-12-13 05:52:15.936426] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.123 qpair failed and we were unable to recover it. 00:36:16.123 [2024-12-13 05:52:15.946406] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.123 [2024-12-13 05:52:15.946467] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.123 [2024-12-13 05:52:15.946480] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.123 [2024-12-13 05:52:15.946486] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.123 [2024-12-13 05:52:15.946493] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:16.123 [2024-12-13 05:52:15.946507] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.123 qpair failed and we were unable to recover it. 00:36:16.123 [2024-12-13 05:52:15.956397] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.123 [2024-12-13 05:52:15.956457] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.123 [2024-12-13 05:52:15.956470] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.123 [2024-12-13 05:52:15.956477] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.123 [2024-12-13 05:52:15.956483] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:16.123 [2024-12-13 05:52:15.956497] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.123 qpair failed and we were unable to recover it. 
00:36:16.123 [2024-12-13 05:52:15.966438] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.123 [2024-12-13 05:52:15.966506] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.123 [2024-12-13 05:52:15.966522] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.123 [2024-12-13 05:52:15.966528] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.123 [2024-12-13 05:52:15.966534] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:16.123 [2024-12-13 05:52:15.966548] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.123 qpair failed and we were unable to recover it. 00:36:16.123 [2024-12-13 05:52:15.976450] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.123 [2024-12-13 05:52:15.976505] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.123 [2024-12-13 05:52:15.976518] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.123 [2024-12-13 05:52:15.976524] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.123 [2024-12-13 05:52:15.976530] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:16.123 [2024-12-13 05:52:15.976544] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.123 qpair failed and we were unable to recover it. 00:36:16.123 [2024-12-13 05:52:15.986532] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.123 [2024-12-13 05:52:15.986582] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.123 [2024-12-13 05:52:15.986595] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.123 [2024-12-13 05:52:15.986601] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.123 [2024-12-13 05:52:15.986607] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:16.123 [2024-12-13 05:52:15.986621] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.123 qpair failed and we were unable to recover it. 
00:36:16.123 [2024-12-13 05:52:15.996508] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.123 [2024-12-13 05:52:15.996564] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.123 [2024-12-13 05:52:15.996577] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.123 [2024-12-13 05:52:15.996583] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.123 [2024-12-13 05:52:15.996589] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:16.124 [2024-12-13 05:52:15.996603] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.124 qpair failed and we were unable to recover it. 00:36:16.124 [2024-12-13 05:52:16.006528] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.124 [2024-12-13 05:52:16.006612] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.124 [2024-12-13 05:52:16.006624] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.124 [2024-12-13 05:52:16.006630] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.124 [2024-12-13 05:52:16.006640] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:16.124 [2024-12-13 05:52:16.006654] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.124 qpair failed and we were unable to recover it. 00:36:16.124 [2024-12-13 05:52:16.016554] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.124 [2024-12-13 05:52:16.016608] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.124 [2024-12-13 05:52:16.016621] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.124 [2024-12-13 05:52:16.016627] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.124 [2024-12-13 05:52:16.016633] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:16.124 [2024-12-13 05:52:16.016647] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.124 qpair failed and we were unable to recover it. 
00:36:16.124 [2024-12-13 05:52:16.026643] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.124 [2024-12-13 05:52:16.026698] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.124 [2024-12-13 05:52:16.026712] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.124 [2024-12-13 05:52:16.026719] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.124 [2024-12-13 05:52:16.026725] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:16.124 [2024-12-13 05:52:16.026739] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.124 qpair failed and we were unable to recover it. 00:36:16.124 [2024-12-13 05:52:16.036641] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.124 [2024-12-13 05:52:16.036693] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.124 [2024-12-13 05:52:16.036706] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.124 [2024-12-13 05:52:16.036712] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.124 [2024-12-13 05:52:16.036718] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:16.124 [2024-12-13 05:52:16.036732] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.124 qpair failed and we were unable to recover it. 00:36:16.124 [2024-12-13 05:52:16.046643] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.124 [2024-12-13 05:52:16.046703] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.124 [2024-12-13 05:52:16.046716] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.124 [2024-12-13 05:52:16.046722] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.124 [2024-12-13 05:52:16.046728] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:16.124 [2024-12-13 05:52:16.046742] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.124 qpair failed and we were unable to recover it. 
00:36:16.124 [2024-12-13 05:52:16.056664] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.124 [2024-12-13 05:52:16.056722] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.124 [2024-12-13 05:52:16.056734] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.124 [2024-12-13 05:52:16.056740] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.124 [2024-12-13 05:52:16.056746] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:16.124 [2024-12-13 05:52:16.056760] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.124 qpair failed and we were unable to recover it. 00:36:16.124 [2024-12-13 05:52:16.066625] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.124 [2024-12-13 05:52:16.066674] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.124 [2024-12-13 05:52:16.066686] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.124 [2024-12-13 05:52:16.066692] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.124 [2024-12-13 05:52:16.066698] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:16.124 [2024-12-13 05:52:16.066712] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.124 qpair failed and we were unable to recover it. 00:36:16.124 [2024-12-13 05:52:16.076737] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.124 [2024-12-13 05:52:16.076789] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.124 [2024-12-13 05:52:16.076802] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.124 [2024-12-13 05:52:16.076809] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.124 [2024-12-13 05:52:16.076814] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:16.124 [2024-12-13 05:52:16.076829] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.124 qpair failed and we were unable to recover it. 
00:36:16.124 [2024-12-13 05:52:16.086753] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.124 [2024-12-13 05:52:16.086829] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.124 [2024-12-13 05:52:16.086841] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.124 [2024-12-13 05:52:16.086848] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.124 [2024-12-13 05:52:16.086854] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:16.124 [2024-12-13 05:52:16.086868] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.124 qpair failed and we were unable to recover it. 00:36:16.124 [2024-12-13 05:52:16.096801] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.124 [2024-12-13 05:52:16.096856] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.124 [2024-12-13 05:52:16.096868] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.124 [2024-12-13 05:52:16.096874] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.124 [2024-12-13 05:52:16.096880] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:16.124 [2024-12-13 05:52:16.096894] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.124 qpair failed and we were unable to recover it. 00:36:16.124 [2024-12-13 05:52:16.106777] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.124 [2024-12-13 05:52:16.106863] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.124 [2024-12-13 05:52:16.106875] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.124 [2024-12-13 05:52:16.106881] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.124 [2024-12-13 05:52:16.106887] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:16.124 [2024-12-13 05:52:16.106900] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.124 qpair failed and we were unable to recover it. 
00:36:16.124 [2024-12-13 05:52:16.116876] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.124 [2024-12-13 05:52:16.116975] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.124 [2024-12-13 05:52:16.116988] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.124 [2024-12-13 05:52:16.116994] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.124 [2024-12-13 05:52:16.117000] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:16.124 [2024-12-13 05:52:16.117014] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.124 qpair failed and we were unable to recover it. 00:36:16.124 [2024-12-13 05:52:16.126861] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.124 [2024-12-13 05:52:16.126937] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.124 [2024-12-13 05:52:16.126950] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.124 [2024-12-13 05:52:16.126957] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.124 [2024-12-13 05:52:16.126962] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:16.124 [2024-12-13 05:52:16.126976] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.124 qpair failed and we were unable to recover it. 00:36:16.125 [2024-12-13 05:52:16.136885] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.125 [2024-12-13 05:52:16.136938] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.125 [2024-12-13 05:52:16.136955] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.125 [2024-12-13 05:52:16.136965] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.125 [2024-12-13 05:52:16.136971] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:16.125 [2024-12-13 05:52:16.136987] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.125 qpair failed and we were unable to recover it. 
00:36:16.383 [2024-12-13 05:52:16.146853] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.383 [2024-12-13 05:52:16.146907] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.383 [2024-12-13 05:52:16.146923] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.383 [2024-12-13 05:52:16.146930] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.383 [2024-12-13 05:52:16.146936] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:16.383 [2024-12-13 05:52:16.146953] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.383 qpair failed and we were unable to recover it. 00:36:16.383 [2024-12-13 05:52:16.156951] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.383 [2024-12-13 05:52:16.157004] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.383 [2024-12-13 05:52:16.157018] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.383 [2024-12-13 05:52:16.157024] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.383 [2024-12-13 05:52:16.157030] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:16.384 [2024-12-13 05:52:16.157045] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.384 qpair failed and we were unable to recover it. 00:36:16.384 [2024-12-13 05:52:16.166993] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.384 [2024-12-13 05:52:16.167047] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.384 [2024-12-13 05:52:16.167060] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.384 [2024-12-13 05:52:16.167066] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.384 [2024-12-13 05:52:16.167072] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:16.384 [2024-12-13 05:52:16.167086] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.384 qpair failed and we were unable to recover it. 
00:36:16.384 [2024-12-13 05:52:16.176998] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.384 [2024-12-13 05:52:16.177054] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.384 [2024-12-13 05:52:16.177067] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.384 [2024-12-13 05:52:16.177073] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.384 [2024-12-13 05:52:16.177079] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:16.384 [2024-12-13 05:52:16.177096] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.384 qpair failed and we were unable to recover it. 00:36:16.384 [2024-12-13 05:52:16.187047] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.384 [2024-12-13 05:52:16.187103] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.384 [2024-12-13 05:52:16.187116] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.384 [2024-12-13 05:52:16.187123] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.384 [2024-12-13 05:52:16.187128] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:16.384 [2024-12-13 05:52:16.187143] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.384 qpair failed and we were unable to recover it. 00:36:16.384 [2024-12-13 05:52:16.197137] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.384 [2024-12-13 05:52:16.197223] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.384 [2024-12-13 05:52:16.197235] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.384 [2024-12-13 05:52:16.197242] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.384 [2024-12-13 05:52:16.197247] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:16.384 [2024-12-13 05:52:16.197261] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.384 qpair failed and we were unable to recover it. 
00:36:16.384 [2024-12-13 05:52:16.207083] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.384 [2024-12-13 05:52:16.207140] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.384 [2024-12-13 05:52:16.207153] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.384 [2024-12-13 05:52:16.207159] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.384 [2024-12-13 05:52:16.207165] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:16.384 [2024-12-13 05:52:16.207179] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.384 qpair failed and we were unable to recover it. 00:36:16.384 [2024-12-13 05:52:16.217110] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.384 [2024-12-13 05:52:16.217159] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.384 [2024-12-13 05:52:16.217171] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.384 [2024-12-13 05:52:16.217177] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.384 [2024-12-13 05:52:16.217183] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:16.384 [2024-12-13 05:52:16.217197] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.384 qpair failed and we were unable to recover it. 00:36:16.384 [2024-12-13 05:52:16.227186] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.384 [2024-12-13 05:52:16.227242] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.384 [2024-12-13 05:52:16.227255] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.384 [2024-12-13 05:52:16.227261] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.384 [2024-12-13 05:52:16.227267] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:16.384 [2024-12-13 05:52:16.227280] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.384 qpair failed and we were unable to recover it. 
00:36:16.384 [2024-12-13 05:52:16.237177] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.384 [2024-12-13 05:52:16.237243] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.384 [2024-12-13 05:52:16.237256] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.384 [2024-12-13 05:52:16.237262] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.384 [2024-12-13 05:52:16.237268] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:16.384 [2024-12-13 05:52:16.237283] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.384 qpair failed and we were unable to recover it. 00:36:16.384 [2024-12-13 05:52:16.247203] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.384 [2024-12-13 05:52:16.247255] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.384 [2024-12-13 05:52:16.247267] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.384 [2024-12-13 05:52:16.247273] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.384 [2024-12-13 05:52:16.247280] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:16.384 [2024-12-13 05:52:16.247293] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.384 qpair failed and we were unable to recover it. 00:36:16.384 [2024-12-13 05:52:16.257242] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.384 [2024-12-13 05:52:16.257294] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.384 [2024-12-13 05:52:16.257307] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.384 [2024-12-13 05:52:16.257313] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.384 [2024-12-13 05:52:16.257319] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:16.384 [2024-12-13 05:52:16.257333] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.384 qpair failed and we were unable to recover it. 
00:36:16.384 [2024-12-13 05:52:16.267308] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.384 [2024-12-13 05:52:16.267408] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.384 [2024-12-13 05:52:16.267421] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.384 [2024-12-13 05:52:16.267430] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.384 [2024-12-13 05:52:16.267436] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:16.384 [2024-12-13 05:52:16.267454] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.384 qpair failed and we were unable to recover it. 00:36:16.384 [2024-12-13 05:52:16.277338] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.384 [2024-12-13 05:52:16.277440] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.384 [2024-12-13 05:52:16.277456] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.384 [2024-12-13 05:52:16.277462] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.384 [2024-12-13 05:52:16.277467] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:16.384 [2024-12-13 05:52:16.277481] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.384 qpair failed and we were unable to recover it. 00:36:16.384 [2024-12-13 05:52:16.287373] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.384 [2024-12-13 05:52:16.287457] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.384 [2024-12-13 05:52:16.287470] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.384 [2024-12-13 05:52:16.287476] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.384 [2024-12-13 05:52:16.287482] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:16.384 [2024-12-13 05:52:16.287496] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.384 qpair failed and we were unable to recover it. 
00:36:16.384 [2024-12-13 05:52:16.297329] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.385 [2024-12-13 05:52:16.297384] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.385 [2024-12-13 05:52:16.297397] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.385 [2024-12-13 05:52:16.297403] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.385 [2024-12-13 05:52:16.297409] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:16.385 [2024-12-13 05:52:16.297423] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.385 qpair failed and we were unable to recover it. 00:36:16.385 [2024-12-13 05:52:16.307364] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.385 [2024-12-13 05:52:16.307421] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.385 [2024-12-13 05:52:16.307434] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.385 [2024-12-13 05:52:16.307440] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.385 [2024-12-13 05:52:16.307445] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:16.385 [2024-12-13 05:52:16.307466] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.385 qpair failed and we were unable to recover it. 00:36:16.385 [2024-12-13 05:52:16.317408] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.385 [2024-12-13 05:52:16.317484] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.385 [2024-12-13 05:52:16.317497] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.385 [2024-12-13 05:52:16.317504] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.385 [2024-12-13 05:52:16.317509] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:16.385 [2024-12-13 05:52:16.317524] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.385 qpair failed and we were unable to recover it. 
00:36:16.385 [2024-12-13 05:52:16.327415] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.385 [2024-12-13 05:52:16.327480] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.385 [2024-12-13 05:52:16.327493] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.385 [2024-12-13 05:52:16.327499] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.385 [2024-12-13 05:52:16.327505] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:16.385 [2024-12-13 05:52:16.327519] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.385 qpair failed and we were unable to recover it. 00:36:16.385 [2024-12-13 05:52:16.337528] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.385 [2024-12-13 05:52:16.337586] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.385 [2024-12-13 05:52:16.337598] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.385 [2024-12-13 05:52:16.337604] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.385 [2024-12-13 05:52:16.337610] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:16.385 [2024-12-13 05:52:16.337624] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.385 qpair failed and we were unable to recover it. 00:36:16.385 [2024-12-13 05:52:16.347513] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.385 [2024-12-13 05:52:16.347565] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.385 [2024-12-13 05:52:16.347577] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.385 [2024-12-13 05:52:16.347584] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.385 [2024-12-13 05:52:16.347589] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:16.385 [2024-12-13 05:52:16.347603] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.385 qpair failed and we were unable to recover it. 
00:36:16.385 [2024-12-13 05:52:16.357552] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.385 [2024-12-13 05:52:16.357606] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.385 [2024-12-13 05:52:16.357619] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.385 [2024-12-13 05:52:16.357625] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.385 [2024-12-13 05:52:16.357630] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:16.385 [2024-12-13 05:52:16.357645] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.385 qpair failed and we were unable to recover it. 00:36:16.385 [2024-12-13 05:52:16.367517] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.385 [2024-12-13 05:52:16.367572] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.385 [2024-12-13 05:52:16.367584] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.385 [2024-12-13 05:52:16.367590] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.385 [2024-12-13 05:52:16.367596] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:16.385 [2024-12-13 05:52:16.367610] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.385 qpair failed and we were unable to recover it. 00:36:16.385 [2024-12-13 05:52:16.377584] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.385 [2024-12-13 05:52:16.377665] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.385 [2024-12-13 05:52:16.377677] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.385 [2024-12-13 05:52:16.377683] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.385 [2024-12-13 05:52:16.377688] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:16.385 [2024-12-13 05:52:16.377701] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.385 qpair failed and we were unable to recover it. 
00:36:16.385 [2024-12-13 05:52:16.387563] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.385 [2024-12-13 05:52:16.387649] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.385 [2024-12-13 05:52:16.387662] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.385 [2024-12-13 05:52:16.387668] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.385 [2024-12-13 05:52:16.387674] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:16.385 [2024-12-13 05:52:16.387688] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.385 qpair failed and we were unable to recover it. 00:36:16.385 [2024-12-13 05:52:16.397617] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.385 [2024-12-13 05:52:16.397719] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.385 [2024-12-13 05:52:16.397740] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.385 [2024-12-13 05:52:16.397747] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.385 [2024-12-13 05:52:16.397752] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:16.385 [2024-12-13 05:52:16.397769] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.385 qpair failed and we were unable to recover it. 00:36:16.644 [2024-12-13 05:52:16.407662] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.644 [2024-12-13 05:52:16.407721] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.644 [2024-12-13 05:52:16.407738] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.644 [2024-12-13 05:52:16.407745] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.644 [2024-12-13 05:52:16.407751] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:16.644 [2024-12-13 05:52:16.407768] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.644 qpair failed and we were unable to recover it. 
00:36:16.644 [2024-12-13 05:52:16.417676] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.644 [2024-12-13 05:52:16.417766] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.644 [2024-12-13 05:52:16.417779] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.644 [2024-12-13 05:52:16.417786] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.644 [2024-12-13 05:52:16.417791] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:16.644 [2024-12-13 05:52:16.417806] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.644 qpair failed and we were unable to recover it. 00:36:16.644 [2024-12-13 05:52:16.427636] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.644 [2024-12-13 05:52:16.427691] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.644 [2024-12-13 05:52:16.427704] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.644 [2024-12-13 05:52:16.427710] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.644 [2024-12-13 05:52:16.427716] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:16.644 [2024-12-13 05:52:16.427731] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.644 qpair failed and we were unable to recover it. 00:36:16.644 [2024-12-13 05:52:16.437768] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.644 [2024-12-13 05:52:16.437838] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.644 [2024-12-13 05:52:16.437852] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.644 [2024-12-13 05:52:16.437858] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.644 [2024-12-13 05:52:16.437867] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:16.644 [2024-12-13 05:52:16.437881] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.644 qpair failed and we were unable to recover it. 
00:36:16.644 [2024-12-13 05:52:16.447767] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.644 [2024-12-13 05:52:16.447821] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.644 [2024-12-13 05:52:16.447834] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.644 [2024-12-13 05:52:16.447840] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.644 [2024-12-13 05:52:16.447846] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:16.644 [2024-12-13 05:52:16.447860] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.644 qpair failed and we were unable to recover it. 00:36:16.644 [2024-12-13 05:52:16.457719] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.644 [2024-12-13 05:52:16.457772] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.644 [2024-12-13 05:52:16.457785] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.644 [2024-12-13 05:52:16.457791] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.644 [2024-12-13 05:52:16.457797] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:16.644 [2024-12-13 05:52:16.457811] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.644 qpair failed and we were unable to recover it. 00:36:16.644 [2024-12-13 05:52:16.467817] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.644 [2024-12-13 05:52:16.467864] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.644 [2024-12-13 05:52:16.467876] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.644 [2024-12-13 05:52:16.467882] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.644 [2024-12-13 05:52:16.467888] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:16.644 [2024-12-13 05:52:16.467902] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.644 qpair failed and we were unable to recover it. 
00:36:16.644 [2024-12-13 05:52:16.477876] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.644 [2024-12-13 05:52:16.477934] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.644 [2024-12-13 05:52:16.477946] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.644 [2024-12-13 05:52:16.477953] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.645 [2024-12-13 05:52:16.477958] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:16.645 [2024-12-13 05:52:16.477973] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.645 qpair failed and we were unable to recover it. 00:36:16.645 [2024-12-13 05:52:16.487911] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.645 [2024-12-13 05:52:16.487966] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.645 [2024-12-13 05:52:16.487979] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.645 [2024-12-13 05:52:16.487985] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.645 [2024-12-13 05:52:16.487991] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:16.645 [2024-12-13 05:52:16.488005] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.645 qpair failed and we were unable to recover it. 00:36:16.645 [2024-12-13 05:52:16.497874] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.645 [2024-12-13 05:52:16.497929] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.645 [2024-12-13 05:52:16.497942] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.645 [2024-12-13 05:52:16.497948] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.645 [2024-12-13 05:52:16.497954] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:16.645 [2024-12-13 05:52:16.497968] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.645 qpair failed and we were unable to recover it. 
00:36:16.645 [2024-12-13 05:52:16.507899] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.645 [2024-12-13 05:52:16.507957] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.645 [2024-12-13 05:52:16.507970] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.645 [2024-12-13 05:52:16.507976] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.645 [2024-12-13 05:52:16.507982] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:16.645 [2024-12-13 05:52:16.507996] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.645 qpair failed and we were unable to recover it. 00:36:16.645 [2024-12-13 05:52:16.517893] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.645 [2024-12-13 05:52:16.517947] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.645 [2024-12-13 05:52:16.517959] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.645 [2024-12-13 05:52:16.517965] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.645 [2024-12-13 05:52:16.517971] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:16.645 [2024-12-13 05:52:16.517985] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.645 qpair failed and we were unable to recover it. 00:36:16.645 [2024-12-13 05:52:16.527909] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.645 [2024-12-13 05:52:16.527964] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.645 [2024-12-13 05:52:16.527981] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.645 [2024-12-13 05:52:16.527987] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.645 [2024-12-13 05:52:16.527992] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:16.645 [2024-12-13 05:52:16.528007] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.645 qpair failed and we were unable to recover it. 
00:36:16.645 [2024-12-13 05:52:16.537917] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.645 [2024-12-13 05:52:16.537968] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.645 [2024-12-13 05:52:16.537981] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.645 [2024-12-13 05:52:16.537987] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.645 [2024-12-13 05:52:16.537993] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:16.645 [2024-12-13 05:52:16.538007] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.645 qpair failed and we were unable to recover it. 00:36:16.645 [2024-12-13 05:52:16.548019] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.645 [2024-12-13 05:52:16.548070] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.645 [2024-12-13 05:52:16.548082] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.645 [2024-12-13 05:52:16.548089] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.645 [2024-12-13 05:52:16.548094] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:16.645 [2024-12-13 05:52:16.548108] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.645 qpair failed and we were unable to recover it. 00:36:16.645 [2024-12-13 05:52:16.558050] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.645 [2024-12-13 05:52:16.558110] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.645 [2024-12-13 05:52:16.558122] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.645 [2024-12-13 05:52:16.558129] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.645 [2024-12-13 05:52:16.558135] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:16.645 [2024-12-13 05:52:16.558148] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.645 qpair failed and we were unable to recover it. 
00:36:16.645 [2024-12-13 05:52:16.568067] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.645 [2024-12-13 05:52:16.568120] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.645 [2024-12-13 05:52:16.568132] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.645 [2024-12-13 05:52:16.568138] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.645 [2024-12-13 05:52:16.568147] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:16.645 [2024-12-13 05:52:16.568161] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.645 qpair failed and we were unable to recover it. 00:36:16.645 [2024-12-13 05:52:16.578128] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.645 [2024-12-13 05:52:16.578177] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.645 [2024-12-13 05:52:16.578189] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.645 [2024-12-13 05:52:16.578195] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.645 [2024-12-13 05:52:16.578201] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:16.645 [2024-12-13 05:52:16.578216] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.645 qpair failed and we were unable to recover it. 00:36:16.645 [2024-12-13 05:52:16.588166] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.645 [2024-12-13 05:52:16.588217] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.645 [2024-12-13 05:52:16.588229] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.645 [2024-12-13 05:52:16.588235] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.645 [2024-12-13 05:52:16.588241] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:16.645 [2024-12-13 05:52:16.588255] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.645 qpair failed and we were unable to recover it. 
00:36:16.645 [2024-12-13 05:52:16.598168] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.645 [2024-12-13 05:52:16.598223] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.645 [2024-12-13 05:52:16.598235] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.645 [2024-12-13 05:52:16.598241] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.645 [2024-12-13 05:52:16.598247] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:16.645 [2024-12-13 05:52:16.598261] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.645 qpair failed and we were unable to recover it. 00:36:16.645 [2024-12-13 05:52:16.608244] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.645 [2024-12-13 05:52:16.608310] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.645 [2024-12-13 05:52:16.608322] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.646 [2024-12-13 05:52:16.608328] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.646 [2024-12-13 05:52:16.608334] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:16.646 [2024-12-13 05:52:16.608347] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.646 qpair failed and we were unable to recover it. 00:36:16.646 [2024-12-13 05:52:16.618214] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.646 [2024-12-13 05:52:16.618270] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.646 [2024-12-13 05:52:16.618283] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.646 [2024-12-13 05:52:16.618290] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.646 [2024-12-13 05:52:16.618295] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:16.646 [2024-12-13 05:52:16.618309] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.646 qpair failed and we were unable to recover it. 
00:36:16.646 [2024-12-13 05:52:16.628257] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.646 [2024-12-13 05:52:16.628310] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.646 [2024-12-13 05:52:16.628323] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.646 [2024-12-13 05:52:16.628329] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.646 [2024-12-13 05:52:16.628335] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:16.646 [2024-12-13 05:52:16.628349] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.646 qpair failed and we were unable to recover it. 00:36:16.646 [2024-12-13 05:52:16.638281] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.646 [2024-12-13 05:52:16.638339] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.646 [2024-12-13 05:52:16.638351] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.646 [2024-12-13 05:52:16.638357] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.646 [2024-12-13 05:52:16.638363] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:16.646 [2024-12-13 05:52:16.638377] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.646 qpair failed and we were unable to recover it. 00:36:16.646 [2024-12-13 05:52:16.648398] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.646 [2024-12-13 05:52:16.648457] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.646 [2024-12-13 05:52:16.648470] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.646 [2024-12-13 05:52:16.648476] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.646 [2024-12-13 05:52:16.648481] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:16.646 [2024-12-13 05:52:16.648495] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.646 qpair failed and we were unable to recover it. 
00:36:16.646 [2024-12-13 05:52:16.658399] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.646 [2024-12-13 05:52:16.658469] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.905 [2024-12-13 05:52:16.658490] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.905 [2024-12-13 05:52:16.658502] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.905 [2024-12-13 05:52:16.658511] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:16.905 [2024-12-13 05:52:16.658539] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.905 qpair failed and we were unable to recover it. 00:36:16.905 [2024-12-13 05:52:16.668379] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.905 [2024-12-13 05:52:16.668436] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.905 [2024-12-13 05:52:16.668457] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.905 [2024-12-13 05:52:16.668464] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.905 [2024-12-13 05:52:16.668470] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:16.905 [2024-12-13 05:52:16.668486] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.905 qpair failed and we were unable to recover it. 00:36:16.905 [2024-12-13 05:52:16.678436] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.905 [2024-12-13 05:52:16.678500] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.905 [2024-12-13 05:52:16.678514] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.905 [2024-12-13 05:52:16.678521] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.905 [2024-12-13 05:52:16.678526] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:16.905 [2024-12-13 05:52:16.678541] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.905 qpair failed and we were unable to recover it. 
00:36:16.905 [2024-12-13 05:52:16.688482] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.905 [2024-12-13 05:52:16.688545] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.905 [2024-12-13 05:52:16.688558] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.905 [2024-12-13 05:52:16.688564] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.905 [2024-12-13 05:52:16.688570] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:16.905 [2024-12-13 05:52:16.688585] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.905 qpair failed and we were unable to recover it. 00:36:16.905 [2024-12-13 05:52:16.698408] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.905 [2024-12-13 05:52:16.698500] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.905 [2024-12-13 05:52:16.698513] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.905 [2024-12-13 05:52:16.698522] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.905 [2024-12-13 05:52:16.698528] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:16.905 [2024-12-13 05:52:16.698542] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.905 qpair failed and we were unable to recover it. 00:36:16.905 [2024-12-13 05:52:16.708509] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.905 [2024-12-13 05:52:16.708564] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.905 [2024-12-13 05:52:16.708577] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.905 [2024-12-13 05:52:16.708584] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.905 [2024-12-13 05:52:16.708590] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:16.905 [2024-12-13 05:52:16.708605] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.905 qpair failed and we were unable to recover it. 
00:36:16.905 [2024-12-13 05:52:16.718541] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.905 [2024-12-13 05:52:16.718596] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.905 [2024-12-13 05:52:16.718608] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.905 [2024-12-13 05:52:16.718615] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.905 [2024-12-13 05:52:16.718620] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:16.905 [2024-12-13 05:52:16.718635] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.905 qpair failed and we were unable to recover it. 00:36:16.905 [2024-12-13 05:52:16.728590] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.905 [2024-12-13 05:52:16.728649] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.905 [2024-12-13 05:52:16.728661] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.905 [2024-12-13 05:52:16.728668] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.905 [2024-12-13 05:52:16.728674] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:16.905 [2024-12-13 05:52:16.728688] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.905 qpair failed and we were unable to recover it. 00:36:16.905 [2024-12-13 05:52:16.738539] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.905 [2024-12-13 05:52:16.738591] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.905 [2024-12-13 05:52:16.738604] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.905 [2024-12-13 05:52:16.738610] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.905 [2024-12-13 05:52:16.738616] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:16.905 [2024-12-13 05:52:16.738633] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.905 qpair failed and we were unable to recover it. 
00:36:16.905 [2024-12-13 05:52:16.748640] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.905 [2024-12-13 05:52:16.748707] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.905 [2024-12-13 05:52:16.748721] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.905 [2024-12-13 05:52:16.748727] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.905 [2024-12-13 05:52:16.748732] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:16.905 [2024-12-13 05:52:16.748748] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.905 qpair failed and we were unable to recover it. 00:36:16.906 [2024-12-13 05:52:16.758585] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.906 [2024-12-13 05:52:16.758639] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.906 [2024-12-13 05:52:16.758651] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.906 [2024-12-13 05:52:16.758658] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.906 [2024-12-13 05:52:16.758663] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:16.906 [2024-12-13 05:52:16.758677] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.906 qpair failed and we were unable to recover it. 00:36:16.906 [2024-12-13 05:52:16.768630] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.906 [2024-12-13 05:52:16.768716] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.906 [2024-12-13 05:52:16.768728] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.906 [2024-12-13 05:52:16.768735] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.906 [2024-12-13 05:52:16.768740] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:16.906 [2024-12-13 05:52:16.768754] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.906 qpair failed and we were unable to recover it. 
00:36:16.906 [2024-12-13 05:52:16.778626] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.906 [2024-12-13 05:52:16.778681] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.906 [2024-12-13 05:52:16.778694] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.906 [2024-12-13 05:52:16.778700] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.906 [2024-12-13 05:52:16.778707] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:16.906 [2024-12-13 05:52:16.778721] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.906 qpair failed and we were unable to recover it. 00:36:16.906 [2024-12-13 05:52:16.788741] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.906 [2024-12-13 05:52:16.788796] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.906 [2024-12-13 05:52:16.788809] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.906 [2024-12-13 05:52:16.788815] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.906 [2024-12-13 05:52:16.788821] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:16.906 [2024-12-13 05:52:16.788836] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.906 qpair failed and we were unable to recover it. 00:36:16.906 [2024-12-13 05:52:16.798776] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.906 [2024-12-13 05:52:16.798832] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.906 [2024-12-13 05:52:16.798845] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.906 [2024-12-13 05:52:16.798852] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.906 [2024-12-13 05:52:16.798858] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:16.906 [2024-12-13 05:52:16.798872] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.906 qpair failed and we were unable to recover it. 
00:36:16.906 [2024-12-13 05:52:16.808848] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.906 [2024-12-13 05:52:16.808905] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.906 [2024-12-13 05:52:16.808917] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.906 [2024-12-13 05:52:16.808924] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.906 [2024-12-13 05:52:16.808929] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:16.906 [2024-12-13 05:52:16.808944] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.906 qpair failed and we were unable to recover it. 00:36:16.906 [2024-12-13 05:52:16.818831] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.906 [2024-12-13 05:52:16.818881] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.906 [2024-12-13 05:52:16.818893] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.906 [2024-12-13 05:52:16.818900] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.906 [2024-12-13 05:52:16.818906] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:16.906 [2024-12-13 05:52:16.818920] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.906 qpair failed and we were unable to recover it. 00:36:16.906 [2024-12-13 05:52:16.828769] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.906 [2024-12-13 05:52:16.828825] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.906 [2024-12-13 05:52:16.828843] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.906 [2024-12-13 05:52:16.828849] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.906 [2024-12-13 05:52:16.828855] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:16.906 [2024-12-13 05:52:16.828869] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.906 qpair failed and we were unable to recover it. 
00:36:16.906 [2024-12-13 05:52:16.838811] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.906 [2024-12-13 05:52:16.838908] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.906 [2024-12-13 05:52:16.838921] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.906 [2024-12-13 05:52:16.838927] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.906 [2024-12-13 05:52:16.838933] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:16.906 [2024-12-13 05:52:16.838947] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.906 qpair failed and we were unable to recover it. 00:36:16.906 [2024-12-13 05:52:16.848945] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.906 [2024-12-13 05:52:16.849008] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.906 [2024-12-13 05:52:16.849020] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.906 [2024-12-13 05:52:16.849026] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.906 [2024-12-13 05:52:16.849032] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:16.906 [2024-12-13 05:52:16.849046] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.906 qpair failed and we were unable to recover it. 00:36:16.906 [2024-12-13 05:52:16.858858] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.906 [2024-12-13 05:52:16.858922] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.906 [2024-12-13 05:52:16.858934] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.906 [2024-12-13 05:52:16.858940] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.906 [2024-12-13 05:52:16.858946] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:16.906 [2024-12-13 05:52:16.858960] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.906 qpair failed and we were unable to recover it. 
00:36:16.906 [2024-12-13 05:52:16.868966] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.906 [2024-12-13 05:52:16.869015] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.906 [2024-12-13 05:52:16.869028] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.906 [2024-12-13 05:52:16.869034] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.906 [2024-12-13 05:52:16.869040] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:16.906 [2024-12-13 05:52:16.869056] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.906 qpair failed and we were unable to recover it. 00:36:16.906 [2024-12-13 05:52:16.879001] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.906 [2024-12-13 05:52:16.879055] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.906 [2024-12-13 05:52:16.879067] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.906 [2024-12-13 05:52:16.879073] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.906 [2024-12-13 05:52:16.879079] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:16.906 [2024-12-13 05:52:16.879093] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.906 qpair failed and we were unable to recover it. 00:36:16.906 [2024-12-13 05:52:16.889022] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.906 [2024-12-13 05:52:16.889087] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.907 [2024-12-13 05:52:16.889099] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.907 [2024-12-13 05:52:16.889105] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.907 [2024-12-13 05:52:16.889110] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:16.907 [2024-12-13 05:52:16.889125] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.907 qpair failed and we were unable to recover it. 
00:36:16.907 [2024-12-13 05:52:16.899048] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.907 [2024-12-13 05:52:16.899097] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.907 [2024-12-13 05:52:16.899109] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.907 [2024-12-13 05:52:16.899115] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.907 [2024-12-13 05:52:16.899121] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:16.907 [2024-12-13 05:52:16.899135] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.907 qpair failed and we were unable to recover it. 00:36:16.907 [2024-12-13 05:52:16.909078] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.907 [2024-12-13 05:52:16.909130] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.907 [2024-12-13 05:52:16.909142] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.907 [2024-12-13 05:52:16.909148] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.907 [2024-12-13 05:52:16.909154] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:16.907 [2024-12-13 05:52:16.909167] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.907 qpair failed and we were unable to recover it. 00:36:16.907 [2024-12-13 05:52:16.919049] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.907 [2024-12-13 05:52:16.919103] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.907 [2024-12-13 05:52:16.919120] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.907 [2024-12-13 05:52:16.919127] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.907 [2024-12-13 05:52:16.919135] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:16.907 [2024-12-13 05:52:16.919157] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.907 qpair failed and we were unable to recover it. 
00:36:17.165 [2024-12-13 05:52:16.929148] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.165 [2024-12-13 05:52:16.929208] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.165 [2024-12-13 05:52:16.929225] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.165 [2024-12-13 05:52:16.929232] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.165 [2024-12-13 05:52:16.929237] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:17.165 [2024-12-13 05:52:16.929253] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:17.165 qpair failed and we were unable to recover it. 00:36:17.165 [2024-12-13 05:52:16.939177] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.165 [2024-12-13 05:52:16.939237] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.165 [2024-12-13 05:52:16.939250] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.165 [2024-12-13 05:52:16.939257] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.165 [2024-12-13 05:52:16.939262] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:17.165 [2024-12-13 05:52:16.939277] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:17.165 qpair failed and we were unable to recover it. 00:36:17.165 [2024-12-13 05:52:16.949193] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.165 [2024-12-13 05:52:16.949246] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.165 [2024-12-13 05:52:16.949259] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.165 [2024-12-13 05:52:16.949265] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.165 [2024-12-13 05:52:16.949271] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:17.165 [2024-12-13 05:52:16.949286] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:17.165 qpair failed and we were unable to recover it. 
00:36:17.165 [2024-12-13 05:52:16.959284] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.165 [2024-12-13 05:52:16.959391] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.165 [2024-12-13 05:52:16.959407] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.165 [2024-12-13 05:52:16.959413] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.166 [2024-12-13 05:52:16.959419] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:17.166 [2024-12-13 05:52:16.959434] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:17.166 qpair failed and we were unable to recover it. 00:36:17.166 [2024-12-13 05:52:16.969253] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.166 [2024-12-13 05:52:16.969308] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.166 [2024-12-13 05:52:16.969320] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.166 [2024-12-13 05:52:16.969326] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.166 [2024-12-13 05:52:16.969332] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:17.166 [2024-12-13 05:52:16.969347] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:17.166 qpair failed and we were unable to recover it. 00:36:17.166 [2024-12-13 05:52:16.979327] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.166 [2024-12-13 05:52:16.979385] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.166 [2024-12-13 05:52:16.979398] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.166 [2024-12-13 05:52:16.979404] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.166 [2024-12-13 05:52:16.979410] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:17.166 [2024-12-13 05:52:16.979424] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:17.166 qpair failed and we were unable to recover it. 
00:36:17.166 [2024-12-13 05:52:16.989366] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.166 [2024-12-13 05:52:16.989422] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.166 [2024-12-13 05:52:16.989435] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.166 [2024-12-13 05:52:16.989441] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.166 [2024-12-13 05:52:16.989451] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:17.166 [2024-12-13 05:52:16.989466] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:17.166 qpair failed and we were unable to recover it. 00:36:17.166 [2024-12-13 05:52:16.999331] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.166 [2024-12-13 05:52:16.999390] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.166 [2024-12-13 05:52:16.999402] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.166 [2024-12-13 05:52:16.999409] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.166 [2024-12-13 05:52:16.999417] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:17.166 [2024-12-13 05:52:16.999431] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:17.166 qpair failed and we were unable to recover it. 00:36:17.166 [2024-12-13 05:52:17.009385] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.166 [2024-12-13 05:52:17.009445] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.166 [2024-12-13 05:52:17.009461] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.166 [2024-12-13 05:52:17.009468] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.166 [2024-12-13 05:52:17.009474] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:17.166 [2024-12-13 05:52:17.009488] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:17.166 qpair failed and we were unable to recover it. 
00:36:17.166 [2024-12-13 05:52:17.019401] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.166 [2024-12-13 05:52:17.019456] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.166 [2024-12-13 05:52:17.019469] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.166 [2024-12-13 05:52:17.019475] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.166 [2024-12-13 05:52:17.019481] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:17.166 [2024-12-13 05:52:17.019495] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:17.166 qpair failed and we were unable to recover it. 00:36:17.166 [2024-12-13 05:52:17.029417] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.166 [2024-12-13 05:52:17.029468] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.166 [2024-12-13 05:52:17.029480] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.166 [2024-12-13 05:52:17.029487] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.166 [2024-12-13 05:52:17.029492] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:17.166 [2024-12-13 05:52:17.029506] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:17.166 qpair failed and we were unable to recover it. 00:36:17.166 [2024-12-13 05:52:17.039473] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.166 [2024-12-13 05:52:17.039578] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.166 [2024-12-13 05:52:17.039590] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.166 [2024-12-13 05:52:17.039596] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.166 [2024-12-13 05:52:17.039602] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:17.166 [2024-12-13 05:52:17.039616] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:17.166 qpair failed and we were unable to recover it. 
00:36:17.166 [2024-12-13 05:52:17.049421] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.166 [2024-12-13 05:52:17.049511] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.166 [2024-12-13 05:52:17.049524] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.166 [2024-12-13 05:52:17.049530] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.166 [2024-12-13 05:52:17.049536] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:17.166 [2024-12-13 05:52:17.049550] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:17.166 qpair failed and we were unable to recover it. 00:36:17.166 [2024-12-13 05:52:17.059509] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.166 [2024-12-13 05:52:17.059563] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.166 [2024-12-13 05:52:17.059576] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.166 [2024-12-13 05:52:17.059582] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.166 [2024-12-13 05:52:17.059588] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:17.166 [2024-12-13 05:52:17.059602] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:17.166 qpair failed and we were unable to recover it. 00:36:17.166 [2024-12-13 05:52:17.069583] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.166 [2024-12-13 05:52:17.069640] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.166 [2024-12-13 05:52:17.069652] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.166 [2024-12-13 05:52:17.069659] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.166 [2024-12-13 05:52:17.069664] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:17.166 [2024-12-13 05:52:17.069679] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:17.166 qpair failed and we were unable to recover it. 
00:36:17.166 [2024-12-13 05:52:17.079584] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.166 [2024-12-13 05:52:17.079638] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.166 [2024-12-13 05:52:17.079650] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.166 [2024-12-13 05:52:17.079657] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.166 [2024-12-13 05:52:17.079663] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:17.166 [2024-12-13 05:52:17.079677] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:17.166 qpair failed and we were unable to recover it. 00:36:17.166 [2024-12-13 05:52:17.089612] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.166 [2024-12-13 05:52:17.089672] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.166 [2024-12-13 05:52:17.089688] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.166 [2024-12-13 05:52:17.089694] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.166 [2024-12-13 05:52:17.089699] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:17.166 [2024-12-13 05:52:17.089713] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:17.166 qpair failed and we were unable to recover it. 00:36:17.166 [2024-12-13 05:52:17.099624] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.167 [2024-12-13 05:52:17.099679] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.167 [2024-12-13 05:52:17.099691] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.167 [2024-12-13 05:52:17.099697] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.167 [2024-12-13 05:52:17.099703] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:17.167 [2024-12-13 05:52:17.099718] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:17.167 qpair failed and we were unable to recover it. 
00:36:17.167 [2024-12-13 05:52:17.109700] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.167 [2024-12-13 05:52:17.109766] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.167 [2024-12-13 05:52:17.109779] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.167 [2024-12-13 05:52:17.109785] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.167 [2024-12-13 05:52:17.109791] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:17.167 [2024-12-13 05:52:17.109805] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:17.167 qpair failed and we were unable to recover it. 00:36:17.167 [2024-12-13 05:52:17.119679] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.167 [2024-12-13 05:52:17.119734] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.167 [2024-12-13 05:52:17.119747] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.167 [2024-12-13 05:52:17.119753] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.167 [2024-12-13 05:52:17.119759] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:17.167 [2024-12-13 05:52:17.119773] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:17.167 qpair failed and we were unable to recover it. 00:36:17.167 [2024-12-13 05:52:17.129759] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.167 [2024-12-13 05:52:17.129831] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.167 [2024-12-13 05:52:17.129843] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.167 [2024-12-13 05:52:17.129852] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.167 [2024-12-13 05:52:17.129858] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:17.167 [2024-12-13 05:52:17.129873] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:17.167 qpair failed and we were unable to recover it. 
00:36:17.167 [2024-12-13 05:52:17.139744] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.167 [2024-12-13 05:52:17.139827] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.167 [2024-12-13 05:52:17.139839] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.167 [2024-12-13 05:52:17.139845] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.167 [2024-12-13 05:52:17.139851] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:17.167 [2024-12-13 05:52:17.139865] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:17.167 qpair failed and we were unable to recover it. 00:36:17.167 [2024-12-13 05:52:17.149767] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.167 [2024-12-13 05:52:17.149837] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.167 [2024-12-13 05:52:17.149849] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.167 [2024-12-13 05:52:17.149855] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.167 [2024-12-13 05:52:17.149861] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:17.167 [2024-12-13 05:52:17.149875] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:17.167 qpair failed and we were unable to recover it. 00:36:17.167 [2024-12-13 05:52:17.159780] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.167 [2024-12-13 05:52:17.159855] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.167 [2024-12-13 05:52:17.159867] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.167 [2024-12-13 05:52:17.159874] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.167 [2024-12-13 05:52:17.159879] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:17.167 [2024-12-13 05:52:17.159893] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:17.167 qpair failed and we were unable to recover it. 
00:36:17.167 [2024-12-13 05:52:17.169827] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.167 [2024-12-13 05:52:17.169878] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.167 [2024-12-13 05:52:17.169890] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.167 [2024-12-13 05:52:17.169897] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.167 [2024-12-13 05:52:17.169902] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:17.167 [2024-12-13 05:52:17.169916] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:17.167 qpair failed and we were unable to recover it. 00:36:17.425 [2024-12-13 05:52:17.179853] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.425 [2024-12-13 05:52:17.179912] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.425 [2024-12-13 05:52:17.179930] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.425 [2024-12-13 05:52:17.179938] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.425 [2024-12-13 05:52:17.179946] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:17.425 [2024-12-13 05:52:17.179965] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:17.425 qpair failed and we were unable to recover it. 00:36:17.425 [2024-12-13 05:52:17.189916] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.425 [2024-12-13 05:52:17.189971] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.425 [2024-12-13 05:52:17.189987] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.425 [2024-12-13 05:52:17.189994] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.425 [2024-12-13 05:52:17.190000] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:17.425 [2024-12-13 05:52:17.190017] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:17.425 qpair failed and we were unable to recover it. 
00:36:17.425 [2024-12-13 05:52:17.199849] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.425 [2024-12-13 05:52:17.199908] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.425 [2024-12-13 05:52:17.199921] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.425 [2024-12-13 05:52:17.199927] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.425 [2024-12-13 05:52:17.199933] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:17.425 [2024-12-13 05:52:17.199948] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:17.425 qpair failed and we were unable to recover it. 00:36:17.425 [2024-12-13 05:52:17.209958] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.425 [2024-12-13 05:52:17.210011] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.425 [2024-12-13 05:52:17.210025] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.425 [2024-12-13 05:52:17.210031] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.425 [2024-12-13 05:52:17.210038] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:17.425 [2024-12-13 05:52:17.210052] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:17.425 qpair failed and we were unable to recover it. 00:36:17.425 [2024-12-13 05:52:17.219989] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.425 [2024-12-13 05:52:17.220045] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.425 [2024-12-13 05:52:17.220058] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.425 [2024-12-13 05:52:17.220064] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.425 [2024-12-13 05:52:17.220069] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:17.425 [2024-12-13 05:52:17.220084] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:17.425 qpair failed and we were unable to recover it. 
00:36:17.425 [2024-12-13 05:52:17.230006] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.425 [2024-12-13 05:52:17.230052] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.425 [2024-12-13 05:52:17.230065] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.425 [2024-12-13 05:52:17.230071] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.425 [2024-12-13 05:52:17.230076] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:17.425 [2024-12-13 05:52:17.230090] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:17.425 qpair failed and we were unable to recover it. 00:36:17.425 [2024-12-13 05:52:17.240064] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.425 [2024-12-13 05:52:17.240120] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.425 [2024-12-13 05:52:17.240133] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.425 [2024-12-13 05:52:17.240139] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.425 [2024-12-13 05:52:17.240145] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:17.425 [2024-12-13 05:52:17.240159] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:17.425 qpair failed and we were unable to recover it. 00:36:17.425 [2024-12-13 05:52:17.250050] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.425 [2024-12-13 05:52:17.250102] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.425 [2024-12-13 05:52:17.250114] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.425 [2024-12-13 05:52:17.250121] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.425 [2024-12-13 05:52:17.250127] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:17.425 [2024-12-13 05:52:17.250141] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:17.426 qpair failed and we were unable to recover it. 
00:36:17.426 [2024-12-13 05:52:17.260075] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.426 [2024-12-13 05:52:17.260132] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.426 [2024-12-13 05:52:17.260145] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.426 [2024-12-13 05:52:17.260154] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.426 [2024-12-13 05:52:17.260160] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:17.426 [2024-12-13 05:52:17.260174] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:17.426 qpair failed and we were unable to recover it. 00:36:17.426 [2024-12-13 05:52:17.270108] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.426 [2024-12-13 05:52:17.270166] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.426 [2024-12-13 05:52:17.270179] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.426 [2024-12-13 05:52:17.270186] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.426 [2024-12-13 05:52:17.270191] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:17.426 [2024-12-13 05:52:17.270205] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:17.426 qpair failed and we were unable to recover it. 00:36:17.426 [2024-12-13 05:52:17.280065] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.426 [2024-12-13 05:52:17.280122] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.426 [2024-12-13 05:52:17.280134] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.426 [2024-12-13 05:52:17.280140] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.426 [2024-12-13 05:52:17.280146] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:17.426 [2024-12-13 05:52:17.280160] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:17.426 qpair failed and we were unable to recover it. 
00:36:17.426 [2024-12-13 05:52:17.290178] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.426 [2024-12-13 05:52:17.290234] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.426 [2024-12-13 05:52:17.290247] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.426 [2024-12-13 05:52:17.290253] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.426 [2024-12-13 05:52:17.290258] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:17.426 [2024-12-13 05:52:17.290272] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:17.426 qpair failed and we were unable to recover it. 00:36:17.426 [2024-12-13 05:52:17.300195] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.426 [2024-12-13 05:52:17.300242] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.426 [2024-12-13 05:52:17.300254] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.426 [2024-12-13 05:52:17.300261] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.426 [2024-12-13 05:52:17.300266] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:17.426 [2024-12-13 05:52:17.300284] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:17.426 qpair failed and we were unable to recover it. 00:36:17.426 [2024-12-13 05:52:17.310222] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.426 [2024-12-13 05:52:17.310274] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.426 [2024-12-13 05:52:17.310286] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.426 [2024-12-13 05:52:17.310292] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.426 [2024-12-13 05:52:17.310298] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:17.426 [2024-12-13 05:52:17.310312] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:17.426 qpair failed and we were unable to recover it. 
00:36:17.948 [2024-12-13 05:52:17.952046] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.948 [2024-12-13 05:52:17.952101] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.948 [2024-12-13 05:52:17.952116] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.948 [2024-12-13 05:52:17.952123] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.949 [2024-12-13 05:52:17.952128] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:17.949 [2024-12-13 05:52:17.952142] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:17.949 qpair failed and we were unable to recover it. 00:36:18.207 [2024-12-13 05:52:17.962100] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.207 [2024-12-13 05:52:17.962168] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.207 [2024-12-13 05:52:17.962185] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.207 [2024-12-13 05:52:17.962192] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.207 [2024-12-13 05:52:17.962198] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:18.207 [2024-12-13 05:52:17.962214] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:18.207 qpair failed and we were unable to recover it. 00:36:18.207 [2024-12-13 05:52:17.972078] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.207 [2024-12-13 05:52:17.972137] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.207 [2024-12-13 05:52:17.972153] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.207 [2024-12-13 05:52:17.972160] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.207 [2024-12-13 05:52:17.972166] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:18.207 [2024-12-13 05:52:17.972182] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:18.207 qpair failed and we were unable to recover it. 
00:36:18.208 [2024-12-13 05:52:17.982143] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.208 [2024-12-13 05:52:17.982208] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.208 [2024-12-13 05:52:17.982221] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.208 [2024-12-13 05:52:17.982228] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.208 [2024-12-13 05:52:17.982233] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:18.208 [2024-12-13 05:52:17.982248] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:18.208 qpair failed and we were unable to recover it. 00:36:18.208 [2024-12-13 05:52:17.992243] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.208 [2024-12-13 05:52:17.992334] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.208 [2024-12-13 05:52:17.992347] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.208 [2024-12-13 05:52:17.992354] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.208 [2024-12-13 05:52:17.992363] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:18.208 [2024-12-13 05:52:17.992377] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:18.208 qpair failed and we were unable to recover it. 00:36:18.208 [2024-12-13 05:52:18.002149] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.208 [2024-12-13 05:52:18.002212] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.208 [2024-12-13 05:52:18.002225] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.208 [2024-12-13 05:52:18.002231] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.208 [2024-12-13 05:52:18.002237] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:18.208 [2024-12-13 05:52:18.002252] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:18.208 qpair failed and we were unable to recover it. 
00:36:18.208 [2024-12-13 05:52:18.012263] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.208 [2024-12-13 05:52:18.012317] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.208 [2024-12-13 05:52:18.012331] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.208 [2024-12-13 05:52:18.012337] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.208 [2024-12-13 05:52:18.012343] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:18.208 [2024-12-13 05:52:18.012357] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:18.208 qpair failed and we were unable to recover it. 00:36:18.208 [2024-12-13 05:52:18.022287] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.208 [2024-12-13 05:52:18.022333] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.208 [2024-12-13 05:52:18.022346] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.208 [2024-12-13 05:52:18.022352] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.208 [2024-12-13 05:52:18.022358] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:18.208 [2024-12-13 05:52:18.022373] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:18.208 qpair failed and we were unable to recover it. 00:36:18.208 [2024-12-13 05:52:18.032312] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.208 [2024-12-13 05:52:18.032407] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.208 [2024-12-13 05:52:18.032420] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.208 [2024-12-13 05:52:18.032426] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.208 [2024-12-13 05:52:18.032431] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:18.208 [2024-12-13 05:52:18.032445] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:18.208 qpair failed and we were unable to recover it. 
00:36:18.208 [2024-12-13 05:52:18.042310] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.208 [2024-12-13 05:52:18.042368] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.208 [2024-12-13 05:52:18.042381] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.208 [2024-12-13 05:52:18.042388] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.208 [2024-12-13 05:52:18.042394] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:18.208 [2024-12-13 05:52:18.042408] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:18.208 qpair failed and we were unable to recover it. 00:36:18.208 [2024-12-13 05:52:18.052366] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.208 [2024-12-13 05:52:18.052415] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.208 [2024-12-13 05:52:18.052428] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.208 [2024-12-13 05:52:18.052434] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.208 [2024-12-13 05:52:18.052440] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:18.208 [2024-12-13 05:52:18.052457] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:18.208 qpair failed and we were unable to recover it. 00:36:18.208 [2024-12-13 05:52:18.062393] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.208 [2024-12-13 05:52:18.062446] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.208 [2024-12-13 05:52:18.062464] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.208 [2024-12-13 05:52:18.062470] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.208 [2024-12-13 05:52:18.062476] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:18.208 [2024-12-13 05:52:18.062490] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:18.208 qpair failed and we were unable to recover it. 
00:36:18.208 [2024-12-13 05:52:18.072468] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.208 [2024-12-13 05:52:18.072526] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.208 [2024-12-13 05:52:18.072538] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.208 [2024-12-13 05:52:18.072544] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.208 [2024-12-13 05:52:18.072550] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:18.208 [2024-12-13 05:52:18.072564] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:18.208 qpair failed and we were unable to recover it. 00:36:18.208 [2024-12-13 05:52:18.082508] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.208 [2024-12-13 05:52:18.082566] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.208 [2024-12-13 05:52:18.082582] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.208 [2024-12-13 05:52:18.082588] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.208 [2024-12-13 05:52:18.082594] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:18.208 [2024-12-13 05:52:18.082608] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:18.208 qpair failed and we were unable to recover it. 00:36:18.208 [2024-12-13 05:52:18.092456] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.208 [2024-12-13 05:52:18.092514] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.208 [2024-12-13 05:52:18.092526] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.208 [2024-12-13 05:52:18.092532] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.208 [2024-12-13 05:52:18.092538] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:18.208 [2024-12-13 05:52:18.092553] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:18.208 qpair failed and we were unable to recover it. 
00:36:18.208 [2024-12-13 05:52:18.102501] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.208 [2024-12-13 05:52:18.102555] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.208 [2024-12-13 05:52:18.102567] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.208 [2024-12-13 05:52:18.102573] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.208 [2024-12-13 05:52:18.102579] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:18.208 [2024-12-13 05:52:18.102592] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:18.208 qpair failed and we were unable to recover it. 00:36:18.208 [2024-12-13 05:52:18.112470] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.209 [2024-12-13 05:52:18.112524] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.209 [2024-12-13 05:52:18.112537] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.209 [2024-12-13 05:52:18.112543] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.209 [2024-12-13 05:52:18.112549] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:18.209 [2024-12-13 05:52:18.112563] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:18.209 qpair failed and we were unable to recover it. 00:36:18.209 [2024-12-13 05:52:18.122598] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.209 [2024-12-13 05:52:18.122696] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.209 [2024-12-13 05:52:18.122708] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.209 [2024-12-13 05:52:18.122715] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.209 [2024-12-13 05:52:18.122723] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:18.209 [2024-12-13 05:52:18.122738] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:18.209 qpair failed and we were unable to recover it. 
00:36:18.209 [2024-12-13 05:52:18.132596] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.209 [2024-12-13 05:52:18.132675] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.209 [2024-12-13 05:52:18.132688] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.209 [2024-12-13 05:52:18.132694] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.209 [2024-12-13 05:52:18.132700] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:18.209 [2024-12-13 05:52:18.132714] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:18.209 qpair failed and we were unable to recover it. 00:36:18.209 [2024-12-13 05:52:18.142592] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.209 [2024-12-13 05:52:18.142646] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.209 [2024-12-13 05:52:18.142659] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.209 [2024-12-13 05:52:18.142665] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.209 [2024-12-13 05:52:18.142671] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:18.209 [2024-12-13 05:52:18.142685] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:18.209 qpair failed and we were unable to recover it. 00:36:18.209 [2024-12-13 05:52:18.152608] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.209 [2024-12-13 05:52:18.152659] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.209 [2024-12-13 05:52:18.152671] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.209 [2024-12-13 05:52:18.152677] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.209 [2024-12-13 05:52:18.152682] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:18.209 [2024-12-13 05:52:18.152697] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:18.209 qpair failed and we were unable to recover it. 
00:36:18.209 [2024-12-13 05:52:18.162632] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.209 [2024-12-13 05:52:18.162685] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.209 [2024-12-13 05:52:18.162697] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.209 [2024-12-13 05:52:18.162703] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.209 [2024-12-13 05:52:18.162708] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:18.209 [2024-12-13 05:52:18.162722] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:18.209 qpair failed and we were unable to recover it. 00:36:18.209 [2024-12-13 05:52:18.172661] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.209 [2024-12-13 05:52:18.172718] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.209 [2024-12-13 05:52:18.172730] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.209 [2024-12-13 05:52:18.172736] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.209 [2024-12-13 05:52:18.172742] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:18.209 [2024-12-13 05:52:18.172756] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:18.209 qpair failed and we were unable to recover it. 00:36:18.209 [2024-12-13 05:52:18.182650] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.209 [2024-12-13 05:52:18.182704] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.209 [2024-12-13 05:52:18.182716] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.209 [2024-12-13 05:52:18.182722] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.209 [2024-12-13 05:52:18.182728] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:18.209 [2024-12-13 05:52:18.182742] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:18.209 qpair failed and we were unable to recover it. 
00:36:18.209 [2024-12-13 05:52:18.192736] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.209 [2024-12-13 05:52:18.192792] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.209 [2024-12-13 05:52:18.192804] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.209 [2024-12-13 05:52:18.192810] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.209 [2024-12-13 05:52:18.192816] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:18.209 [2024-12-13 05:52:18.192829] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:18.209 qpair failed and we were unable to recover it. 00:36:18.209 [2024-12-13 05:52:18.202792] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.209 [2024-12-13 05:52:18.202847] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.209 [2024-12-13 05:52:18.202859] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.209 [2024-12-13 05:52:18.202865] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.209 [2024-12-13 05:52:18.202871] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:18.209 [2024-12-13 05:52:18.202885] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:18.209 qpair failed and we were unable to recover it. 00:36:18.209 [2024-12-13 05:52:18.212756] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.209 [2024-12-13 05:52:18.212809] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.209 [2024-12-13 05:52:18.212825] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.209 [2024-12-13 05:52:18.212831] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.209 [2024-12-13 05:52:18.212837] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:18.209 [2024-12-13 05:52:18.212850] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:18.209 qpair failed and we were unable to recover it. 
00:36:18.468 [2024-12-13 05:52:18.222895] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.468 [2024-12-13 05:52:18.223000] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.468 [2024-12-13 05:52:18.223018] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.468 [2024-12-13 05:52:18.223026] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.468 [2024-12-13 05:52:18.223033] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:18.468 [2024-12-13 05:52:18.223051] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:18.468 qpair failed and we were unable to recover it. 00:36:18.468 [2024-12-13 05:52:18.232875] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.468 [2024-12-13 05:52:18.232926] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.468 [2024-12-13 05:52:18.232942] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.468 [2024-12-13 05:52:18.232950] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.468 [2024-12-13 05:52:18.232956] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:18.468 [2024-12-13 05:52:18.232972] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:18.468 qpair failed and we were unable to recover it. 00:36:18.468 [2024-12-13 05:52:18.242843] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.468 [2024-12-13 05:52:18.242905] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.468 [2024-12-13 05:52:18.242919] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.468 [2024-12-13 05:52:18.242925] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.468 [2024-12-13 05:52:18.242931] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:18.468 [2024-12-13 05:52:18.242946] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:18.468 qpair failed and we were unable to recover it. 
00:36:18.468 [2024-12-13 05:52:18.252993] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.468 [2024-12-13 05:52:18.253052] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.468 [2024-12-13 05:52:18.253065] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.468 [2024-12-13 05:52:18.253076] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.468 [2024-12-13 05:52:18.253082] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:18.468 [2024-12-13 05:52:18.253097] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:18.468 qpair failed and we were unable to recover it. 00:36:18.468 [2024-12-13 05:52:18.262998] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.468 [2024-12-13 05:52:18.263052] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.468 [2024-12-13 05:52:18.263065] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.468 [2024-12-13 05:52:18.263071] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.468 [2024-12-13 05:52:18.263077] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:18.468 [2024-12-13 05:52:18.263091] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:18.468 qpair failed and we were unable to recover it. 00:36:18.468 [2024-12-13 05:52:18.272992] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.468 [2024-12-13 05:52:18.273070] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.468 [2024-12-13 05:52:18.273082] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.468 [2024-12-13 05:52:18.273088] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.468 [2024-12-13 05:52:18.273093] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:18.469 [2024-12-13 05:52:18.273107] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:18.469 qpair failed and we were unable to recover it. 
00:36:18.469 [2024-12-13 05:52:18.283021] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.469 [2024-12-13 05:52:18.283076] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.469 [2024-12-13 05:52:18.283089] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.469 [2024-12-13 05:52:18.283095] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.469 [2024-12-13 05:52:18.283100] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:18.469 [2024-12-13 05:52:18.283114] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:18.469 qpair failed and we were unable to recover it. 00:36:18.469 [2024-12-13 05:52:18.293048] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.469 [2024-12-13 05:52:18.293148] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.469 [2024-12-13 05:52:18.293161] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.469 [2024-12-13 05:52:18.293167] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.469 [2024-12-13 05:52:18.293173] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:18.469 [2024-12-13 05:52:18.293186] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:18.469 qpair failed and we were unable to recover it. 00:36:18.469 [2024-12-13 05:52:18.303090] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.469 [2024-12-13 05:52:18.303150] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.469 [2024-12-13 05:52:18.303162] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.469 [2024-12-13 05:52:18.303168] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.469 [2024-12-13 05:52:18.303174] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:18.469 [2024-12-13 05:52:18.303188] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:18.469 qpair failed and we were unable to recover it. 
00:36:18.469 [2024-12-13 05:52:18.313019] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.469 [2024-12-13 05:52:18.313072] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.469 [2024-12-13 05:52:18.313084] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.469 [2024-12-13 05:52:18.313091] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.469 [2024-12-13 05:52:18.313097] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:18.469 [2024-12-13 05:52:18.313110] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:18.469 qpair failed and we were unable to recover it. 00:36:18.469 [2024-12-13 05:52:18.323129] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.469 [2024-12-13 05:52:18.323191] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.469 [2024-12-13 05:52:18.323204] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.469 [2024-12-13 05:52:18.323210] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.469 [2024-12-13 05:52:18.323216] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:18.469 [2024-12-13 05:52:18.323230] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:18.469 qpair failed and we were unable to recover it. 00:36:18.469 [2024-12-13 05:52:18.333161] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.469 [2024-12-13 05:52:18.333214] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.469 [2024-12-13 05:52:18.333227] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.469 [2024-12-13 05:52:18.333233] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.469 [2024-12-13 05:52:18.333239] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:18.469 [2024-12-13 05:52:18.333253] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:18.469 qpair failed and we were unable to recover it. 
00:36:18.469 [2024-12-13 05:52:18.343225] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.469 [2024-12-13 05:52:18.343288] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.469 [2024-12-13 05:52:18.343300] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.469 [2024-12-13 05:52:18.343307] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.469 [2024-12-13 05:52:18.343312] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:18.469 [2024-12-13 05:52:18.343326] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:18.469 qpair failed and we were unable to recover it. 00:36:18.469 [2024-12-13 05:52:18.353211] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.469 [2024-12-13 05:52:18.353265] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.469 [2024-12-13 05:52:18.353278] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.469 [2024-12-13 05:52:18.353284] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.469 [2024-12-13 05:52:18.353291] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:18.469 [2024-12-13 05:52:18.353304] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:18.469 qpair failed and we were unable to recover it. 00:36:18.469 [2024-12-13 05:52:18.363249] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.469 [2024-12-13 05:52:18.363304] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.469 [2024-12-13 05:52:18.363317] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.469 [2024-12-13 05:52:18.363323] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.469 [2024-12-13 05:52:18.363329] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:18.469 [2024-12-13 05:52:18.363343] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:18.469 qpair failed and we were unable to recover it. 
00:36:18.469 [2024-12-13 05:52:18.373274] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.469 [2024-12-13 05:52:18.373330] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.469 [2024-12-13 05:52:18.373343] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.469 [2024-12-13 05:52:18.373349] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.469 [2024-12-13 05:52:18.373355] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:18.469 [2024-12-13 05:52:18.373369] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:18.469 qpair failed and we were unable to recover it. 00:36:18.469 [2024-12-13 05:52:18.383231] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.469 [2024-12-13 05:52:18.383287] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.469 [2024-12-13 05:52:18.383299] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.469 [2024-12-13 05:52:18.383308] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.469 [2024-12-13 05:52:18.383314] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:18.469 [2024-12-13 05:52:18.383328] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:18.469 qpair failed and we were unable to recover it. 00:36:18.469 [2024-12-13 05:52:18.393369] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.469 [2024-12-13 05:52:18.393437] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.469 [2024-12-13 05:52:18.393453] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.469 [2024-12-13 05:52:18.393459] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.469 [2024-12-13 05:52:18.393465] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:18.469 [2024-12-13 05:52:18.393479] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:18.469 qpair failed and we were unable to recover it. 
00:36:18.469 [2024-12-13 05:52:18.403403] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.469 [2024-12-13 05:52:18.403505] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.469 [2024-12-13 05:52:18.403518] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.469 [2024-12-13 05:52:18.403524] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.469 [2024-12-13 05:52:18.403529] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:18.469 [2024-12-13 05:52:18.403544] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:18.469 qpair failed and we were unable to recover it. 00:36:18.469 [2024-12-13 05:52:18.413389] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.470 [2024-12-13 05:52:18.413444] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.470 [2024-12-13 05:52:18.413459] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.470 [2024-12-13 05:52:18.413466] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.470 [2024-12-13 05:52:18.413471] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:18.470 [2024-12-13 05:52:18.413485] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:18.470 qpair failed and we were unable to recover it. 00:36:18.470 [2024-12-13 05:52:18.423464] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.470 [2024-12-13 05:52:18.423520] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.470 [2024-12-13 05:52:18.423532] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.470 [2024-12-13 05:52:18.423539] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.470 [2024-12-13 05:52:18.423544] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:18.470 [2024-12-13 05:52:18.423562] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:18.470 qpair failed and we were unable to recover it. 
00:36:18.470 [2024-12-13 05:52:18.433435] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.470 [2024-12-13 05:52:18.433494] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.470 [2024-12-13 05:52:18.433506] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.470 [2024-12-13 05:52:18.433513] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.470 [2024-12-13 05:52:18.433518] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:18.470 [2024-12-13 05:52:18.433533] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:18.470 qpair failed and we were unable to recover it. 00:36:18.470 [2024-12-13 05:52:18.443473] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.470 [2024-12-13 05:52:18.443531] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.470 [2024-12-13 05:52:18.443544] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.470 [2024-12-13 05:52:18.443550] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.470 [2024-12-13 05:52:18.443556] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:18.470 [2024-12-13 05:52:18.443570] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:18.470 qpair failed and we were unable to recover it. 00:36:18.470 [2024-12-13 05:52:18.453497] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.470 [2024-12-13 05:52:18.453550] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.470 [2024-12-13 05:52:18.453562] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.470 [2024-12-13 05:52:18.453569] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.470 [2024-12-13 05:52:18.453574] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:18.470 [2024-12-13 05:52:18.453588] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:18.470 qpair failed and we were unable to recover it. 
00:36:19.251 [2024-12-13 05:52:19.125466] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:19.251 [2024-12-13 05:52:19.125518] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:19.251 [2024-12-13 05:52:19.125531] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:19.251 [2024-12-13 05:52:19.125537] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:19.251 [2024-12-13 05:52:19.125543] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:19.251 [2024-12-13 05:52:19.125557] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:19.251 qpair failed and we were unable to recover it. 00:36:19.252 [2024-12-13 05:52:19.135437] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:19.252 [2024-12-13 05:52:19.135521] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:19.252 [2024-12-13 05:52:19.135534] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:19.252 [2024-12-13 05:52:19.135541] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:19.252 [2024-12-13 05:52:19.135546] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:19.252 [2024-12-13 05:52:19.135560] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:19.252 qpair failed and we were unable to recover it. 00:36:19.252 [2024-12-13 05:52:19.145479] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:19.252 [2024-12-13 05:52:19.145531] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:19.252 [2024-12-13 05:52:19.145544] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:19.252 [2024-12-13 05:52:19.145550] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:19.252 [2024-12-13 05:52:19.145556] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:19.252 [2024-12-13 05:52:19.145570] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:19.252 qpair failed and we were unable to recover it. 
00:36:19.252 [2024-12-13 05:52:19.155514] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:19.252 [2024-12-13 05:52:19.155569] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:19.252 [2024-12-13 05:52:19.155581] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:19.252 [2024-12-13 05:52:19.155588] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:19.252 [2024-12-13 05:52:19.155594] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:19.252 [2024-12-13 05:52:19.155608] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:19.252 qpair failed and we were unable to recover it. 00:36:19.252 [2024-12-13 05:52:19.165536] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:19.252 [2024-12-13 05:52:19.165589] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:19.252 [2024-12-13 05:52:19.165601] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:19.252 [2024-12-13 05:52:19.165607] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:19.252 [2024-12-13 05:52:19.165613] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:19.252 [2024-12-13 05:52:19.165627] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:19.252 qpair failed and we were unable to recover it. 00:36:19.252 [2024-12-13 05:52:19.175584] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:19.252 [2024-12-13 05:52:19.175633] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:19.252 [2024-12-13 05:52:19.175645] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:19.252 [2024-12-13 05:52:19.175651] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:19.252 [2024-12-13 05:52:19.175657] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:19.252 [2024-12-13 05:52:19.175671] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:19.252 qpair failed and we were unable to recover it. 
00:36:19.252 [2024-12-13 05:52:19.185552] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:19.252 [2024-12-13 05:52:19.185602] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:19.252 [2024-12-13 05:52:19.185614] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:19.252 [2024-12-13 05:52:19.185620] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:19.252 [2024-12-13 05:52:19.185626] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:19.252 [2024-12-13 05:52:19.185640] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:19.252 qpair failed and we were unable to recover it. 00:36:19.252 [2024-12-13 05:52:19.195611] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:19.252 [2024-12-13 05:52:19.195661] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:19.252 [2024-12-13 05:52:19.195673] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:19.252 [2024-12-13 05:52:19.195679] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:19.252 [2024-12-13 05:52:19.195685] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:19.252 [2024-12-13 05:52:19.195699] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:19.252 qpair failed and we were unable to recover it. 00:36:19.252 [2024-12-13 05:52:19.205657] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:19.252 [2024-12-13 05:52:19.205716] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:19.252 [2024-12-13 05:52:19.205731] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:19.252 [2024-12-13 05:52:19.205737] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:19.252 [2024-12-13 05:52:19.205743] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:19.252 [2024-12-13 05:52:19.205757] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:19.252 qpair failed and we were unable to recover it. 
00:36:19.252 [2024-12-13 05:52:19.215675] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:19.252 [2024-12-13 05:52:19.215782] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:19.252 [2024-12-13 05:52:19.215794] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:19.252 [2024-12-13 05:52:19.215801] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:19.252 [2024-12-13 05:52:19.215807] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:19.252 [2024-12-13 05:52:19.215821] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:19.252 qpair failed and we were unable to recover it. 00:36:19.252 [2024-12-13 05:52:19.225734] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:19.252 [2024-12-13 05:52:19.225787] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:19.252 [2024-12-13 05:52:19.225799] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:19.252 [2024-12-13 05:52:19.225805] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:19.252 [2024-12-13 05:52:19.225812] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:19.252 [2024-12-13 05:52:19.225825] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:19.252 qpair failed and we were unable to recover it. 00:36:19.252 [2024-12-13 05:52:19.235745] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:19.252 [2024-12-13 05:52:19.235802] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:19.252 [2024-12-13 05:52:19.235814] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:19.252 [2024-12-13 05:52:19.235820] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:19.252 [2024-12-13 05:52:19.235826] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:19.252 [2024-12-13 05:52:19.235841] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:19.252 qpair failed and we were unable to recover it. 
00:36:19.252 [2024-12-13 05:52:19.245753] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:19.252 [2024-12-13 05:52:19.245808] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:19.252 [2024-12-13 05:52:19.245820] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:19.252 [2024-12-13 05:52:19.245827] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:19.252 [2024-12-13 05:52:19.245835] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:19.252 [2024-12-13 05:52:19.245849] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:19.252 qpair failed and we were unable to recover it. 00:36:19.252 [2024-12-13 05:52:19.255809] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:19.252 [2024-12-13 05:52:19.255891] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:19.252 [2024-12-13 05:52:19.255904] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:19.252 [2024-12-13 05:52:19.255910] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:19.252 [2024-12-13 05:52:19.255916] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:19.252 [2024-12-13 05:52:19.255930] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:19.252 qpair failed and we were unable to recover it. 00:36:19.511 [2024-12-13 05:52:19.265812] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:19.511 [2024-12-13 05:52:19.265865] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:19.511 [2024-12-13 05:52:19.265882] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:19.511 [2024-12-13 05:52:19.265889] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:19.511 [2024-12-13 05:52:19.265895] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:19.511 [2024-12-13 05:52:19.265912] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:19.511 qpair failed and we were unable to recover it. 
00:36:19.511 [2024-12-13 05:52:19.275828] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:19.511 [2024-12-13 05:52:19.275902] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:19.511 [2024-12-13 05:52:19.275918] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:19.511 [2024-12-13 05:52:19.275925] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:19.511 [2024-12-13 05:52:19.275931] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:19.511 [2024-12-13 05:52:19.275947] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:19.511 qpair failed and we were unable to recover it. 00:36:19.511 [2024-12-13 05:52:19.285888] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:19.511 [2024-12-13 05:52:19.285962] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:19.511 [2024-12-13 05:52:19.285976] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:19.511 [2024-12-13 05:52:19.285982] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:19.511 [2024-12-13 05:52:19.285988] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:19.511 [2024-12-13 05:52:19.286002] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:19.511 qpair failed and we were unable to recover it. 00:36:19.511 [2024-12-13 05:52:19.295896] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:19.511 [2024-12-13 05:52:19.295958] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:19.511 [2024-12-13 05:52:19.295971] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:19.511 [2024-12-13 05:52:19.295978] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:19.511 [2024-12-13 05:52:19.295984] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:19.511 [2024-12-13 05:52:19.295999] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:19.511 qpair failed and we were unable to recover it. 
00:36:19.511 [2024-12-13 05:52:19.305880] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:19.511 [2024-12-13 05:52:19.305937] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:19.511 [2024-12-13 05:52:19.305956] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:19.511 [2024-12-13 05:52:19.305963] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:19.511 [2024-12-13 05:52:19.305970] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:19.511 [2024-12-13 05:52:19.305989] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:19.511 qpair failed and we were unable to recover it. 00:36:19.511 [2024-12-13 05:52:19.315929] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:19.511 [2024-12-13 05:52:19.315980] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:19.511 [2024-12-13 05:52:19.315994] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:19.511 [2024-12-13 05:52:19.316000] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:19.511 [2024-12-13 05:52:19.316006] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:19.511 [2024-12-13 05:52:19.316020] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:19.511 qpair failed and we were unable to recover it. 00:36:19.511 [2024-12-13 05:52:19.325998] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:19.511 [2024-12-13 05:52:19.326052] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:19.511 [2024-12-13 05:52:19.326065] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:19.511 [2024-12-13 05:52:19.326071] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:19.511 [2024-12-13 05:52:19.326077] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:19.511 [2024-12-13 05:52:19.326092] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:19.511 qpair failed and we were unable to recover it. 
00:36:19.511 [2024-12-13 05:52:19.336012] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:19.511 [2024-12-13 05:52:19.336069] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:19.511 [2024-12-13 05:52:19.336084] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:19.511 [2024-12-13 05:52:19.336091] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:19.511 [2024-12-13 05:52:19.336096] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:19.512 [2024-12-13 05:52:19.336111] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:19.512 qpair failed and we were unable to recover it. 00:36:19.512 [2024-12-13 05:52:19.345990] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:19.512 [2024-12-13 05:52:19.346041] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:19.512 [2024-12-13 05:52:19.346054] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:19.512 [2024-12-13 05:52:19.346060] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:19.512 [2024-12-13 05:52:19.346066] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:19.512 [2024-12-13 05:52:19.346080] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:19.512 qpair failed and we were unable to recover it. 00:36:19.512 [2024-12-13 05:52:19.356048] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:19.512 [2024-12-13 05:52:19.356101] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:19.512 [2024-12-13 05:52:19.356113] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:19.512 [2024-12-13 05:52:19.356119] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:19.512 [2024-12-13 05:52:19.356125] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:19.512 [2024-12-13 05:52:19.356139] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:19.512 qpair failed and we were unable to recover it. 
00:36:19.512 [2024-12-13 05:52:19.366094] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:19.512 [2024-12-13 05:52:19.366149] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:19.512 [2024-12-13 05:52:19.366162] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:19.512 [2024-12-13 05:52:19.366168] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:19.512 [2024-12-13 05:52:19.366174] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:19.512 [2024-12-13 05:52:19.366188] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:19.512 qpair failed and we were unable to recover it. 00:36:19.512 [2024-12-13 05:52:19.376141] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:19.512 [2024-12-13 05:52:19.376226] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:19.512 [2024-12-13 05:52:19.376239] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:19.512 [2024-12-13 05:52:19.376248] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:19.512 [2024-12-13 05:52:19.376254] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:19.512 [2024-12-13 05:52:19.376268] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:19.512 qpair failed and we were unable to recover it. 00:36:19.512 [2024-12-13 05:52:19.386182] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:19.512 [2024-12-13 05:52:19.386249] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:19.512 [2024-12-13 05:52:19.386261] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:19.512 [2024-12-13 05:52:19.386268] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:19.512 [2024-12-13 05:52:19.386273] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:19.512 [2024-12-13 05:52:19.386287] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:19.512 qpair failed and we were unable to recover it. 
00:36:19.512 [2024-12-13 05:52:19.396177] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:19.512 [2024-12-13 05:52:19.396234] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:19.512 [2024-12-13 05:52:19.396247] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:19.512 [2024-12-13 05:52:19.396253] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:19.512 [2024-12-13 05:52:19.396259] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:19.512 [2024-12-13 05:52:19.396273] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:19.512 qpair failed and we were unable to recover it. 00:36:19.512 [2024-12-13 05:52:19.406225] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:19.512 [2024-12-13 05:52:19.406279] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:19.512 [2024-12-13 05:52:19.406291] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:19.512 [2024-12-13 05:52:19.406297] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:19.512 [2024-12-13 05:52:19.406303] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:19.512 [2024-12-13 05:52:19.406317] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:19.512 qpair failed and we were unable to recover it. 00:36:19.512 [2024-12-13 05:52:19.416271] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:19.512 [2024-12-13 05:52:19.416329] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:19.512 [2024-12-13 05:52:19.416342] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:19.512 [2024-12-13 05:52:19.416348] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:19.512 [2024-12-13 05:52:19.416354] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8618000b90 00:36:19.512 [2024-12-13 05:52:19.416371] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:19.512 qpair failed and we were unable to recover it. 
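For anyone decoding the failure above: sct 1 is the NVMe "command specific" status code type, and per the NVMe-oF spec sc 130 (0x82) is the Fabrics CONNECT status Connect Invalid Parameters, which is consistent with the target-side message "Unknown controller ID 0x1": after the forced disconnect the target has dropped controller ID 1, so every attempt to re-attach I/O qpair 4 to it is rejected until the host gives up and resets the controller. A minimal manual probe of the same listener, assuming nvme-cli and the kernel nvme-tcp initiator are installed on the host (neither is part of this test's I/O path, so this is a sketch rather than the harness's own method; addresses are copied from the log), could look like:

    # Hypothetical repro: one CONNECT attempt against the failing listener.
    # While the target-side controller is gone it should be rejected just as
    # in the trace above.
    sudo modprobe nvme-tcp
    sudo nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 ||
        echo "CONNECT rejected (expected while the controller is being reset)"
    # Detach again if the connect succeeded, e.g. after the reset completes.
    sudo nvme disconnect -n nqn.2016-06.io.spdk:cnode1 2>/dev/null || true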
00:36:19.512 [2024-12-13 05:52:19.426357] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:19.512 [2024-12-13 05:52:19.426501] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:19.512 [2024-12-13 05:52:19.426556] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:19.512 [2024-12-13 05:52:19.426580] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:19.512 [2024-12-13 05:52:19.426601] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8624000b90 00:36:19.512 [2024-12-13 05:52:19.426652] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:19.512 qpair failed and we were unable to recover it. 00:36:19.512 [2024-12-13 05:52:19.436274] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:19.512 [2024-12-13 05:52:19.436345] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:19.512 [2024-12-13 05:52:19.436372] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:19.512 [2024-12-13 05:52:19.436386] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:19.512 [2024-12-13 05:52:19.436399] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8624000b90 00:36:19.512 [2024-12-13 05:52:19.436429] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:19.512 qpair failed and we were unable to recover it. 00:36:19.512 [2024-12-13 05:52:19.436491] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed 00:36:19.512 A controller has encountered a failure and is being reset. 00:36:19.512 Controller properly reset. 00:36:19.512 Initializing NVMe Controllers 00:36:19.512 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:36:19.512 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:36:19.512 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:36:19.512 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:36:19.512 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:36:19.512 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:36:19.512 Initialization complete. Launching workers. 
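The recovery sequence above is kicked off by the keep-alive failure ("Submitting Keep Alive failed"): once keep-alives stop completing, the initiator marks the controller failed, resets it, reconnects to nqn.2016-06.io.spdk:cnode1, and re-associates one TCP qpair per lcore before relaunching the workers. To provoke the same disconnect/reset cycle by hand against a live SPDK target, one rough option (a sketch only; the relative rpc.py path is an assumption about the checkout layout) is to drop and re-add the TCP listener:

    # Remove the listener so host keep-alives and CONNECT retries start failing,
    # then restore it and let the host-side reset path reconnect on its own.
    ./scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    sleep 2
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420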
00:36:19.512 Starting thread on core 1 00:36:19.512 Starting thread on core 2 00:36:19.512 Starting thread on core 3 00:36:19.512 Starting thread on core 0 00:36:19.512 05:52:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:36:19.512 00:36:19.512 real 0m10.737s 00:36:19.512 user 0m19.194s 00:36:19.512 sys 0m4.459s 00:36:19.512 05:52:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:19.512 05:52:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:19.512 ************************************ 00:36:19.512 END TEST nvmf_target_disconnect_tc2 00:36:19.512 ************************************ 00:36:19.771 05:52:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:36:19.771 05:52:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:36:19.771 05:52:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:36:19.771 05:52:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:19.771 05:52:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:36:19.771 05:52:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:19.771 05:52:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:36:19.771 05:52:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:19.771 05:52:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:19.771 rmmod nvme_tcp 00:36:19.771 rmmod nvme_fabrics 00:36:19.771 rmmod nvme_keyring 00:36:19.771 05:52:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:19.771 05:52:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:36:19.771 05:52:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:36:19.771 05:52:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 542665 ']' 00:36:19.771 05:52:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 542665 00:36:19.771 05:52:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 542665 ']' 00:36:19.771 05:52:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 542665 00:36:19.771 05:52:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:36:19.771 05:52:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:19.771 05:52:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 542665 00:36:19.771 05:52:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:36:19.771 05:52:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 00:36:19.771 05:52:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 542665' 00:36:19.771 killing process with pid 542665 00:36:19.771 05:52:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
common/autotest_common.sh@973 -- # kill 542665 00:36:19.771 05:52:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 542665 00:36:20.030 05:52:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:20.030 05:52:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:20.030 05:52:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:20.030 05:52:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:36:20.030 05:52:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:36:20.030 05:52:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:20.030 05:52:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:36:20.030 05:52:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:20.030 05:52:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:20.030 05:52:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:20.030 05:52:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:20.030 05:52:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:21.935 05:52:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:21.935 00:36:21.935 real 0m19.604s 00:36:21.935 user 0m46.973s 00:36:21.935 sys 0m9.316s 00:36:21.935 05:52:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:21.935 05:52:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:36:21.935 ************************************ 00:36:21.935 END TEST nvmf_target_disconnect 00:36:21.935 ************************************ 00:36:22.195 05:52:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:36:22.195 00:36:22.195 real 7m23.057s 00:36:22.195 user 16m50.723s 00:36:22.195 sys 2m8.000s 00:36:22.195 05:52:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:22.195 05:52:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:36:22.195 ************************************ 00:36:22.195 END TEST nvmf_host 00:36:22.195 ************************************ 00:36:22.195 05:52:22 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:36:22.195 05:52:22 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:36:22.195 05:52:22 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:36:22.195 05:52:22 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:36:22.195 05:52:22 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:22.195 05:52:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:22.195 ************************************ 00:36:22.195 START TEST nvmf_target_core_interrupt_mode 00:36:22.195 ************************************ 00:36:22.195 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:36:22.195 * Looking for test storage... 00:36:22.195 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:36:22.195 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:36:22.195 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lcov --version 00:36:22.195 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:36:22.195 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:36:22.195 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:22.454 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:22.454 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:22.454 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:36:22.454 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:36:22.454 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:36:22.454 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:36:22.454 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:36:22.454 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:36:22.454 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:36:22.454 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:22.454 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:36:22.454 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:36:22.454 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:22.454 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:22.454 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:36:22.454 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:36:22.454 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:22.454 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:36:22.454 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:36:22.454 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:36:22.454 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:36:22.454 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:22.454 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:36:22.454 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:36:22.454 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:22.454 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:22.454 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:36:22.454 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:22.454 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:36:22.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:22.454 --rc genhtml_branch_coverage=1 00:36:22.454 --rc genhtml_function_coverage=1 00:36:22.454 --rc genhtml_legend=1 00:36:22.454 --rc geninfo_all_blocks=1 00:36:22.454 --rc geninfo_unexecuted_blocks=1 00:36:22.454 00:36:22.454 ' 00:36:22.454 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:36:22.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:22.454 --rc genhtml_branch_coverage=1 00:36:22.454 --rc genhtml_function_coverage=1 00:36:22.454 --rc genhtml_legend=1 00:36:22.454 --rc geninfo_all_blocks=1 00:36:22.454 --rc geninfo_unexecuted_blocks=1 00:36:22.454 00:36:22.454 ' 00:36:22.454 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:36:22.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:22.454 --rc genhtml_branch_coverage=1 00:36:22.454 --rc genhtml_function_coverage=1 00:36:22.454 --rc genhtml_legend=1 00:36:22.454 --rc geninfo_all_blocks=1 00:36:22.454 --rc geninfo_unexecuted_blocks=1 00:36:22.454 00:36:22.454 ' 00:36:22.454 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:36:22.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:22.454 --rc genhtml_branch_coverage=1 00:36:22.454 --rc genhtml_function_coverage=1 00:36:22.454 --rc genhtml_legend=1 00:36:22.454 --rc geninfo_all_blocks=1 00:36:22.454 --rc geninfo_unexecuted_blocks=1 00:36:22.454 00:36:22.454 ' 00:36:22.454 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:36:22.454 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:36:22.454 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:22.454 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:36:22.454 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:22.454 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:22.454 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:22.454 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:22.454 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:22.454 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:22.454 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:22.454 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:22.454 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:22.454 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:22.454 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:36:22.454 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:36:22.454 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:22.454 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:22.454 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:22.454 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:22.454 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:22.454 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:36:22.454 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:22.454 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:22.454 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:22.454 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:22.454 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:22.455 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:22.455 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:36:22.455 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:22.455 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:36:22.455 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:22.455 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:22.455 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:22.455 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:22.455 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:22.455 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:22.455 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:22.455 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:22.455 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:22.455 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:22.455 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:36:22.455 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:36:22.455 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:36:22.455 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:36:22.455 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:36:22.455 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:22.455 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:22.455 ************************************ 00:36:22.455 START TEST nvmf_abort 00:36:22.455 ************************************ 00:36:22.455 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:36:22.455 * Looking for test storage... 00:36:22.455 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:22.455 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:36:22.455 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:36:22.455 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:36:22.455 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:36:22.455 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:22.455 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:22.455 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:22.455 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:36:22.455 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:36:22.455 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:36:22.455 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:36:22.455 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:36:22.455 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:36:22.455 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:36:22.455 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:22.455 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:36:22.455 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:36:22.455 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:22.455 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:22.455 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:36:22.455 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:36:22.455 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:22.455 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:36:22.714 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:36:22.714 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:36:22.715 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:36:22.715 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:22.715 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:36:22.715 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:36:22.715 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:22.715 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:22.715 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:36:22.715 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:22.715 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:36:22.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:22.715 --rc genhtml_branch_coverage=1 00:36:22.715 --rc genhtml_function_coverage=1 00:36:22.715 --rc genhtml_legend=1 00:36:22.715 --rc geninfo_all_blocks=1 00:36:22.715 --rc geninfo_unexecuted_blocks=1 00:36:22.715 00:36:22.715 ' 00:36:22.715 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:36:22.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:22.715 --rc genhtml_branch_coverage=1 00:36:22.715 --rc genhtml_function_coverage=1 00:36:22.715 --rc genhtml_legend=1 00:36:22.715 --rc geninfo_all_blocks=1 00:36:22.715 --rc geninfo_unexecuted_blocks=1 00:36:22.715 00:36:22.715 ' 00:36:22.715 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:36:22.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:22.715 --rc genhtml_branch_coverage=1 00:36:22.715 --rc genhtml_function_coverage=1 00:36:22.715 --rc genhtml_legend=1 00:36:22.715 --rc geninfo_all_blocks=1 00:36:22.715 --rc geninfo_unexecuted_blocks=1 00:36:22.715 00:36:22.715 ' 00:36:22.715 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:36:22.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:22.715 --rc genhtml_branch_coverage=1 00:36:22.715 --rc genhtml_function_coverage=1 00:36:22.715 --rc genhtml_legend=1 00:36:22.715 --rc geninfo_all_blocks=1 00:36:22.715 --rc geninfo_unexecuted_blocks=1 00:36:22.715 00:36:22.715 ' 00:36:22.715 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:22.715 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:36:22.715 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:22.715 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:22.715 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:22.715 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:22.715 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:22.715 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:22.715 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:22.715 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:22.715 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:22.715 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:22.715 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:36:22.715 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:36:22.715 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:22.715 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:22.715 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:22.715 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:22.715 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:22.715 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:36:22.715 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:22.715 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:22.715 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:22.715 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:22.715 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:22.715 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:22.715 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:36:22.715 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:22.715 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:36:22.715 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:22.715 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:22.715 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:22.715 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:22.715 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:22.715 05:52:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:22.715 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:22.715 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:22.715 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:22.715 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:22.715 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:36:22.715 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:36:22.715 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:36:22.715 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:22.715 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:22.715 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:22.715 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:22.715 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:22.715 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:22.715 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:22.715 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:22.715 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:22.715 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:22.715 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:36:22.715 05:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:29.290 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:29.290 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:36:29.290 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:29.290 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:29.290 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:29.290 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:29.290 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:29.290 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:36:29.290 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:29.290 05:52:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:36:29.290 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:36:29.290 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:36:29.290 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:36:29.290 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:36:29.290 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:36:29.290 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:29.290 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:29.290 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:29.290 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:29.290 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:29.290 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:29.290 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:29.290 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:29.290 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:29.290 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:29.290 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:29.290 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:29.290 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:29.290 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:29.290 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:29.290 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:29.290 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:29.290 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:29.290 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:29.290 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:36:29.290 Found 0000:af:00.0 (0x8086 - 0x159b) 00:36:29.290 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 
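The trace above is common.sh's NIC discovery: it builds arrays of known Intel E810/X722 and Mellanox PCI device IDs, matches them against the PCI bus, and echoes each hit before checking its driver binding. A minimal standalone sketch of the same sysfs walk follows; the two E810 IDs are taken from this log, and the script shape and variable names are illustrative, not the harness's actual gather_supported_nvmf_pci_devs:

    #!/usr/bin/env bash
    # List PCI NICs whose vendor:device pair is on a known-good list, then
    # report the kernel net interfaces sitting under each matching function.
    known=("0x8086:0x1592" "0x8086:0x159b")   # Intel E810 IDs seen in this log
    for dev in /sys/bus/pci/devices/*; do
        vendor=$(<"$dev/vendor") device=$(<"$dev/device")
        for id in "${known[@]}"; do
            if [[ "$vendor:$device" == "$id" ]]; then
                echo "Found ${dev##*/} ($vendor - $device)"
                for net in "$dev"/net/*; do   # net/ exists only when a net driver is bound
                    [[ -e "$net" ]] && echo "  net device: ${net##*/}"
                done
            fi
        done
    done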
00:36:29.290 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:29.290 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:29.290 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:29.290 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:29.290 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:29.290 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:36:29.290 Found 0000:af:00.1 (0x8086 - 0x159b) 00:36:29.290 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:29.290 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:29.290 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:29.290 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:29.290 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:29.290 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:29.290 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:29.290 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:29.290 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:29.290 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:29.290 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:29.290 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:29.290 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:29.290 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:29.290 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:29.290 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:36:29.290 Found net devices under 0000:af:00.0: cvl_0_0 00:36:29.290 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:29.290 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:29.290 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:29.291 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:29.291 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for 
net_dev in "${!pci_net_devs[@]}" 00:36:29.291 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:29.291 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:29.291 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:29.291 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:36:29.291 Found net devices under 0000:af:00.1: cvl_0_1 00:36:29.291 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:29.291 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:29.291 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:36:29.291 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:29.291 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:29.291 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:29.291 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:29.291 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:29.291 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:29.291 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:29.291 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:29.291 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:29.291 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:29.291 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:29.291 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:29.291 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:29.291 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:29.291 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:29.291 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:29.291 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:29.291 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:29.291 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:29.291 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip 
netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:29.291 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:29.291 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:29.291 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:29.291 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:29.291 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:29.291 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:29.291 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:29.291 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.395 ms 00:36:29.291 00:36:29.291 --- 10.0.0.2 ping statistics --- 00:36:29.291 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:29.291 rtt min/avg/max/mdev = 0.395/0.395/0.395/0.000 ms 00:36:29.291 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:29.291 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:29.291 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:36:29.291 00:36:29.291 --- 10.0.0.1 ping statistics --- 00:36:29.291 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:29.291 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:36:29.291 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:29.291 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:36:29.291 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:29.291 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:29.291 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:29.291 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:29.291 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:29.291 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:29.291 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:29.291 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:36:29.291 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:29.291 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:29.291 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:29.291 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=547346 
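The namespace plumbing traced above (nvmf_tcp_init plus the two pings) is what gives the test a two-endpoint TCP path on a single machine: the target NIC moves into its own network namespace and gets 10.0.0.2, while the initiator NIC stays in the root namespace on 10.0.0.1. Condensed into one sketch, with the interface and namespace names taken from the log and everything else assumed:

    #!/usr/bin/env bash
    set -e
    NS=cvl_0_0_ns_spdk TGT_IF=cvl_0_0 INI_IF=cvl_0_1
    ip -4 addr flush "$TGT_IF"; ip -4 addr flush "$INI_IF"
    ip netns add "$NS"
    ip link set "$TGT_IF" netns "$NS"              # target NIC lives in the namespace
    ip addr add 10.0.0.1/24 dev "$INI_IF"          # initiator side, root namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
    ip link set "$INI_IF" up
    ip netns exec "$NS" ip link set "$TGT_IF" up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP through
    ping -c 1 10.0.0.2                             # root ns -> target ns
    ip netns exec "$NS" ping -c 1 10.0.0.1         # target ns -> root ns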
00:36:29.291 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 547346 00:36:29.291 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:36:29.291 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 547346 ']' 00:36:29.291 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:29.291 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:29.291 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:29.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:29.291 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:29.291 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:29.291 [2024-12-13 05:52:28.420964] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:29.291 [2024-12-13 05:52:28.421868] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:36:29.291 [2024-12-13 05:52:28.421906] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:29.291 [2024-12-13 05:52:28.500695] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:36:29.291 [2024-12-13 05:52:28.522218] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:29.291 [2024-12-13 05:52:28.522253] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:29.291 [2024-12-13 05:52:28.522259] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:29.291 [2024-12-13 05:52:28.522265] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:29.291 [2024-12-13 05:52:28.522271] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:29.291 [2024-12-13 05:52:28.523531] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:36:29.291 [2024-12-13 05:52:28.523636] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:36:29.291 [2024-12-13 05:52:28.523636] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:36:29.291 [2024-12-13 05:52:28.585117] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:29.291 [2024-12-13 05:52:28.585891] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:36:29.291 [2024-12-13 05:52:28.586344] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:36:29.291 [2024-12-13 05:52:28.586440] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:36:29.291 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:29.291 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:36:29.291 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:29.292 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:29.292 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:29.292 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:29.292 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:36:29.292 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.292 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:29.292 [2024-12-13 05:52:28.648422] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:29.292 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.292 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:36:29.292 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.292 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:29.292 Malloc0 00:36:29.292 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.292 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:36:29.292 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.292 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:29.292 Delay0 00:36:29.292 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.292 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:36:29.292 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.292 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:29.292 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.292 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:36:29.292 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 
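One detail worth decoding in the startup notices above: the target is launched with -m 0xE, and 0xE is binary 1110, so the reactors land on cores 1, 2 and 3 while core 0 stays free; that accounts for exactly the three "Reactor started on core" lines. A one-liner to expand any core mask the same way:

    printf '0x%X -> cores:' 0xE; for c in {0..7}; do (( 0xE >> c & 1 )) && printf ' %d' "$c"; done; echo
    # prints: 0xE -> cores: 1 2 3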
00:36:29.292 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:29.292 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.292 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:29.292 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.292 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:29.292 [2024-12-13 05:52:28.740360] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:29.292 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.292 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:36:29.292 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.292 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:29.292 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.292 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:36:29.292 [2024-12-13 05:52:28.864601] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:36:31.197 Initializing NVMe Controllers 00:36:31.197 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:36:31.197 controller IO queue size 128 less than required 00:36:31.197 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:36:31.197 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:36:31.197 Initialization complete. Launching workers. 
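Collected from the rpc_cmd calls above: the target gets a TCP transport, a 64 MiB malloc bdev wrapped in a delay bdev (1000000 us, roughly one second, of injected latency per op class, which is what gives the abort tool slow I/O worth aborting), and a subsystem listening on 10.0.0.2:4420. A sketch of the equivalent direct rpc.py session; the rpc.py and abort paths are assumptions, the arguments mirror the log:

    #!/usr/bin/env bash
    RPC="scripts/rpc.py"   # assumed path; talks to the target's default /var/tmp/spdk.sock
    $RPC nvmf_create_transport -t tcp -o -u 8192 -a 256
    $RPC bdev_malloc_create 64 4096 -b Malloc0          # 64 MiB bdev, 4 KiB blocks
    $RPC bdev_delay_create -b Malloc0 -d Delay0 \
         -r 1000000 -t 1000000 -w 1000000 -n 1000000    # latencies in microseconds
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # one core, one second, queue depth 128, as in the run above:
    build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
                         -c 0x1 -t 1 -l warning -q 128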
00:36:31.197 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 37685 00:36:31.197 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 37746, failed to submit 66 00:36:31.197 success 37685, unsuccessful 61, failed 0 00:36:31.198 05:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:31.198 05:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:31.198 05:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:31.198 05:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:31.198 05:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:36:31.198 05:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:36:31.198 05:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:31.198 05:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:36:31.198 05:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:31.198 05:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:36:31.198 05:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:31.198 05:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:31.198 rmmod nvme_tcp 00:36:31.198 rmmod nvme_fabrics 00:36:31.198 rmmod nvme_keyring 00:36:31.198 05:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:31.198 05:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:36:31.198 05:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:36:31.198 05:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 547346 ']' 00:36:31.198 05:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 547346 00:36:31.198 05:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 547346 ']' 00:36:31.198 05:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 547346 00:36:31.198 05:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:36:31.198 05:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:31.198 05:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 547346 00:36:31.198 05:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:31.198 05:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:36:31.198 05:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 547346' 00:36:31.198 killing process with pid 547346 00:36:31.198 
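The abort counters printed above reconcile exactly, which is the sanity check to run on this output: 127 completed plus 37685 failed (aborted) gives 37812 I/Os observed; 37746 aborts submitted plus 66 that could not be submitted gives the same 37812; and the submitted aborts split into 37685 successes plus 61 unsuccessful, returning 37746:

    echo $((127 + 37685))   # 37812 I/Os the tool saw
    echo $((37746 + 66))    # 37812 abort attempts (submitted + failed to submit)
    echo $((37685 + 61))    # 37746 submitted aborts (success + unsuccessful)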
05:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 547346 00:36:31.198 05:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 547346 00:36:31.198 05:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:31.198 05:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:31.198 05:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:31.198 05:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:36:31.198 05:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:36:31.198 05:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:31.198 05:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:36:31.198 05:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:31.198 05:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:31.198 05:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:31.198 05:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:31.198 05:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:33.735 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:33.735 00:36:33.735 real 0m10.992s 00:36:33.735 user 0m10.070s 00:36:33.735 sys 0m5.639s 00:36:33.735 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:33.735 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:33.735 ************************************ 00:36:33.735 END TEST nvmf_abort 00:36:33.735 ************************************ 00:36:33.735 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:36:33.735 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:36:33.735 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:33.735 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:33.735 ************************************ 00:36:33.735 START TEST nvmf_ns_hotplug_stress 00:36:33.735 ************************************ 00:36:33.735 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:36:33.735 * Looking for test storage... 
00:36:33.735 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:33.735 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:36:33.735 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:36:33.735 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:36:33.735 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:36:33.735 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:33.735 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:33.735 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:33.735 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:36:33.735 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:36:33.735 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:36:33.735 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:36:33.735 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:36:33.735 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:36:33.735 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:36:33.735 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:33.735 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:36:33.735 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:36:33.735 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:33.735 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:33.735 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:36:33.735 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:36:33.735 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:33.735 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:36:33.735 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:36:33.735 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:36:33.735 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:36:33.735 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:33.735 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:36:33.735 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:36:33.735 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:33.735 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:33.735 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:36:33.735 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:33.735 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:36:33.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:33.735 --rc genhtml_branch_coverage=1 00:36:33.735 --rc genhtml_function_coverage=1 00:36:33.735 --rc genhtml_legend=1 00:36:33.735 --rc geninfo_all_blocks=1 00:36:33.735 --rc geninfo_unexecuted_blocks=1 00:36:33.735 00:36:33.735 ' 00:36:33.735 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:36:33.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:33.735 --rc genhtml_branch_coverage=1 00:36:33.735 --rc genhtml_function_coverage=1 00:36:33.735 --rc genhtml_legend=1 00:36:33.735 --rc geninfo_all_blocks=1 00:36:33.735 --rc geninfo_unexecuted_blocks=1 00:36:33.735 00:36:33.735 ' 00:36:33.735 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:36:33.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:33.735 --rc genhtml_branch_coverage=1 00:36:33.735 --rc genhtml_function_coverage=1 00:36:33.735 --rc genhtml_legend=1 00:36:33.735 --rc geninfo_all_blocks=1 00:36:33.735 --rc geninfo_unexecuted_blocks=1 00:36:33.735 00:36:33.735 ' 00:36:33.735 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:36:33.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:33.735 --rc genhtml_branch_coverage=1 00:36:33.735 --rc genhtml_function_coverage=1 
00:36:33.735 --rc genhtml_legend=1 00:36:33.735 --rc geninfo_all_blocks=1 00:36:33.735 --rc geninfo_unexecuted_blocks=1 00:36:33.735 00:36:33.735 ' 00:36:33.735 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:33.735 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:36:33.735 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:33.735 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:33.735 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:33.735 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:33.736 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:33.736 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:33.736 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:33.736 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:33.736 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:33.736 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:33.736 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:36:33.736 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:36:33.736 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:33.736 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:33.736 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:33.736 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:33.736 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:33.736 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:36:33.736 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:33.736 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:33.736 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
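The trace above walks scripts/common.sh's lt/cmp_versions helpers to decide whether the installed lcov is older than 2.x, which determines the spelling of the coverage options exported next. A minimal sketch of that comparison, with the per-component decimal validation elided (not the verbatim scripts/common.sh helper):

    # Split both versions on . - : and compare component by component.
    cmp_versions() {
        local IFS=.-:
        local op=$2 v ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        local max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            # Missing components count as 0, so 1.15 compares as 1.15.0 etc.
            if (( ${ver1[v]:-0} > ${ver2[v]:-0} )); then [[ $op == '>' ]]; return; fi
            if (( ${ver1[v]:-0} < ${ver2[v]:-0} )); then [[ $op == '<' ]]; return; fi
        done
        [[ $op == '>=' || $op == '<=' || $op == '==' ]]   # all components equal
    }
    lt() { cmp_versions "$1" '<' "$2"; }
    # lcov 1.x still takes the old --rc option names, hence the branch traced above:
    lt 1.15 2 && LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'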
00:36:33.736 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:33.736 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:33.736 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:33.736 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:36:33.736 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:33.736 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:36:33.736 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:33.736 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:33.736 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:33.736 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:33.736 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:33.736 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:33.736 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:33.736 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:33.736 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:33.736 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:33.736 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:36:33.736 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:36:33.736 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:33.736 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:33.736 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:33.736 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:33.736 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:33.736 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:33.736 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:33.736 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:33.736 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:33.736 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:33.736 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:36:33.736 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:36:40.308 05:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:40.308 05:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:36:40.308 05:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:40.308 05:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:40.308 05:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:40.308 05:52:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:40.308 05:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:40.308 05:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:36:40.308 05:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:40.308 05:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:36:40.308 05:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:36:40.308 05:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:36:40.308 05:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:36:40.308 05:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:36:40.308 05:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:36:40.308 05:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:40.308 05:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:40.308 05:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:40.308 05:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:40.308 05:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:40.308 05:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:40.308 05:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:40.308 05:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:40.308 05:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:40.308 05:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:40.308 05:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:40.308 05:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:40.308 05:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:40.308 05:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:40.308 05:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:40.308 05:52:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:40.308 05:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:40.308 05:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:40.308 05:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:40.308 05:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:36:40.308 Found 0000:af:00.0 (0x8086 - 0x159b) 00:36:40.308 05:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:40.308 05:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:40.308 05:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:40.308 05:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:40.308 05:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:40.308 05:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:40.308 05:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:36:40.308 Found 0000:af:00.1 (0x8086 - 0x159b) 00:36:40.308 05:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:40.308 05:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:40.308 05:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:40.308 05:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:40.308 05:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:40.308 05:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:40.308 05:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:40.308 05:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:40.308 05:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:40.308 05:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:40.308 05:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:40.308 05:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:40.308 05:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:40.308 
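The discovery loop traced here (it continues just below for the second port, 0000:af:00.1) matches PCI vendor:device IDs against a cache and then reads the netdev name out of sysfs. A hedged sketch of that pattern; pci_bus_cache is assumed to be populated elsewhere from sysfs/lspci, and its layout here is illustrative:

    intel=0x8086
    declare -A pci_bus_cache=( ["$intel:0x159b"]="0000:af:00.0 0000:af:00.1" )
    e810=(${pci_bus_cache["$intel:0x159b"]})   # word-split into the two E810 ports
    pci_devs=("${e810[@]}")
    net_devs=()
    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # netdev dir(s) behind this port
        pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the sysfs path prefix
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done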
05:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:40.308 05:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:40.308 05:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:36:40.308 Found net devices under 0000:af:00.0: cvl_0_0 00:36:40.308 05:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:40.308 05:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:40.308 05:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:40.308 05:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:40.308 05:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:40.308 05:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:40.308 05:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:40.308 05:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:40.308 05:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:36:40.308 Found net devices under 0000:af:00.1: cvl_0_1 00:36:40.308 05:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:40.308 05:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:40.308 05:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:36:40.308 05:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:40.308 05:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:40.308 05:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:40.308 05:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:40.308 05:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:40.308 05:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:40.308 05:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:40.308 05:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:40.308 05:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:40.308 05:52:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:36:40.308 05:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:36:40.308 05:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:36:40.308 05:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:36:40.308 05:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:36:40.308 05:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:36:40.308 05:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:36:40.308 05:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:36:40.308 05:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:36:40.308 05:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:36:40.308 05:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:36:40.308 05:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:36:40.308 05:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:36:40.308 05:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:36:40.308 05:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:36:40.308 05:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:36:40.308 05:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:36:40.308 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:36:40.308 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.383 ms
00:36:40.308
00:36:40.308 --- 10.0.0.2 ping statistics ---
00:36:40.308 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:36:40.308 rtt min/avg/max/mdev = 0.383/0.383/0.383/0.000 ms
00:36:40.308 05:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:36:40.308 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:36:40.308 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.222 ms
00:36:40.308
00:36:40.308 --- 10.0.0.1 ping statistics ---
00:36:40.308 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:36:40.308 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms
00:36:40.308 05:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:36:40.309 05:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0
00:36:40.309 05:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:36:40.309 05:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:36:40.309 05:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:36:40.309 05:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:36:40.309 05:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:36:40.309 05:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:36:40.309 05:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:36:40.309 05:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE
00:36:40.309 05:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:36:40.309 05:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable
00:36:40.309 05:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:36:40.309 05:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=551068
00:36:40.309 05:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 551068
00:36:40.309 05:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE
00:36:40.309 05:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 551068 ']'
00:36:40.309 05:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:36:40.309 05:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100
00:36:40.309 05:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:36:40.309 05:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable
05:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:36:40.309 [2024-12-13 05:52:39.563084] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
[2024-12-13 05:52:39.564050] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization...
[2024-12-13 05:52:39.564086] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
[2024-12-13 05:52:39.645385] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
[2024-12-13 05:52:39.667126] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
[2024-12-13 05:52:39.667161] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
[2024-12-13 05:52:39.667168] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
[2024-12-13 05:52:39.667173] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
[2024-12-13 05:52:39.667178] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
[2024-12-13 05:52:39.668428] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
[2024-12-13 05:52:39.668518] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3
[2024-12-13 05:52:39.668519] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
[2024-12-13 05:52:39.730494] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
[2024-12-13 05:52:39.731440] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode.
[2024-12-13 05:52:39.731570] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
[2024-12-13 05:52:39.731731] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode.
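The waitforlisten call traced above blocks until the newly forked nvmf_tgt answers on its RPC socket. A minimal sketch of what that step does conceptually, not the verbatim autotest helper (rpc_get_methods is a standard SPDK RPC; the retry count mirrors the max_retries=100 seen in the trace):

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        for (( i = 0; i < 100; i++ )); do
            kill -0 "$pid" 2> /dev/null || return 1   # target process died during startup
            # Probe the RPC server; any successful call means it is listening.
            scripts/rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null && return 0
            sleep 0.1
        done
        return 1                                      # never came up within the retry budget
    }
    waitforlisten "$nvmfpid" /var/tmp/spdk.sock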
00:36:40.309 05:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:40.309 05:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:36:40.309 05:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:40.309 05:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:40.309 05:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:36:40.309 05:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:40.309 05:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:36:40.309 05:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:36:40.309 [2024-12-13 05:52:39.965282] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:40.309 05:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:36:40.309 05:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:40.570 [2024-12-13 05:52:40.361684] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:40.570 05:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:36:40.570 05:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:36:40.828 Malloc0 00:36:40.828 05:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:36:41.087 Delay0 00:36:41.087 05:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:41.417 05:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:36:41.417 NULL1 00:36:41.417 05:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 
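The RPC calls just traced (ns_hotplug_stress.sh lines @27 through @36) provision the target, and the entries that repeat from here on come from its stress loop (@40 through @50). A consolidated recap, reconstructed from those line numbers; the $rpc shorthand is illustrative and the absolute paths from the log are shortened:

    rpc="scripts/rpc.py"
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_malloc_create 32 512 -b Malloc0                     # 32 MiB bdev, 512 B blocks
    $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # becomes namespace 1
    $rpc bdev_null_create NULL1 1000 512
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

    # 30 s of reads from the initiator side; -Q 1000 appears to rate-limit error
    # reporting, which matches the "Message suppressed 999 times" lines below.
    spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 30 -q 128 -w randread -o 512 -Q 1000 &
    PERF_PID=$!

    null_size=1000
    while kill -0 "$PERF_PID" 2> /dev/null; do        # hotplug until perf exits
        $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
        null_size=$((null_size + 1))
        $rpc bdev_null_resize NULL1 "$null_size"      # prints "true" on success, as seen below
    done

Namespace 1 is removed and re-added under live I/O each iteration, which is why the initiator keeps logging "Read completed with error (sct=0, sc=11)" while the perf job runs.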
00:36:41.676 05:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=551528 00:36:41.676 05:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:36:41.676 05:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551528 00:36:41.676 05:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:43.053 Read completed with error (sct=0, sc=11) 00:36:43.053 05:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:43.053 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:43.053 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:43.053 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:43.053 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:43.053 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:43.053 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:43.053 05:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:36:43.053 05:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:36:43.311 true 00:36:43.311 05:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551528 00:36:43.311 05:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:44.246 05:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:44.246 05:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:36:44.246 05:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:36:44.504 true 00:36:44.504 05:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551528 00:36:44.504 05:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:44.763 05:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:44.763 05:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:36:44.763 05:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:36:45.021 true 00:36:45.021 05:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551528 00:36:45.021 05:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:45.966 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:45.966 05:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:46.224 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:46.224 05:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:36:46.224 05:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:36:46.483 true 00:36:46.483 05:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551528 00:36:46.483 05:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:46.741 05:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:46.741 05:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:36:46.741 05:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:36:46.999 true 00:36:46.999 05:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551528 00:36:46.999 05:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:48.376 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:48.376 05:52:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:48.376 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:48.376 Message suppressed 999 times: Read completed with error (sct=0, 
sc=11) 00:36:48.376 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:48.376 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:48.376 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:48.376 05:52:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:36:48.376 05:52:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:36:48.634 true 00:36:48.634 05:52:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551528 00:36:48.634 05:52:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:49.570 05:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:49.570 05:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:36:49.570 05:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:36:49.829 true 00:36:49.829 05:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551528 00:36:49.829 05:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:50.087 05:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:50.346 05:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:36:50.346 05:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:36:50.346 true 00:36:50.605 05:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551528 00:36:50.605 05:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:51.540 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:51.540 05:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:51.540 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:51.798 05:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:36:51.798 05:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:36:51.798 true 00:36:51.798 05:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551528 00:36:51.798 05:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:52.056 05:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:52.315 05:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:36:52.315 05:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:36:52.315 true 00:36:52.315 05:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551528 00:36:52.574 05:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:53.509 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:53.509 05:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:53.509 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:53.767 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:53.767 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:53.767 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:53.767 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:53.767 05:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:36:53.767 05:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:36:54.026 true 00:36:54.026 05:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551528 00:36:54.026 05:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:54.962 05:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:54.962 05:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 
-- # null_size=1012 00:36:54.962 05:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:36:55.223 true 00:36:55.223 05:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551528 00:36:55.223 05:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:55.483 05:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:55.741 05:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:36:55.741 05:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:36:55.741 true 00:36:55.742 05:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551528 00:36:55.742 05:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:57.117 05:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:57.117 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:57.117 05:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:36:57.117 05:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:36:57.375 true 00:36:57.375 05:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551528 00:36:57.375 05:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:57.375 05:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:57.634 05:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:36:57.634 05:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:36:57.892 true 00:36:57.892 05:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551528 00:36:57.892 05:52:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:58.828 05:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:58.828 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:59.086 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:59.086 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:59.086 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:59.086 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:59.086 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:59.086 05:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:36:59.086 05:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:36:59.344 true 00:36:59.344 05:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551528 00:36:59.345 05:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:00.280 05:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:00.280 05:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:37:00.280 05:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:37:00.539 true 00:37:00.539 05:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551528 00:37:00.539 05:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:00.797 05:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:01.055 05:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:37:01.055 05:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:37:01.055 true 00:37:01.055 05:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551528 00:37:01.055 05:53:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:02.432 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:02.432 05:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:02.432 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:02.432 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:02.432 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:02.432 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:02.432 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:02.432 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:02.432 05:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:37:02.432 05:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:37:02.700 true 00:37:02.700 05:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551528 00:37:02.700 05:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:03.635 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:03.635 05:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:03.635 05:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:37:03.635 05:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:37:03.893 true 00:37:03.893 05:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551528 00:37:03.893 05:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:04.152 05:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:04.152 05:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:37:04.152 05:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:37:04.411 true 00:37:04.411 05:53:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551528 00:37:04.411 05:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:05.791 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:05.791 05:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:05.791 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:05.791 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:05.791 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:05.791 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:05.791 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:05.791 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:05.791 05:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:37:05.791 05:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:37:06.050 true 00:37:06.050 05:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551528 00:37:06.050 05:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:06.986 05:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:06.986 05:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:37:06.986 05:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:37:07.244 true 00:37:07.244 05:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551528 00:37:07.245 05:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:07.503 05:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:07.503 05:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:37:07.503 05:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize 
NULL1 1024 00:37:07.762 true 00:37:07.762 05:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551528 00:37:07.762 05:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:08.697 05:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:08.956 05:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:37:08.956 05:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:37:09.214 true 00:37:09.214 05:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551528 00:37:09.214 05:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:09.473 05:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:09.473 05:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:37:09.473 05:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:37:09.732 true 00:37:09.732 05:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551528 00:37:09.732 05:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:10.668 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:10.926 05:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:10.926 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:10.926 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:10.926 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:10.926 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:10.926 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:10.926 05:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:37:10.926 05:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:37:11.185 true 
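The @44-@50 records threading through this stretch are bash xtrace output from the single-namespace phase of ns_hotplug_stress.sh. A minimal sketch of the loop being traced, reconstructed only from these records and not from the script's verbatim source (variable names such as $rpc_py and $perf_pid are assumptions):

  # Hot-remove/re-add namespace 1 and grow NULL1 while an I/O workload (PID 551528) runs.
  while kill -0 "$perf_pid"; do                                          # @44: workload still alive?
      "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # @45: hot-remove NSID 1
      "$rpc_py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # @46: re-attach the Delay0 bdev
      null_size=$((null_size + 1))                                       # @49: 1016, 1017, ... per pass
      "$rpc_py" bdev_null_resize NULL1 "$null_size"                      # @50: grow the null bdev
  done
  wait "$perf_pid"                                                       # @53: reap the finished workload
  "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1        # @54: final cleanup
  "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2        # @55

The loop's exit shows up just below, where kill -0 reports 'No such process': the I/O workload has finished on its own, which is the expected termination condition rather than an error.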
00:37:11.185 05:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551528
00:37:11.185 05:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:37:12.121 Initializing NVMe Controllers
00:37:12.121 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:37:12.121 Controller IO queue size 128, less than required.
00:37:12.121 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:37:12.121 Controller IO queue size 128, less than required.
00:37:12.121 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:37:12.121 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:37:12.121 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:37:12.121 Initialization complete. Launching workers.
00:37:12.121 ========================================================
00:37:12.121                                                                                             Latency(us)
00:37:12.121 Device Information                                                     :       IOPS      MiB/s    Average        min        max
00:37:12.121 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    1773.39       0.87   49092.32    2727.50 1013040.32
00:37:12.121 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   17993.32       8.79    7113.34    1984.23  447837.02
00:37:12.121 ========================================================
00:37:12.121 Total                                                                  :   19766.70       9.65   10879.52    1984.23 1013040.32
00:37:12.121
00:37:12.121 05:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:37:12.121 05:53:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028
00:37:12.121 05:53:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:37:12.380 true
00:37:12.380 05:53:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551528
00:37:12.380 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (551528) - No such process
00:37:12.380 05:53:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 551528
00:37:12.380 05:53:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:37:12.639 05:53:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:37:12.897 05:53:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:37:12.897 05:53:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:37:12.897 05:53:12
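A quick consistency check on the performance summary above: the Total row is the per-namespace sum, and its average latency is the IOPS-weighted mean of the two rows (checked here with bc, which is not part of the test itself):

  echo '1773.39 + 17993.32' | bc -l                               # 19766.71   (reported 19766.70; rounding)
  echo '(1773.39*49092.32 + 17993.32*7113.34)/19766.70' | bc -l   # ~10879.53  (reported 10879.52)

The gap between the two namespaces (~49 ms vs ~7 ms average) is consistent with NSID 1 being the Delay0 bdev, whose purpose is to inject latency, while NSID 2 is the NULL1 bdev being resized; the ~1.0 s and ~0.45 s maxima likely reflect I/O held up across the hot-remove/re-add cycles.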
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:37:12.897 05:53:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:12.897 05:53:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:37:12.897 null0 00:37:13.156 05:53:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:13.156 05:53:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:13.156 05:53:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:37:13.156 null1 00:37:13.156 05:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:13.156 05:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:13.156 05:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:37:13.414 null2 00:37:13.414 05:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:13.414 05:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:13.414 05:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:37:13.673 null3 00:37:13.673 05:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:13.673 05:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:13.673 05:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:37:13.673 null4 00:37:13.673 05:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:13.673 05:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:13.673 05:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:37:13.931 null5 00:37:13.932 05:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:13.932 05:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:13.932 05:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:37:14.190 null6 00:37:14.190 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:14.190 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:14.190 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:37:14.190 null7 00:37:14.190 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:14.190 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:14.190 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:37:14.190 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:14.190 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:37:14.190 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:37:14.190 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:37:14.190 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:14.190 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:14.190 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:14.190 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:14.190 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:14.190 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
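At @58-@64 the test moves to its multi-worker phase: eight null bdevs (null0 through null7, 100 MB each with 4096-byte blocks) are created, then eight add_remove workers are launched in the background, one per namespace/bdev pair. A sketch consistent with the trace ($rpc_py is an assumed name; the trailing & is inferred from the pids+=($!) records, since xtrace does not print it):

  nthreads=8
  pids=()
  for ((i = 0; i < nthreads; i++)); do
      "$rpc_py" bdev_null_create "null$i" 100 4096   # @60: name, size in MB, block size
  done
  for ((i = 0; i < nthreads; i++)); do
      add_remove $((i + 1)) "null$i" &               # @63: worker for NSID i+1 on bdev null$i
      pids+=($!)                                     # @64: remember the worker's PID
  done
  wait "${pids[@]}"                                  # @66: the 'wait 556622 556625 ...' seen below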
00:37:14.190 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:37:14.190 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:14.190 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:14.190 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:37:14.190 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:14.190 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:14.190 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:14.190 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:37:14.190 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:14.190 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:37:14.190 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:14.190 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:37:14.190 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:14.190 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:14.190 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:14.190 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:37:14.190 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:14.190 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:37:14.190 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:14.190 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:37:14.449 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:14.449 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:14.449 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:14.449 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:37:14.449 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:14.449 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:37:14.449 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:14.449 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:37:14.449 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:14.449 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:37:14.449 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:14.449 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:14.449 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:14.449 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:14.449 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:37:14.449 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:37:14.449 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:37:14.449 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:14.449 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:14.449 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:14.449 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:14.449 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:37:14.449 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:14.449 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:37:14.449 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:37:14.449 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:14.449 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:14.449 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:14.449 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:14.449 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:37:14.449 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:14.449 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 556622 556625 556627 556630 556633 556636 556638 556641 00:37:14.449 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:37:14.449 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:14.449 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:14.449 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:14.449 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:14.449 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:14.449 05:53:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:14.449 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:14.449 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:14.449 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:14.449 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:14.449 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:14.708 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:14.708 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:14.708 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:14.708 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:14.708 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:14.708 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:14.708 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:14.708 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:14.708 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:14.708 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:14.708 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:14.708 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:14.708 
05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:14.708 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:14.708 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:14.709 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:14.709 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:14.709 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:14.709 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:14.709 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:14.709 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:14.709 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:14.709 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:14.709 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:14.967 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:14.967 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:14.967 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:14.967 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:14.967 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:14.967 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 
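Each PID handed to the wait above is one instance of the add_remove helper, whose @14-@18 records are what interleave through the rest of this output. Reconstructed from the trace (a sketch under the same naming assumptions as above, not the script's verbatim source):

  add_remove() {
      local nsid=$1 bdev=$2             # @14: e.g. 'local nsid=1 bdev=null0'
      for ((i = 0; i < 10; i++)); do    # @16: ten add/remove rounds per worker
          "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"   # @17
          "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"           # @18
      done
  }

Because all eight workers hammer nqn.2016-06.io.spdk:cnode1 concurrently, their xtrace lines interleave, which is why the @16/@17/@18 records below appear shuffled across namespaces rather than in per-worker order.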
00:37:14.967 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:14.967 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:15.226 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:15.226 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:15.226 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:15.226 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:15.226 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:15.226 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:15.226 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:15.226 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:15.226 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:15.226 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:15.226 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:15.226 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:15.226 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:15.226 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:15.226 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:15.226 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:15.226 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:15.226 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:15.226 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:15.226 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:15.226 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:15.226 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:15.226 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:15.226 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:15.226 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:15.226 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:15.226 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:15.226 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:15.227 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:15.227 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:15.227 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:15.485 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:15.485 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:15.485 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:15.485 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:15.485 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:15.485 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:15.485 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:15.485 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:15.485 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:15.485 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:15.485 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:15.485 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:15.486 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:15.486 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:15.486 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:15.486 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:15.486 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:15.486 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:15.486 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:15.486 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:15.486 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:15.486 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:15.486 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:15.486 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:15.486 
05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:15.745 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:15.745 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:15.745 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:15.745 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:15.745 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:15.745 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:15.745 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:15.745 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:16.004 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:16.004 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:16.004 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:16.004 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:16.004 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:16.004 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:16.004 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:16.004 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:16.004 05:53:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:16.004 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:16.004 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:16.004 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:16.004 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:16.004 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:16.004 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:16.004 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:16.004 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:16.004 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:16.004 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:16.004 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:16.004 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:16.004 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:16.004 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:16.004 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:16.262 05:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:16.262 05:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:16.262 05:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
00:37:16.263-00:37:18.336 05:53:16-05:53:18 nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16-18: the stress loop keeps cycling while (( i < 10 )); each pass attaches the null bdevs as namespaces with /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n <nsid> nqn.2016-06.io.spdk:cnode1 null<nsid-1> for nsids 1-8 (the calls apparently run concurrently, which is why the trace interleaves them out of order), then detaches all eight with rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 <nsid>.
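The pattern above reduces to a small add/remove cycle. A minimal sketch, assuming the loop body matches what the @16-@18 trace lines show (the real target/ns_hotplug_stress.sh may differ; rpc_py, the NQN, and the null bdev names are taken from the log):

#!/usr/bin/env bash
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

for (( i = 0; i < 10; ++i )); do
    for n in {1..8}; do
        # nsid n maps to bdev null(n-1); backgrounding the calls would
        # explain the out-of-order interleaving seen in the trace
        "$rpc_py" nvmf_subsystem_add_ns -n "$n" "$nqn" "null$((n - 1))" &
    done
    wait
    for n in {1..8}; do
        "$rpc_py" nvmf_subsystem_remove_ns "$nqn" "$n" &
    done
    wait
done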
00:37:18.336 05:53:18 nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 (last removal)
00:37:18.336-00:37:18.596 05:53:18 nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16: the remaining (( ++i )) / (( i < 10 )) checks run the counter out and the loop exits
00:37:18.596 05:53:18 nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT
00:37:18.596 05:53:18 nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini
00:37:18.596 05:53:18 nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup
00:37:18.596 05:53:18 nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync
00:37:18.596 05:53:18 nvmf_ns_hotplug_stress -- nvmf/common.sh@124-126 -- # set +e; for i in {1..20}; modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:37:18.596 05:53:18 nvmf_ns_hotplug_stress -- nvmf/common.sh@127-129 -- # modprobe -v -r nvme-fabrics; set -e; return 0
00:37:18.596 05:53:18 nvmf_ns_hotplug_stress -- nvmf/common.sh@517-518 -- # '[' -n 551068 ']'; killprocess 551068
00:37:18.596 05:53:18 nvmf_ns_hotplug_stress -- common/autotest_common.sh@954-964: the pid is validated (uname is Linux, ps resolves 551068 to reactor_1, which is not sudo)
00:37:18.596 05:53:18 nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 551068'
killing process with pid 551068
00:37:18.596 05:53:18 nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 551068
00:37:18.596 05:53:18 nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 551068
00:37:18.855 05:53:18 nvmf_ns_hotplug_stress -- nvmf/common.sh@520-524 -- # '[' '' == iso ']'; [[ tcp == tcp ]]; nvmf_tcp_fini
00:37:18.855 05:53:18 nvmf_ns_hotplug_stress -- nvmf/common.sh@297/@791 -- # iptr: iptables-save | grep -v SPDK_NVMF | iptables-restore
00:37:18.855 05:53:18 nvmf_ns_hotplug_stress -- nvmf/common.sh@298-302 -- # [[ cvl_0_0_ns_spdk == nvmf_tgt_ns_spdk ]] is false; remove_spdk_ns runs with xtrace disabled per command (eval '_remove_spdk_ns 15> /dev/null')
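Condensed, the teardown just traced is the usual nvmftestfini shape. A hedged sketch (module names, pid 551068, the SPDK_NVMF iptables marker, and the namespace name come from the log; the control flow is an approximation of nvmf/common.sh, not a copy, and the netns delete is an assumed body for _remove_spdk_ns):

sync
for mod in nvme-tcp nvme-fabrics; do
    modprobe -v -r "$mod" || true          # -v is what prints the rmmod lines
done
kill 551068 && wait 551068                 # stop the nvmf_tgt reactor
iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the test's rules
ip netns delete cvl_0_0_ns_spdk            # assumed: tear down the target namespace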
00:37:20.759 05:53:20 nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1

real	0m47.377s
user	2m56.856s
sys	0m19.316s

************************************
END TEST nvmf_ns_hotplug_stress
************************************

00:37:21.020 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode

************************************
START TEST nvmf_delete_subsystem
************************************

00:37:21.020 05:53:20 nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode
* Looking for test storage...
* Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:37:21.020 05:53:20 nvmf_delete_subsystem -- common/autotest_common.sh@1710-1711: lcov --version is captured (awk '{print $NF}') and gated with lt 1.15 2; scripts/common.sh cmp_versions splits both strings on '.-:' and compares them field by field, so 1 < 2 returns 0 and the pre-2.x lcov flag set is selected
00:37:21.020 05:53:20 nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:37:21.020 05:53:20 nvmf_delete_subsystem -- common/autotest_common.sh@1724-1725: LCOV_OPTS and LCOV are exported with those switches plus --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1
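The version gate above is just a dotted-field comparison. A minimal sketch of the '<' case, inferred from the cmp_versions trace (the real scripts/common.sh also handles the other operators; this reduction is an assumption):

lt() {
    local -a ver1 ver2
    local IFS=.-: v
    read -ra ver1 <<< "$1"                  # "1.15" -> (1 15)
    read -ra ver2 <<< "$2"                  # "2"    -> (2)
    for (( v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++ )); do
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # earlier field decides
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    done
    return 1                                # equal is not less-than
}

lt 1.15 2 && echo 'pre-2.x lcov'            # the exact check in the trace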
00:37:21.020 05:53:20 nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:37:21.020 05:53:20 nvmf_delete_subsystem -- nvmf/common.sh@7-22: uname -s is Linux; test constants are set: NVMF_PORT=4420, NVMF_SECOND_PORT=4421, NVMF_THIRD_PORT=4422, NVMF_IP_PREFIX=192.168.100, NVMF_IP_LEAST_ADDR=8, NVMF_TCP_IP_ADDRESS=127.0.0.1, NVMF_TRANSPORT_OPTS=, NVMF_SERIAL=SPDKISFASTANDAWESOME, NET_TYPE=phy, NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn, NVME_CONNECT='nvme connect'; nvme gen-hostnqn yields NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 and NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562
00:37:21.020-00:37:21.021 05:53:20-05:53:21 nvmf_delete_subsystem -- nvmf/common.sh@49 sources scripts/common.sh (shopt -s extglob) and /etc/opt/spdk-pkgdep/paths/export.sh; each re-source prepends the same /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin entries again, so the exported PATH is those tool directories (repeated) ahead of the standard system directories
00:37:21.021 05:53:21 nvmf_delete_subsystem -- nvmf/common.sh@51-53 -- # NVMF_APP_SHM_ID=0; build_nvmf_app_args
00:37:21.021 05:53:21 nvmf_delete_subsystem -- nvmf/common.sh@29-31 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF); NVMF_APP+=("${NO_HUGE[@]}")
00:37:21.021 05:53:21 nvmf_delete_subsystem -- nvmf/common.sh@33-34 -- # '[' 1 -eq 1 ']' holds for this --interrupt-mode job, so NVMF_APP+=(--interrupt-mode); have_pci_nics=0
00:37:21.021 05:53:21 nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit
00:37:21.021 05:53:21 nvmf_delete_subsystem -- nvmf/common.sh@469-476 -- # trap nvmftestfini SIGINT SIGTERM EXIT; prepare_net_devs (is_hw=no initially); remove_spdk_ns
00:37:21.021 05:53:21 nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]], so gather_supported_nvmf_pci_devs (xtrace disabled until it reports)
00:37:27.600 05:53:26 nvmf_delete_subsystem -- nvmf/common.sh@313-317 -- # local intel=0x8086 mellanox=0x15b3; pci_devs=(), pci_net_devs=() and pci_drivers are declared
00:37:27.600 05:53:26 nvmf_delete_subsystem -- nvmf/common.sh@319-344: the device-ID tables are filled in: e810=(0x1592 0x159b), x722=(0x37d2), mlx=(0xa2dc 0x1021 0xa2d6 0x101d 0x101b 0x1017 0x1019 0x1015 0x1013); since the transport is tcp (not rdma), pci_devs is set to the e810 list
00:37:27.600 05:53:26 nvmf_delete_subsystem -- nvmf/common.sh@361-367 -- # (( 2 == 0 )) is false, so each port is probed; echo 'Found 0000:af:00.0 (0x8086 - 0x159b)'
Found 0000:af:00.0 (0x8086 - 0x159b)
00:37:27.600 05:53:26 nvmf_delete_subsystem -- nvmf/common.sh@368-378: its driver is ice (neither unknown nor unbound) and 0x159b is not one of the Mellanox IDs that need special handling, so the port is kept
00:37:27.600 05:53:26 nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)'
Found 0000:af:00.1 (0x8086 - 0x159b)
00:37:27.600 05:53:26 nvmf_delete_subsystem -- nvmf/common.sh@368-378: same checks, same result for the second port
00:37:27.600 05:53:26 nvmf_delete_subsystem -- nvmf/common.sh@398-427: for each kept port the net devices under /sys/bus/pci/devices/$pci/net/ are globbed, confirmed up, and reduced to their basenames
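The glob the trace just performed is a plain sysfs walk. A simplified sketch using the two PCI addresses from this host (the real nvmf/common.sh also tracks the up state and driver binding; this loop only reproduces the discovery step):

for pci in 0000:af:00.0 0000:af:00.1; do
    for path in /sys/bus/pci/devices/$pci/net/*; do
        [[ -e $path ]] || continue          # no net device bound to this port
        dev=${path##*/}                     # basename, e.g. cvl_0_0
        echo "Found net devices under $pci: $dev"
    done
done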
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:37:27.601 Found net devices under 0000:af:00.0: cvl_0_0 00:37:27.601 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:27.601 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:27.601 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:27.601 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:27.601 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:27.601 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:27.601 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:27.601 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:27.601 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:37:27.601 Found net devices under 0000:af:00.1: cvl_0_1 00:37:27.601 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:27.601 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:27.601 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:37:27.601 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:27.601 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:27.601 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:27.601 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:27.601 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:27.601 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:27.601 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:27.601 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:27.601 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:27.601 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:27.601 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:27.601 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:27.601 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:27.601 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:27.601 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:27.601 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:27.601 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:27.601 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:27.601 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:27.601 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:27.601 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:27.601 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:27.601 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:27.601 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:27.601 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:27.601 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:27.601 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:27.601 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.354 ms 00:37:27.601 00:37:27.601 --- 10.0.0.2 ping statistics --- 00:37:27.601 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:27.601 rtt min/avg/max/mdev = 0.354/0.354/0.354/0.000 ms 00:37:27.601 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:27.601 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:27.601 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.172 ms 00:37:27.601 00:37:27.601 --- 10.0.0.1 ping statistics --- 00:37:27.601 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:27.601 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:37:27.601 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:27.601 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:37:27.601 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:27.601 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:27.601 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:27.601 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:27.601 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:27.601 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:27.601 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:27.601 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:37:27.601 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:27.601 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:27.601 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:27.601 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=560793 00:37:27.601 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 560793 00:37:27.601 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:37:27.601 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 560793 ']' 00:37:27.601 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:27.601 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:27.601 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:27.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
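Annotation: the topology nvmf_tcp_init built above, condensed. The target-side port cvl_0_0 is moved into a fresh namespace cvl_0_0_ns_spdk and addressed 10.0.0.2/24, the initiator port cvl_0_1 stays in the root namespace as 10.0.0.1/24, TCP port 4420 is opened on the initiator interface, and one ping in each direction proves reachability before the target starts. Every command here is taken from the trace:

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target NIC leaves the root ns
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP listener port
  ping -c 1 10.0.0.2                                   # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> root ns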
00:37:27.601 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:27.601 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:27.601 [2024-12-13 05:53:26.902202] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:27.601 [2024-12-13 05:53:26.903187] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:37:27.601 [2024-12-13 05:53:26.903229] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:27.601 [2024-12-13 05:53:26.981896] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:37:27.601 [2024-12-13 05:53:27.004209] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:27.601 [2024-12-13 05:53:27.004244] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:27.601 [2024-12-13 05:53:27.004251] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:27.601 [2024-12-13 05:53:27.004257] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:27.601 [2024-12-13 05:53:27.004261] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:27.601 [2024-12-13 05:53:27.005406] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:37:27.601 [2024-12-13 05:53:27.005407] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:37:27.601 [2024-12-13 05:53:27.069466] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:27.601 [2024-12-13 05:53:27.070068] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:37:27.601 [2024-12-13 05:53:27.070255] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
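Annotation: the target start above, distilled. nvmf_tgt runs inside the target namespace with two cores (-m 0x3), every tracepoint group enabled (-e 0xFFFF), shared-memory ID 0 (-i 0, matching --file-prefix=spdk0 in the EAL arguments), and --interrupt-mode, which is why both reactors and all spdk_threads report intr mode in the notices. waitforlisten then blocks until the RPC socket answers; the polling loop below is a sketch of that helper, not its actual body:

  ip netns exec cvl_0_0_ns_spdk \
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &
  nvmfpid=$!
  # Poll until the app answers on /var/tmp/spdk.sock (roughly what waitforlisten does).
  until ./scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
    sleep 0.5
  done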
00:37:27.601 05:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:27.601 05:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:37:27.601 05:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:27.601 05:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:27.601 05:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:27.601 05:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:27.601 05:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:27.601 05:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:27.601 05:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:27.601 [2024-12-13 05:53:27.138188] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:27.601 05:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:27.602 05:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:37:27.602 05:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:27.602 05:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:27.602 05:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:27.602 05:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:27.602 05:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:27.602 05:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:27.602 [2024-12-13 05:53:27.166579] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:27.602 05:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:27.602 05:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:37:27.602 05:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:27.602 05:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:27.602 NULL1 00:37:27.602 05:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:27.602 05:53:27 
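Annotation: the rpc_cmd calls above provision the target end to end: a TCP transport, subsystem cnode1 open to any host (-a) with a 10-namespace cap (-m 10), a listener on 10.0.0.2:4420, and a null bdev to serve I/O from. As direct rpc.py invocations with the flags copied from the trace they would read:

  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
      -a -s SPDK00000000000001 -m 10               # allow any host, serial, max namespaces
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py bdev_null_create NULL1 1000 512  # 1000 MiB null bdev, 512 B blocks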
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:37:27.602 05:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:27.602 05:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:27.602 Delay0 00:37:27.602 05:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:27.602 05:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:27.602 05:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:27.602 05:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:27.602 05:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:27.602 05:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=560912 00:37:27.602 05:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:37:27.602 05:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:37:27.602 [2024-12-13 05:53:27.277025] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
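Annotation: this is the crux of the test. The delay bdev stacked on NULL1 holds every I/O for about a second (all four latency knobs set to 1,000,000 µs, the rpc.py average and p99 read/write latencies), so when nvmf_delete_subsystem fires two seconds into the five-second perf run there are guaranteed inflight commands. The storm of "Read/Write completed with error (sct=0, sc=8)" that follows is those commands completing with the generic NVMe status Command Aborted due to SQ Deletion (sc=0x08), and "starting I/O failed: -6" is -ENXIO for submissions attempted after the qpairs are torn down. The sequence as standalone commands, parameters taken from the trace:

  ./scripts/rpc.py bdev_delay_create -b NULL1 -d Delay0 \
      -r 1000000 -t 1000000 -w 1000000 -n 1000000   # avg/p99 read, avg/p99 write, usec
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  ./build/bin/spdk_nvme_perf -c 0xC \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &     # 70% reads, qd 128, 512 B I/O
  perf_pid=$!
  sleep 2
  ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1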
00:37:29.501 05:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:29.501 05:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:29.501 05:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:29.501 Read completed with error (sct=0, sc=8) 00:37:29.501 starting I/O failed: -6 00:37:29.501 Read completed with error (sct=0, sc=8) 00:37:29.501 Write completed with error (sct=0, sc=8) 00:37:29.501 Read completed with error (sct=0, sc=8) 00:37:29.501 Read completed with error (sct=0, sc=8) 00:37:29.501 starting I/O failed: -6 00:37:29.501 Read completed with error (sct=0, sc=8) 00:37:29.501 Read completed with error (sct=0, sc=8) 00:37:29.501 Read completed with error (sct=0, sc=8) 00:37:29.501 Write completed with error (sct=0, sc=8) 00:37:29.501 starting I/O failed: -6 00:37:29.501 Read completed with error (sct=0, sc=8) 00:37:29.501 Read completed with error (sct=0, sc=8) 00:37:29.501 Write completed with error (sct=0, sc=8) 00:37:29.501 Read completed with error (sct=0, sc=8) 00:37:29.501 starting I/O failed: -6 00:37:29.501 Read completed with error (sct=0, sc=8) 00:37:29.501 Read completed with error (sct=0, sc=8) 00:37:29.501 Read completed with error (sct=0, sc=8) 00:37:29.501 Read completed with error (sct=0, sc=8) 00:37:29.501 starting I/O failed: -6 00:37:29.501 Write completed with error (sct=0, sc=8) 00:37:29.501 Read completed with error (sct=0, sc=8) 00:37:29.501 Read completed with error (sct=0, sc=8) 00:37:29.501 Read completed with error (sct=0, sc=8) 00:37:29.501 starting I/O failed: -6 00:37:29.501 Read completed with error (sct=0, sc=8) 00:37:29.501 Read completed with error (sct=0, sc=8) 00:37:29.501 Write completed with error (sct=0, sc=8) 00:37:29.501 Read completed with error (sct=0, sc=8) 00:37:29.501 starting I/O failed: -6 00:37:29.501 Write completed with error (sct=0, sc=8) 00:37:29.501 Read completed with error (sct=0, sc=8) 00:37:29.501 Read completed with error (sct=0, sc=8) 00:37:29.501 Read completed with error (sct=0, sc=8) 00:37:29.502 starting I/O failed: -6 00:37:29.502 Read completed with error (sct=0, sc=8) 00:37:29.502 Read completed with error (sct=0, sc=8) 00:37:29.502 Read completed with error (sct=0, sc=8) 00:37:29.502 Read completed with error (sct=0, sc=8) 00:37:29.502 starting I/O failed: -6 00:37:29.502 Read completed with error (sct=0, sc=8) 00:37:29.502 Read completed with error (sct=0, sc=8) 00:37:29.502 Write completed with error (sct=0, sc=8) 00:37:29.502 Write completed with error (sct=0, sc=8) 00:37:29.502 starting I/O failed: -6 00:37:29.502 Read completed with error (sct=0, sc=8) 00:37:29.502 Read completed with error (sct=0, sc=8) 00:37:29.502 Read completed with error (sct=0, sc=8) 00:37:29.502 Read completed with error (sct=0, sc=8) 00:37:29.502 starting I/O failed: -6 00:37:29.502 Write completed with error (sct=0, sc=8) 00:37:29.502 Write completed with error (sct=0, sc=8) 00:37:29.502 Read completed with error (sct=0, sc=8) 00:37:29.502 Read completed with error (sct=0, sc=8) 00:37:29.502 starting I/O failed: -6 00:37:29.502 Read completed with error (sct=0, sc=8) 00:37:29.502 Write completed with error (sct=0, sc=8) 00:37:29.502 starting I/O failed: -6 00:37:29.502 Write completed with error (sct=0, sc=8) 00:37:29.502 Write completed with error (sct=0, sc=8) 00:37:29.502 
starting I/O failed: -6 00:37:29.502 Read completed with error (sct=0, sc=8) 00:37:29.502 Write completed with error (sct=0, sc=8) 00:37:29.502 starting I/O failed: -6 00:37:29.502 Write completed with error (sct=0, sc=8) 00:37:29.502 Read completed with error (sct=0, sc=8) 00:37:29.502 starting I/O failed: -6 00:37:29.502 Read completed with error (sct=0, sc=8) 00:37:29.502 Read completed with error (sct=0, sc=8) 00:37:29.502 starting I/O failed: -6 00:37:29.502 Write completed with error (sct=0, sc=8) 00:37:29.502 Read completed with error (sct=0, sc=8) 00:37:29.502 starting I/O failed: -6 00:37:29.502 Read completed with error (sct=0, sc=8) 00:37:29.502 Write completed with error (sct=0, sc=8) 00:37:29.502 starting I/O failed: -6 00:37:29.502 Read completed with error (sct=0, sc=8) 00:37:29.502 Write completed with error (sct=0, sc=8) 00:37:29.502 starting I/O failed: -6 00:37:29.502 Read completed with error (sct=0, sc=8) 00:37:29.502 Read completed with error (sct=0, sc=8) 00:37:29.502 starting I/O failed: -6 00:37:29.502 Read completed with error (sct=0, sc=8) 00:37:29.502 Read completed with error (sct=0, sc=8) 00:37:29.502 starting I/O failed: -6 00:37:29.502 Read completed with error (sct=0, sc=8) 00:37:29.502 Write completed with error (sct=0, sc=8) 00:37:29.502 starting I/O failed: -6 00:37:29.502 Write completed with error (sct=0, sc=8) 00:37:29.502 Read completed with error (sct=0, sc=8) 00:37:29.502 starting I/O failed: -6 00:37:29.502 Read completed with error (sct=0, sc=8) 00:37:29.502 Read completed with error (sct=0, sc=8) 00:37:29.502 starting I/O failed: -6 00:37:29.502 Read completed with error (sct=0, sc=8) 00:37:29.502 Read completed with error (sct=0, sc=8) 00:37:29.502 starting I/O failed: -6 00:37:29.502 Read completed with error (sct=0, sc=8) 00:37:29.502 Read completed with error (sct=0, sc=8) 00:37:29.502 starting I/O failed: -6 00:37:29.502 Read completed with error (sct=0, sc=8) 00:37:29.502 Write completed with error (sct=0, sc=8) 00:37:29.502 starting I/O failed: -6 00:37:29.502 Write completed with error (sct=0, sc=8) 00:37:29.502 Read completed with error (sct=0, sc=8) 00:37:29.502 starting I/O failed: -6 00:37:29.502 Read completed with error (sct=0, sc=8) 00:37:29.502 Read completed with error (sct=0, sc=8) 00:37:29.502 starting I/O failed: -6 00:37:29.502 Read completed with error (sct=0, sc=8) 00:37:29.502 Write completed with error (sct=0, sc=8) 00:37:29.502 starting I/O failed: -6 00:37:29.502 Read completed with error (sct=0, sc=8) 00:37:29.502 Write completed with error (sct=0, sc=8) 00:37:29.502 starting I/O failed: -6 00:37:29.502 Write completed with error (sct=0, sc=8) 00:37:29.502 Write completed with error (sct=0, sc=8) 00:37:29.502 starting I/O failed: -6 00:37:29.502 Read completed with error (sct=0, sc=8) 00:37:29.502 Write completed with error (sct=0, sc=8) 00:37:29.502 starting I/O failed: -6 00:37:29.502 Write completed with error (sct=0, sc=8) 00:37:29.502 Read completed with error (sct=0, sc=8) 00:37:29.502 starting I/O failed: -6 00:37:29.502 Read completed with error (sct=0, sc=8) 00:37:29.502 Read completed with error (sct=0, sc=8) 00:37:29.502 starting I/O failed: -6 00:37:29.502 Read completed with error (sct=0, sc=8) 00:37:29.502 Write completed with error (sct=0, sc=8) 00:37:29.502 starting I/O failed: -6 00:37:29.502 Read completed with error (sct=0, sc=8) 00:37:29.502 Read completed with error (sct=0, sc=8) 00:37:29.502 starting I/O failed: -6 00:37:29.502 starting I/O failed: -6 00:37:29.502 starting I/O failed: -6 
00:37:29.502 starting I/O failed: -6 00:37:29.502 starting I/O failed: -6 00:37:29.502 starting I/O failed: -6 00:37:29.502 starting I/O failed: -6 00:37:29.502 starting I/O failed: -6 00:37:29.502 Write completed with error (sct=0, sc=8) 00:37:29.502 starting I/O failed: -6 00:37:29.502 Read completed with error (sct=0, sc=8) 00:37:29.502 Write completed with error (sct=0, sc=8) 00:37:29.502 Write completed with error (sct=0, sc=8) 00:37:29.502 Read completed with error (sct=0, sc=8) 00:37:29.502 starting I/O failed: -6 00:37:29.502 Read completed with error (sct=0, sc=8) 00:37:29.502 Write completed with error (sct=0, sc=8) 00:37:29.502 Write completed with error (sct=0, sc=8) 00:37:29.502 Read completed with error (sct=0, sc=8) 00:37:29.502 starting I/O failed: -6 00:37:29.502 Read completed with error (sct=0, sc=8) 00:37:29.502 Write completed with error (sct=0, sc=8) 00:37:29.502 Read completed with error (sct=0, sc=8) 00:37:29.502 Read completed with error (sct=0, sc=8) 00:37:29.502 starting I/O failed: -6 00:37:29.502 Write completed with error (sct=0, sc=8) 00:37:29.502 Read completed with error (sct=0, sc=8) 00:37:29.502 Write completed with error (sct=0, sc=8) 00:37:29.502 Read completed with error (sct=0, sc=8) 00:37:29.502 starting I/O failed: -6 00:37:29.502 Write completed with error (sct=0, sc=8) 00:37:29.502 Read completed with error (sct=0, sc=8) 00:37:29.502 Read completed with error (sct=0, sc=8) 00:37:29.502 Read completed with error (sct=0, sc=8) 00:37:29.502 starting I/O failed: -6 00:37:29.502 Read completed with error (sct=0, sc=8) 00:37:29.502 Read completed with error (sct=0, sc=8) 00:37:29.502 Read completed with error (sct=0, sc=8) 00:37:29.502 Read completed with error (sct=0, sc=8) 00:37:29.502 starting I/O failed: -6 00:37:29.502 Read completed with error (sct=0, sc=8) 00:37:29.502 Read completed with error (sct=0, sc=8) 00:37:29.502 Read completed with error (sct=0, sc=8) 00:37:29.502 Read completed with error (sct=0, sc=8) 00:37:29.502 starting I/O failed: -6 00:37:29.502 Read completed with error (sct=0, sc=8) 00:37:29.502 Read completed with error (sct=0, sc=8) 00:37:29.502 Read completed with error (sct=0, sc=8) 00:37:29.502 Read completed with error (sct=0, sc=8) 00:37:29.502 starting I/O failed: -6 00:37:29.502 Write completed with error (sct=0, sc=8) 00:37:29.502 Read completed with error (sct=0, sc=8) 00:37:29.502 Write completed with error (sct=0, sc=8) 00:37:29.502 Write completed with error (sct=0, sc=8) 00:37:29.502 starting I/O failed: -6 00:37:29.502 Read completed with error (sct=0, sc=8) 00:37:29.502 Read completed with error (sct=0, sc=8) 00:37:29.502 [2024-12-13 05:53:29.454403] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f72ec000c80 is same with the state(6) to be set 00:37:29.502 Read completed with error (sct=0, sc=8) 00:37:29.502 Read completed with error (sct=0, sc=8) 00:37:29.502 Read completed with error (sct=0, sc=8) 00:37:29.502 Read completed with error (sct=0, sc=8) 00:37:29.502 Read completed with error (sct=0, sc=8) 00:37:29.502 Read completed with error (sct=0, sc=8) 00:37:29.502 Read completed with error (sct=0, sc=8) 00:37:29.502 Read completed with error (sct=0, sc=8) 00:37:29.502 Write completed with error (sct=0, sc=8) 00:37:29.502 Read completed with error (sct=0, sc=8) 00:37:29.502 Write completed with error (sct=0, sc=8) 00:37:29.502 Read completed with error (sct=0, sc=8) 00:37:29.502 Read completed with error (sct=0, sc=8) 00:37:29.502 Read completed with error (sct=0, sc=8) 
00:37:29.502 Read completed with error (sct=0, sc=8) 00:37:29.502 Write completed with error (sct=0, sc=8) 00:37:29.502 Read completed with error (sct=0, sc=8) 00:37:29.502 Read completed with error (sct=0, sc=8) 00:37:29.502 Read completed with error (sct=0, sc=8) 00:37:29.502 Read completed with error (sct=0, sc=8) 00:37:29.502 Write completed with error (sct=0, sc=8) 00:37:29.502 Write completed with error (sct=0, sc=8) 00:37:29.502 Read completed with error (sct=0, sc=8) 00:37:29.502 Write completed with error (sct=0, sc=8) 00:37:29.502 Read completed with error (sct=0, sc=8) 00:37:29.502 Read completed with error (sct=0, sc=8) 00:37:29.502 Read completed with error (sct=0, sc=8) 00:37:29.502 Read completed with error (sct=0, sc=8) 00:37:29.502 Read completed with error (sct=0, sc=8) 00:37:29.502 Read completed with error (sct=0, sc=8) 00:37:29.502 Read completed with error (sct=0, sc=8) 00:37:29.502 Read completed with error (sct=0, sc=8) 00:37:29.502 Read completed with error (sct=0, sc=8) 00:37:29.502 Read completed with error (sct=0, sc=8) 00:37:29.502 Write completed with error (sct=0, sc=8) 00:37:29.502 Read completed with error (sct=0, sc=8) 00:37:29.502 Read completed with error (sct=0, sc=8) 00:37:29.502 Read completed with error (sct=0, sc=8) 00:37:29.502 Read completed with error (sct=0, sc=8) 00:37:29.502 Read completed with error (sct=0, sc=8) 00:37:29.502 Write completed with error (sct=0, sc=8) 00:37:29.502 Read completed with error (sct=0, sc=8) 00:37:29.502 Read completed with error (sct=0, sc=8) 00:37:29.502 Read completed with error (sct=0, sc=8) 00:37:29.502 Read completed with error (sct=0, sc=8) 00:37:29.502 Write completed with error (sct=0, sc=8) 00:37:29.502 Read completed with error (sct=0, sc=8) 00:37:29.502 Write completed with error (sct=0, sc=8) 00:37:29.502 Read completed with error (sct=0, sc=8) 00:37:30.435 [2024-12-13 05:53:30.413620] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082260 is same with the state(6) to be set 00:37:30.693 Read completed with error (sct=0, sc=8) 00:37:30.693 Read completed with error (sct=0, sc=8) 00:37:30.693 Read completed with error (sct=0, sc=8) 00:37:30.693 Read completed with error (sct=0, sc=8) 00:37:30.693 Read completed with error (sct=0, sc=8) 00:37:30.693 Read completed with error (sct=0, sc=8) 00:37:30.693 Write completed with error (sct=0, sc=8) 00:37:30.693 Write completed with error (sct=0, sc=8) 00:37:30.693 Write completed with error (sct=0, sc=8) 00:37:30.693 Read completed with error (sct=0, sc=8) 00:37:30.693 Read completed with error (sct=0, sc=8) 00:37:30.693 Read completed with error (sct=0, sc=8) 00:37:30.693 Read completed with error (sct=0, sc=8) 00:37:30.693 Read completed with error (sct=0, sc=8) 00:37:30.693 Read completed with error (sct=0, sc=8) 00:37:30.693 Read completed with error (sct=0, sc=8) 00:37:30.693 Read completed with error (sct=0, sc=8) 00:37:30.693 Read completed with error (sct=0, sc=8) 00:37:30.693 Write completed with error (sct=0, sc=8) 00:37:30.693 Read completed with error (sct=0, sc=8) 00:37:30.693 Read completed with error (sct=0, sc=8) 00:37:30.693 Read completed with error (sct=0, sc=8) 00:37:30.693 Read completed with error (sct=0, sc=8) 00:37:30.693 Write completed with error (sct=0, sc=8) 00:37:30.693 Read completed with error (sct=0, sc=8) 00:37:30.693 Write completed with error (sct=0, sc=8) 00:37:30.693 Read completed with error (sct=0, sc=8) 00:37:30.693 Read completed with error (sct=0, sc=8) 00:37:30.693 Read completed with 
error (sct=0, sc=8) 00:37:30.693 Write completed with error (sct=0, sc=8) 00:37:30.693 Write completed with error (sct=0, sc=8) 00:37:30.693 Read completed with error (sct=0, sc=8) 00:37:30.693 Read completed with error (sct=0, sc=8) 00:37:30.693 Write completed with error (sct=0, sc=8) 00:37:30.693 Read completed with error (sct=0, sc=8) 00:37:30.693 Read completed with error (sct=0, sc=8) 00:37:30.693 Read completed with error (sct=0, sc=8) 00:37:30.693 Read completed with error (sct=0, sc=8) 00:37:30.693 Read completed with error (sct=0, sc=8) 00:37:30.693 [2024-12-13 05:53:30.458599] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1085140 is same with the state(6) to be set 00:37:30.693 Write completed with error (sct=0, sc=8) 00:37:30.693 Read completed with error (sct=0, sc=8) 00:37:30.693 Read completed with error (sct=0, sc=8) 00:37:30.693 Read completed with error (sct=0, sc=8) 00:37:30.693 Read completed with error (sct=0, sc=8) 00:37:30.693 Read completed with error (sct=0, sc=8) 00:37:30.693 Read completed with error (sct=0, sc=8) 00:37:30.693 Read completed with error (sct=0, sc=8) 00:37:30.693 Read completed with error (sct=0, sc=8) 00:37:30.693 Read completed with error (sct=0, sc=8) 00:37:30.693 Write completed with error (sct=0, sc=8) 00:37:30.693 Read completed with error (sct=0, sc=8) 00:37:30.693 Read completed with error (sct=0, sc=8) 00:37:30.693 Read completed with error (sct=0, sc=8) 00:37:30.693 Write completed with error (sct=0, sc=8) 00:37:30.693 Write completed with error (sct=0, sc=8) 00:37:30.693 [2024-12-13 05:53:30.458709] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f72ec00d800 is same with the state(6) to be set 00:37:30.693 Write completed with error (sct=0, sc=8) 00:37:30.693 Read completed with error (sct=0, sc=8) 00:37:30.693 Read completed with error (sct=0, sc=8) 00:37:30.693 Read completed with error (sct=0, sc=8) 00:37:30.693 Read completed with error (sct=0, sc=8) 00:37:30.693 Read completed with error (sct=0, sc=8) 00:37:30.693 Write completed with error (sct=0, sc=8) 00:37:30.693 Write completed with error (sct=0, sc=8) 00:37:30.693 Read completed with error (sct=0, sc=8) 00:37:30.693 Read completed with error (sct=0, sc=8) 00:37:30.693 Read completed with error (sct=0, sc=8) 00:37:30.693 Read completed with error (sct=0, sc=8) 00:37:30.693 Read completed with error (sct=0, sc=8) 00:37:30.693 Write completed with error (sct=0, sc=8) 00:37:30.693 Write completed with error (sct=0, sc=8) 00:37:30.693 Read completed with error (sct=0, sc=8) 00:37:30.693 Read completed with error (sct=0, sc=8) 00:37:30.693 Write completed with error (sct=0, sc=8) 00:37:30.693 Read completed with error (sct=0, sc=8) 00:37:30.693 Read completed with error (sct=0, sc=8) 00:37:30.693 Read completed with error (sct=0, sc=8) 00:37:30.693 Read completed with error (sct=0, sc=8) 00:37:30.693 Read completed with error (sct=0, sc=8) 00:37:30.693 Read completed with error (sct=0, sc=8) 00:37:30.693 Write completed with error (sct=0, sc=8) 00:37:30.693 Read completed with error (sct=0, sc=8) 00:37:30.693 Read completed with error (sct=0, sc=8) 00:37:30.693 Write completed with error (sct=0, sc=8) 00:37:30.693 Read completed with error (sct=0, sc=8) 00:37:30.693 Read completed with error (sct=0, sc=8) 00:37:30.693 Write completed with error (sct=0, sc=8) 00:37:30.693 Read completed with error (sct=0, sc=8) 00:37:30.693 Read completed with error (sct=0, sc=8) 00:37:30.693 Read completed with error (sct=0, 
sc=8) 00:37:30.693 Read completed with error (sct=0, sc=8) 00:37:30.693 Read completed with error (sct=0, sc=8) 00:37:30.693 Read completed with error (sct=0, sc=8) 00:37:30.693 Write completed with error (sct=0, sc=8) 00:37:30.693 [2024-12-13 05:53:30.458894] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9920 is same with the state(6) to be set 00:37:30.693 Read completed with error (sct=0, sc=8) 00:37:30.693 Read completed with error (sct=0, sc=8) 00:37:30.693 Read completed with error (sct=0, sc=8) 00:37:30.693 Read completed with error (sct=0, sc=8) 00:37:30.693 Read completed with error (sct=0, sc=8) 00:37:30.693 Write completed with error (sct=0, sc=8) 00:37:30.693 Read completed with error (sct=0, sc=8) 00:37:30.693 Read completed with error (sct=0, sc=8) 00:37:30.693 Read completed with error (sct=0, sc=8) 00:37:30.693 Write completed with error (sct=0, sc=8) 00:37:30.693 Read completed with error (sct=0, sc=8) 00:37:30.693 Read completed with error (sct=0, sc=8) 00:37:30.693 Read completed with error (sct=0, sc=8) 00:37:30.693 Read completed with error (sct=0, sc=8) 00:37:30.693 Read completed with error (sct=0, sc=8) 00:37:30.693 Read completed with error (sct=0, sc=8) 00:37:30.693 Read completed with error (sct=0, sc=8) 00:37:30.693 [2024-12-13 05:53:30.459528] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f72ec00d060 is same with the state(6) to be set 00:37:30.693 Initializing NVMe Controllers 00:37:30.693 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:37:30.693 Controller IO queue size 128, less than required. 00:37:30.693 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:37:30.693 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:37:30.693 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:37:30.693 Initialization complete. Launching workers. 
00:37:30.693 ======================================================== 00:37:30.693 Latency(us) 00:37:30.693 Device Information : IOPS MiB/s Average min max 00:37:30.693 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 181.69 0.09 915176.39 368.64 1013616.51 00:37:30.693 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 155.38 0.08 929665.02 255.56 1013940.26 00:37:30.693 ======================================================== 00:37:30.693 Total : 337.06 0.16 921855.24 255.56 1013940.26 00:37:30.693 00:37:30.693 [2024-12-13 05:53:30.460328] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082260 (9): Bad file descriptor 00:37:30.693 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:37:30.693 05:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:30.693 05:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:37:30.693 05:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 560912 00:37:30.693 05:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:37:31.260 05:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:37:31.260 05:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 560912 00:37:31.260 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (560912) - No such process 00:37:31.260 05:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 560912 00:37:31.260 05:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:37:31.260 05:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 560912 00:37:31.260 05:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:37:31.260 05:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:31.260 05:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:37:31.260 05:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:31.260 05:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 560912 00:37:31.260 05:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:37:31.260 05:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:37:31.260 05:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:37:31.260 05:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:37:31.260 05:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:37:31.260 05:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:31.260 05:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:31.260 05:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:31.260 05:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:31.260 05:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:31.260 05:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:31.260 [2024-12-13 05:53:30.994434] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:31.260 05:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:31.260 05:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:31.260 05:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:31.260 05:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:31.260 05:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:31.260 05:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=561482 00:37:31.260 05:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:37:31.260 05:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:37:31.260 05:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 561482 00:37:31.260 05:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:31.260 [2024-12-13 05:53:31.073357] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
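Annotation: after proving that waiting on the killed perf PID fails as expected (the NOT wait block above), the test recreates the subsystem, re-adds the listener and Delay0 namespace, and runs perf again for three seconds, this time letting it drain. The kill -0 / sleep 0.5 retries that follow implement a bounded completion wait, roughly as sketched below (the exact loop structure in delete_subsystem.sh may differ); note in the summary further down that average latency lands near 1,002,608 µs per I/O, i.e. the delay bdev's one-second injected latency plus a few milliseconds of real transport time.

  delay=0
  while kill -0 "$perf_pid" 2>/dev/null; do     # perf still running?
    (( delay++ > 20 )) && exit 1                # give up after ~10 s
    sleep 0.5
  done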
00:37:31.517 05:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:31.517 05:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 561482 00:37:31.517 05:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:32.083 05:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:32.083 05:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 561482 00:37:32.083 05:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:32.648 05:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:32.648 05:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 561482 00:37:32.648 05:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:33.214 05:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:33.214 05:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 561482 00:37:33.214 05:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:33.779 05:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:33.779 05:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 561482 00:37:33.779 05:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:34.037 05:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:34.037 05:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 561482 00:37:34.037 05:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:34.296 Initializing NVMe Controllers 00:37:34.296 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:37:34.296 Controller IO queue size 128, less than required. 00:37:34.296 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:37:34.296 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:37:34.296 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:37:34.296 Initialization complete. Launching workers. 
00:37:34.296 ======================================================== 00:37:34.296 Latency(us) 00:37:34.296 Device Information : IOPS MiB/s Average min max 00:37:34.296 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002608.13 1000141.54 1042988.41 00:37:34.296 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004888.35 1000163.94 1041390.22 00:37:34.296 ======================================================== 00:37:34.296 Total : 256.00 0.12 1003748.24 1000141.54 1042988.41 00:37:34.296 00:37:34.555 05:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:34.555 05:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 561482 00:37:34.555 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (561482) - No such process 00:37:34.555 05:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 561482 00:37:34.555 05:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:37:34.555 05:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:37:34.555 05:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:34.555 05:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:37:34.555 05:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:34.555 05:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:37:34.555 05:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:34.555 05:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:34.555 rmmod nvme_tcp 00:37:34.555 rmmod nvme_fabrics 00:37:34.814 rmmod nvme_keyring 00:37:34.814 05:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:34.814 05:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:37:34.814 05:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:37:34.814 05:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 560793 ']' 00:37:34.814 05:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 560793 00:37:34.814 05:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 560793 ']' 00:37:34.814 05:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 560793 00:37:34.814 05:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:37:34.814 05:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:34.814 05:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 560793 00:37:34.814 05:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:34.814 05:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:34.814 05:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 560793' 00:37:34.814 killing process with pid 560793 00:37:34.814 05:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 560793 00:37:34.814 05:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 560793 00:37:34.814 05:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:34.814 05:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:34.814 05:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:34.814 05:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:37:34.814 05:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:37:34.814 05:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:34.814 05:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:37:35.073 05:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:35.073 05:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:35.073 05:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:35.073 05:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:35.073 05:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:36.977 05:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:36.977 00:37:36.977 real 0m16.089s 00:37:36.977 user 0m26.258s 00:37:36.977 sys 0m5.994s 00:37:36.977 05:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:36.977 05:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:36.977 ************************************ 00:37:36.977 END TEST nvmf_delete_subsystem 00:37:36.977 ************************************ 00:37:36.977 05:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:37:36.977 05:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:37:36.977 05:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:37:36.977 05:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:36.977 ************************************ 00:37:36.977 START TEST nvmf_host_management 00:37:36.977 ************************************ 00:37:36.977 05:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:37:37.237 * Looking for test storage... 00:37:37.237 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:37.237 05:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:37:37.238 05:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:37:37.238 05:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:37:37.238 05:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:37:37.238 05:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:37.238 05:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:37.238 05:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:37.238 05:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:37:37.238 05:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:37:37.238 05:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:37:37.238 05:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:37:37.238 05:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:37:37.238 05:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:37:37.238 05:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:37:37.238 05:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:37.238 05:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:37:37.238 05:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:37:37.238 05:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:37.238 05:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:37.238 05:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:37:37.238 05:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:37:37.238 05:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:37.238 05:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:37:37.238 05:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:37:37.238 05:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:37:37.238 05:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:37:37.238 05:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:37.238 05:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:37:37.238 05:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:37:37.238 05:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:37.238 05:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:37.238 05:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:37:37.238 05:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:37.238 05:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:37:37.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:37.238 --rc genhtml_branch_coverage=1 00:37:37.238 --rc genhtml_function_coverage=1 00:37:37.238 --rc genhtml_legend=1 00:37:37.238 --rc geninfo_all_blocks=1 00:37:37.238 --rc geninfo_unexecuted_blocks=1 00:37:37.238 00:37:37.238 ' 00:37:37.238 05:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:37:37.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:37.238 --rc genhtml_branch_coverage=1 00:37:37.238 --rc genhtml_function_coverage=1 00:37:37.238 --rc genhtml_legend=1 00:37:37.238 --rc geninfo_all_blocks=1 00:37:37.238 --rc geninfo_unexecuted_blocks=1 00:37:37.238 00:37:37.238 ' 00:37:37.238 05:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:37:37.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:37.238 --rc genhtml_branch_coverage=1 00:37:37.238 --rc genhtml_function_coverage=1 00:37:37.238 --rc genhtml_legend=1 00:37:37.238 --rc geninfo_all_blocks=1 00:37:37.238 --rc geninfo_unexecuted_blocks=1 00:37:37.238 00:37:37.238 ' 00:37:37.238 05:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:37:37.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:37.238 --rc genhtml_branch_coverage=1 00:37:37.238 --rc genhtml_function_coverage=1 00:37:37.238 --rc genhtml_legend=1 
00:37:37.238 --rc geninfo_all_blocks=1 00:37:37.238 --rc geninfo_unexecuted_blocks=1 00:37:37.238 00:37:37.238 ' 00:37:37.238 05:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:37.238 05:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:37:37.238 05:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:37.238 05:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:37.238 05:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:37.238 05:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:37.238 05:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:37.238 05:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:37.238 05:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:37.238 05:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:37.238 05:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:37.238 05:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:37.238 05:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:37:37.238 05:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:37:37.238 05:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:37.238 05:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:37.238 05:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:37.238 05:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:37.238 05:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:37.238 05:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:37:37.238 05:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:37.238 05:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:37.238 05:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:37.238 05:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
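The `lt 1.15 2` check traced a few entries back compares version strings component-wise, splitting on dots, dashes, and colons, rather than lexically. A self-contained sketch of the same technique (simplified to numeric components only; the harness's real cmp_versions also handles other comparison operators):

# Succeed when version $1 sorts strictly before version $2,
# comparing numeric components left to right, padding with zeros.
version_lt() {
    local -a v1 v2
    IFS='.-:' read -ra v1 <<< "$1"
    IFS='.-:' read -ra v2 <<< "$2"
    local i a b n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < n; i++ )); do
        a=${v1[i]:-0} b=${v2[i]:-0}
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1   # equal versions are not "less than"
}
version_lt 1.15 2 && echo 'lcov predates 2.x'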
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:37.238 05:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:37.238 05:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:37.238 05:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:37:37.238 05:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:37.238 05:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:37:37.238 05:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:37.238 05:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:37.238 05:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:37.238 05:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:37.239 05:53:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:37.239 05:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:37.239 05:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:37.239 05:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:37.239 05:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:37.239 05:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:37.239 05:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:37:37.239 05:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:37:37.239 05:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:37:37.239 05:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:37.239 05:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:37.239 05:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:37.239 05:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:37.239 05:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:37.239 05:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:37.239 05:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:37.239 05:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:37.239 05:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:37.239 05:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:37.239 05:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:37:37.239 05:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:43.888 05:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:43.888 05:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:37:43.888 05:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:43.888 05:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:43.888 05:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:43.888 05:53:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:43.888 05:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:43.888 05:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:37:43.888 05:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:43.888 05:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:37:43.888 05:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:37:43.888 05:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:37:43.888 05:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:37:43.888 05:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:37:43.888 05:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:37:43.888 05:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:43.888 05:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:43.888 05:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:43.888 05:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:43.888 05:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:43.888 05:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:43.888 05:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:43.888 05:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:43.888 05:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:43.888 05:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:43.888 05:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:43.888 05:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:43.888 05:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:43.888 05:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:43.888 05:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:43.888 05:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management 
-- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:43.888 05:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:43.888 05:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:43.888 05:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:43.888 05:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:37:43.888 Found 0000:af:00.0 (0x8086 - 0x159b) 00:37:43.888 05:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:43.888 05:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:43.888 05:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:43.888 05:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:43.888 05:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:43.888 05:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:43.888 05:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:37:43.888 Found 0000:af:00.1 (0x8086 - 0x159b) 00:37:43.888 05:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:43.888 05:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:43.888 05:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:43.888 05:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:43.888 05:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:43.888 05:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:43.888 05:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:43.888 05:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:43.889 05:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:43.889 05:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:43.889 05:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:43.889 05:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:43.889 05:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:43.889 05:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 
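The discovery loop here resolves each supported PCI function to its kernel network interface with a sysfs glob and keeps only links that are up. A minimal sketch of that lookup (the operstate test is an assumption on my part; the trace only shows an `up == up` comparison):

# Print the net devices backed by a PCI function, e.g. the
# 0000:af:00.0 reported above, skipping interfaces that are down.
pci_to_netdevs() {
    local pci=$1 path
    for path in "/sys/bus/pci/devices/$pci/net/"*; do
        [[ -e $path ]] || continue                # glob matched nothing
        [[ $(<"$path/operstate") == up ]] && basename "$path"
    done
}
pci_to_netdevs 0000:af:00.0    # prints cvl_0_0 on this rig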
00:37:43.889 05:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:43.889 05:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:37:43.889 Found net devices under 0000:af:00.0: cvl_0_0 00:37:43.889 05:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:43.889 05:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:43.889 05:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:43.889 05:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:43.889 05:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:43.889 05:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:43.889 05:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:43.889 05:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:43.889 05:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:37:43.889 Found net devices under 0000:af:00.1: cvl_0_1 00:37:43.889 05:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:43.889 05:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:43.889 05:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:37:43.889 05:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:43.889 05:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:43.889 05:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:43.889 05:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:43.889 05:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:43.889 05:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:43.889 05:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:43.889 05:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:43.889 05:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:43.889 05:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:43.889 05:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:43.889 05:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:43.889 05:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:43.889 05:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:43.889 05:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:43.889 05:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:43.889 05:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:43.889 05:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:43.889 05:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:43.889 05:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:43.889 05:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:43.889 05:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:43.889 05:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:43.889 05:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:43.889 05:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:43.889 05:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:43.889 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:43.889 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.350 ms 00:37:43.889 00:37:43.889 --- 10.0.0.2 ping statistics --- 00:37:43.889 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:43.889 rtt min/avg/max/mdev = 0.350/0.350/0.350/0.000 ms 00:37:43.889 05:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:43.889 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:43.889 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.198 ms 00:37:43.889 00:37:43.889 --- 10.0.0.1 ping statistics --- 00:37:43.889 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:43.889 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:37:43.889 05:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:43.889 05:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:37:43.889 05:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:43.889 05:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:43.889 05:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:43.889 05:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:43.889 05:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:43.889 05:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:43.889 05:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:43.889 05:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:37:43.889 05:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:37:43.889 05:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:37:43.889 05:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:43.889 05:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:43.889 05:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:43.889 05:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=565440 00:37:43.889 05:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 565440 00:37:43.889 05:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:37:43.889 05:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 565440 ']' 00:37:43.889 05:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:43.889 05:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:43.889 05:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:37:43.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:43.889 05:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:43.889 05:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:43.889 [2024-12-13 05:53:43.076461] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:43.889 [2024-12-13 05:53:43.077321] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:37:43.889 [2024-12-13 05:53:43.077353] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:43.889 [2024-12-13 05:53:43.153497] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:43.889 [2024-12-13 05:53:43.177008] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:43.889 [2024-12-13 05:53:43.177044] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:43.889 [2024-12-13 05:53:43.177050] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:43.889 [2024-12-13 05:53:43.177056] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:43.889 [2024-12-13 05:53:43.177061] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:43.889 [2024-12-13 05:53:43.178409] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:37:43.889 [2024-12-13 05:53:43.178519] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:37:43.890 [2024-12-13 05:53:43.178626] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:37:43.890 [2024-12-13 05:53:43.178625] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:37:43.890 [2024-12-13 05:53:43.241384] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:43.890 [2024-12-13 05:53:43.242493] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:37:43.890 [2024-12-13 05:53:43.242993] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:37:43.890 [2024-12-13 05:53:43.243135] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:37:43.890 [2024-12-13 05:53:43.243188] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
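The target that just came up is running under `ip netns exec cvl_0_0_ns_spdk`, inside the namespace built during nvmf_tcp_init above; moving one port of the two-port NIC into a namespace is what lets a single host act as both initiator and target over a real link. A condensed replay of that plumbing, using the names from the trace (run as root):

NS=cvl_0_0_ns_spdk
ip netns add "$NS"                        # namespace for the target side
ip link set cvl_0_0 netns "$NS"           # move the target port in
ip addr add 10.0.0.1/24 dev cvl_0_1       # initiator side, default namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
# Anything prefixed with "ip netns exec $NS" now sees only the target
# port, which is how nvmf_tgt was launched in the trace.
ping -c 1 10.0.0.2                        # end-to-end sanity check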
00:37:43.890 05:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:43.890 05:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:37:43.890 05:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:43.890 05:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:43.890 05:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:43.890 05:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:43.890 05:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:43.890 05:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:43.890 05:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:43.890 [2024-12-13 05:53:43.323407] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:43.890 05:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:43.890 05:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:37:43.890 05:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:43.890 05:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:43.890 05:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:37:43.890 05:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:37:43.890 05:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:37:43.890 05:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:43.890 05:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:43.890 Malloc0 00:37:43.890 [2024-12-13 05:53:43.411524] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:43.890 05:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:43.890 05:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:37:43.890 05:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:43.890 05:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:43.890 05:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=565656 00:37:43.890 05:53:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 565656 /var/tmp/bdevperf.sock 00:37:43.890 05:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 565656 ']' 00:37:43.890 05:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:37:43.890 05:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:37:43.890 05:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:37:43.890 05:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:43.890 05:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:37:43.890 05:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:37:43.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:37:43.890 05:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:37:43.890 05:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:43.890 05:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:43.890 05:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:43.890 05:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:43.890 { 00:37:43.890 "params": { 00:37:43.890 "name": "Nvme$subsystem", 00:37:43.890 "trtype": "$TEST_TRANSPORT", 00:37:43.890 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:43.890 "adrfam": "ipv4", 00:37:43.890 "trsvcid": "$NVMF_PORT", 00:37:43.890 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:43.890 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:43.890 "hdgst": ${hdgst:-false}, 00:37:43.890 "ddgst": ${ddgst:-false} 00:37:43.890 }, 00:37:43.890 "method": "bdev_nvme_attach_controller" 00:37:43.890 } 00:37:43.890 EOF 00:37:43.890 )") 00:37:43.890 05:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:37:43.890 05:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
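bdevperf above takes its controller definition as `--json /dev/fd/63`, meaning the JSON is produced by a shell function and delivered through process substitution rather than a temporary file. A minimal sketch of that pattern, filled in with the values the trace prints just below:

# Emit the attach-controller config from a heredoc, then hand it to a
# consumer through a /dev/fd pipe instead of writing a file.
gen_config() {
cat <<EOF
{
  "params": {
    "name": "Nvme0",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode0",
    "hostnqn": "nqn.2016-06.io.spdk:host0",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}
# The shell exposes the substituted pipe as /dev/fd/63, the exact path
# seen on the bdevperf command line above.
jq . <(gen_config)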
00:37:43.890 05:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:37:43.890 05:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:43.890 "params": { 00:37:43.890 "name": "Nvme0", 00:37:43.890 "trtype": "tcp", 00:37:43.890 "traddr": "10.0.0.2", 00:37:43.890 "adrfam": "ipv4", 00:37:43.890 "trsvcid": "4420", 00:37:43.890 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:43.890 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:43.890 "hdgst": false, 00:37:43.890 "ddgst": false 00:37:43.890 }, 00:37:43.890 "method": "bdev_nvme_attach_controller" 00:37:43.890 }' 00:37:43.890 [2024-12-13 05:53:43.506161] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:37:43.890 [2024-12-13 05:53:43.506209] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid565656 ] 00:37:43.890 [2024-12-13 05:53:43.579244] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:43.890 [2024-12-13 05:53:43.601610] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:37:43.890 Running I/O for 10 seconds... 00:37:44.163 05:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:44.163 05:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:37:44.163 05:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:37:44.163 05:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:44.163 05:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:44.163 05:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:44.163 05:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:37:44.163 05:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:37:44.163 05:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:37:44.163 05:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:37:44.163 05:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:37:44.163 05:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:37:44.163 05:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:37:44.163 05:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:37:44.163 05:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s 
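The waitforio loop traced here and continuing below polls the bdev's I/O counters over the RPC socket until reads are observed or the retry budget runs out (the first poll reads 99 ops, the next 707). A generic sketch of that bounded-poll pattern, assuming an SPDK checkout for rpc.py:

# Wait until a bdev has served at least $want reads, retrying $tries
# times at 0.25 s intervals, as the loop in the trace does.
wait_for_reads() {
    local bdev=$1 want=${2:-100} tries=${3:-10} n
    while (( tries-- > 0 )); do
        n=$(scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_iostat -b "$bdev" \
            | jq -r '.bdevs[0].num_read_ops')
        (( n >= want )) && return 0
        sleep 0.25
    done
    return 1
}
wait_for_reads Nvme0n1 && echo 'I/O is flowing'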
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:37:44.163 05:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:37:44.163 05:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:44.163 05:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:44.163 05:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:44.163 05:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=99 00:37:44.163 05:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 99 -ge 100 ']' 00:37:44.163 05:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:37:44.432 05:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:37:44.432 05:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:37:44.432 05:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:37:44.432 05:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:37:44.432 05:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:44.432 05:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:44.432 05:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:44.432 05:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=707 00:37:44.432 05:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 707 -ge 100 ']' 00:37:44.432 05:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:37:44.432 05:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:37:44.432 05:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:37:44.432 05:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:37:44.432 05:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:44.432 05:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:44.432 [2024-12-13 05:53:44.275497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:104960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:44.432 [2024-12-13 05:53:44.275535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:44.432 [2024-12-13 05:53:44.275551] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:105088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:44.432 [2024-12-13 05:53:44.275559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:44.432 [2024-12-13 05:53:44.275568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:105216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:44.432 [2024-12-13 05:53:44.275576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:44.432 [2024-12-13 05:53:44.275584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:105344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:44.432 [2024-12-13 05:53:44.275590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:44.432 [2024-12-13 05:53:44.275598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:105472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:44.432 [2024-12-13 05:53:44.275605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:44.432 [2024-12-13 05:53:44.275613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:105600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:44.432 [2024-12-13 05:53:44.275619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:44.432 [2024-12-13 05:53:44.275632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:105728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:44.432 [2024-12-13 05:53:44.275639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:44.432 [2024-12-13 05:53:44.275647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:105856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:44.432 [2024-12-13 05:53:44.275654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:44.432 [2024-12-13 05:53:44.275662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:105984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:44.432 [2024-12-13 05:53:44.275668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:44.432 [2024-12-13 05:53:44.275676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:106112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:44.432 [2024-12-13 05:53:44.275682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:44.432 [2024-12-13 05:53:44.275690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:106240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:44.432 [2024-12-13 05:53:44.275696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:44.432 [2024-12-13 05:53:44.275704] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:106368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:44.432 [2024-12-13 05:53:44.275710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:44.432 [2024-12-13 05:53:44.275718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:44.432 [2024-12-13 05:53:44.275724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same *NOTICE* command/completion pair repeats for READ sqid:1 cid:1 through cid:48 (lba:98432 through lba:104448 in len:128 steps), every queued I/O completing ABORTED - SQ DELETION (00/08) ...]
00:37:44.434 [2024-12-13 05:53:44.276418] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:104576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:44.434 [2024-12-13 05:53:44.276425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:44.434 [2024-12-13 05:53:44.276433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:104704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:44.434 [2024-12-13 05:53:44.276439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:44.434 [2024-12-13 05:53:44.276452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:104832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:44.434 [2024-12-13 05:53:44.276459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:44.434 [2024-12-13 05:53:44.277417] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:37:44.434 task offset: 104960 on job bdev=Nvme0n1 fails 00:37:44.434 00:37:44.434 Latency(us) 00:37:44.434 [2024-12-13T04:53:44.449Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:44.434 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:37:44.434 Job: Nvme0n1 ended in about 0.40 seconds with error 00:37:44.434 Verification LBA range: start 0x0 length 0x400 00:37:44.434 Nvme0n1 : 0.40 1903.66 118.98 158.64 0.00 30212.45 1443.35 26713.72 00:37:44.434 [2024-12-13T04:53:44.449Z] =================================================================================================================== 00:37:44.434 [2024-12-13T04:53:44.449Z] Total : 1903.66 118.98 158.64 0.00 30212.45 1443.35 26713.72 00:37:44.434 [2024-12-13 05:53:44.279777] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:37:44.434 [2024-12-13 05:53:44.279798] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1325d40 (9): Bad file descriptor 00:37:44.434 05:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:44.434 05:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:37:44.434 [2024-12-13 05:53:44.280695] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:37:44.434 05:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:44.434 [2024-12-13 05:53:44.280777] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:37:44.434 [2024-12-13 05:53:44.280798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:44.434 [2024-12-13 05:53:44.280811] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:37:44.434 [2024-12-13 05:53:44.280818] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:37:44.434 
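What the rejection above is exercising: at this point host_management.sh has the initiator (hostnqn nqn.2016-06.io.spdk:host0) connecting to a subsystem whose allow list does not contain it, so nvmf_qpair_access_allowed refuses the FABRIC CONNECT ("does not allow host") until the rpc_cmd nvmf_subsystem_add_host call traced above takes effect. A minimal sketch of that allow-list step, assuming a running target on the default /var/tmp/spdk.sock RPC socket:

    # permit hostnqn host0 on subsystem cnode0; until this runs, CONNECT from
    # host0 fails with the "does not allow host" error seen in the log
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    # list subsystems (including their allowed hosts) to confirm
    scripts/rpc.py nvmf_get_subsystems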
[2024-12-13 05:53:44.280825] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:44.434 [2024-12-13 05:53:44.280832] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1325d40 00:37:44.434 [2024-12-13 05:53:44.280850] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1325d40 (9): Bad file descriptor 00:37:44.434 [2024-12-13 05:53:44.280861] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:37:44.434 [2024-12-13 05:53:44.280867] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:37:44.434 [2024-12-13 05:53:44.280874] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:37:44.434 [2024-12-13 05:53:44.280882] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:37:44.434 05:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:44.434 05:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:44.434 05:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:37:45.428 05:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 565656 00:37:45.428 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (565656) - No such process 00:37:45.428 05:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:37:45.428 05:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:37:45.428 05:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:37:45.428 05:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:37:45.428 05:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:37:45.428 05:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:37:45.428 05:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:45.428 05:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:45.428 { 00:37:45.428 "params": { 00:37:45.428 "name": "Nvme$subsystem", 00:37:45.428 "trtype": "$TEST_TRANSPORT", 00:37:45.428 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:45.428 "adrfam": "ipv4", 00:37:45.428 "trsvcid": "$NVMF_PORT", 00:37:45.428 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:45.428 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:45.428 "hdgst": ${hdgst:-false}, 00:37:45.428 "ddgst": ${ddgst:-false} 00:37:45.428 }, 00:37:45.428 "method": "bdev_nvme_attach_controller" 00:37:45.428 } 00:37:45.428 EOF 00:37:45.428 
)") 00:37:45.428 05:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:37:45.428 05:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:37:45.428 05:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:37:45.428 05:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:45.428 "params": { 00:37:45.428 "name": "Nvme0", 00:37:45.428 "trtype": "tcp", 00:37:45.428 "traddr": "10.0.0.2", 00:37:45.428 "adrfam": "ipv4", 00:37:45.428 "trsvcid": "4420", 00:37:45.428 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:45.428 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:45.428 "hdgst": false, 00:37:45.428 "ddgst": false 00:37:45.428 }, 00:37:45.428 "method": "bdev_nvme_attach_controller" 00:37:45.428 }' 00:37:45.428 [2024-12-13 05:53:45.344580] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:37:45.428 [2024-12-13 05:53:45.344630] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid565909 ] 00:37:45.428 [2024-12-13 05:53:45.417593] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:45.428 [2024-12-13 05:53:45.439311] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:37:45.993 Running I/O for 1 seconds... 00:37:46.925 2014.00 IOPS, 125.88 MiB/s 00:37:46.925 Latency(us) 00:37:46.925 [2024-12-13T04:53:46.940Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:46.925 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:37:46.925 Verification LBA range: start 0x0 length 0x400 00:37:46.925 Nvme0n1 : 1.05 1976.45 123.53 0.00 0.00 30607.94 3698.10 44689.31 00:37:46.925 [2024-12-13T04:53:46.940Z] =================================================================================================================== 00:37:46.925 [2024-12-13T04:53:46.940Z] Total : 1976.45 123.53 0.00 0.00 30607.94 3698.10 44689.31 00:37:47.184 05:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:37:47.184 05:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:37:47.184 05:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:37:47.184 05:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:37:47.184 05:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:37:47.184 05:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:47.184 05:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:37:47.184 05:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:47.184 05:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # 
set +e 00:37:47.184 05:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:47.184 05:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:47.184 rmmod nvme_tcp 00:37:47.184 rmmod nvme_fabrics 00:37:47.184 rmmod nvme_keyring 00:37:47.184 05:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:47.184 05:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:37:47.184 05:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:37:47.184 05:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 565440 ']' 00:37:47.184 05:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 565440 00:37:47.184 05:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 565440 ']' 00:37:47.184 05:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 565440 00:37:47.184 05:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:37:47.184 05:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:47.184 05:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 565440 00:37:47.184 05:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:47.184 05:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:37:47.184 05:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 565440' 00:37:47.184 killing process with pid 565440 00:37:47.184 05:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 565440 00:37:47.184 05:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 565440 00:37:47.443 [2024-12-13 05:53:47.243750] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:37:47.443 05:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:47.443 05:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:47.443 05:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:47.443 05:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:37:47.443 05:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:37:47.443 05:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:47.443 05:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:37:47.443 05:53:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:47.443 05:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:47.443 05:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:47.443 05:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:47.443 05:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:49.346 05:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:49.346 05:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:37:49.346 00:37:49.346 real 0m12.381s 00:37:49.346 user 0m18.487s 00:37:49.346 sys 0m6.228s 00:37:49.346 05:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:49.346 05:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:49.346 ************************************ 00:37:49.346 END TEST nvmf_host_management 00:37:49.346 ************************************ 00:37:49.604 05:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:37:49.604 05:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:37:49.604 05:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:49.604 05:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:49.604 ************************************ 00:37:49.604 START TEST nvmf_lvol 00:37:49.604 ************************************ 00:37:49.604 05:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:37:49.604 * Looking for test storage... 
00:37:49.604 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:49.605 05:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:37:49.605 05:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:37:49.605 05:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:37:49.605 05:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:37:49.605 05:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:49.605 05:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:49.605 05:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:49.605 05:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:37:49.605 05:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:37:49.605 05:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:37:49.605 05:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:37:49.605 05:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:37:49.605 05:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:37:49.605 05:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:37:49.605 05:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:49.605 05:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:37:49.605 05:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:37:49.605 05:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:49.605 05:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:49.605 05:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:37:49.605 05:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:37:49.605 05:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:49.605 05:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:37:49.605 05:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:37:49.605 05:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:37:49.605 05:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:37:49.605 05:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:49.605 05:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:37:49.605 05:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:37:49.605 05:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:49.605 05:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:49.605 05:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:37:49.605 05:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:49.605 05:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:37:49.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:49.605 --rc genhtml_branch_coverage=1 00:37:49.605 --rc genhtml_function_coverage=1 00:37:49.605 --rc genhtml_legend=1 00:37:49.605 --rc geninfo_all_blocks=1 00:37:49.605 --rc geninfo_unexecuted_blocks=1 00:37:49.605 00:37:49.605 ' 00:37:49.605 05:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:37:49.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:49.605 --rc genhtml_branch_coverage=1 00:37:49.605 --rc genhtml_function_coverage=1 00:37:49.605 --rc genhtml_legend=1 00:37:49.605 --rc geninfo_all_blocks=1 00:37:49.605 --rc geninfo_unexecuted_blocks=1 00:37:49.605 00:37:49.605 ' 00:37:49.605 05:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:37:49.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:49.605 --rc genhtml_branch_coverage=1 00:37:49.605 --rc genhtml_function_coverage=1 00:37:49.605 --rc genhtml_legend=1 00:37:49.605 --rc geninfo_all_blocks=1 00:37:49.605 --rc geninfo_unexecuted_blocks=1 00:37:49.605 00:37:49.605 ' 00:37:49.605 05:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:37:49.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:49.605 --rc genhtml_branch_coverage=1 00:37:49.605 --rc genhtml_function_coverage=1 00:37:49.605 --rc genhtml_legend=1 00:37:49.605 --rc geninfo_all_blocks=1 00:37:49.605 --rc geninfo_unexecuted_blocks=1 00:37:49.605 00:37:49.605 ' 00:37:49.605 05:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:49.605 05:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:37:49.605 05:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:49.605 05:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:49.605 05:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:49.605 05:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:49.605 05:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:49.605 05:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:49.605 05:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:49.605 05:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:49.605 05:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:49.864 05:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:49.864 05:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:37:49.864 05:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:37:49.864 05:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:49.864 05:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:49.864 05:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:49.864 05:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:49.864 05:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:49.864 05:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:37:49.864 05:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:49.864 05:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:49.864 05:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:49.864 05:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:49.864 05:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:49.864 05:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:49.864 05:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:37:49.864 05:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:49.864 05:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:37:49.864 05:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:49.864 05:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:49.864 05:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:49.864 05:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:49.864 05:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:49.864 05:53:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:49.864 05:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:49.864 05:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:49.864 05:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:49.864 05:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:49.864 05:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:37:49.864 05:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:37:49.864 05:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:37:49.865 05:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:37:49.865 05:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:49.865 05:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:37:49.865 05:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:49.865 05:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:49.865 05:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:49.865 05:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:49.865 05:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:49.865 05:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:49.865 05:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:49.865 05:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:49.865 05:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:49.865 05:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:49.865 05:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:37:49.865 05:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:37:56.434 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:56.434 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:37:56.435 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:56.435 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:56.435 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:56.435 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:37:56.435 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:56.435 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:37:56.435 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:56.435 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:37:56.435 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:37:56.435 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:37:56.435 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:37:56.435 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:37:56.435 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:37:56.435 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:56.435 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:56.435 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:56.435 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:56.435 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:56.435 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:56.435 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:56.435 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:56.435 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:56.435 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:56.435 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:56.435 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:56.435 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:56.435 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:56.435 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:56.435 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:56.435 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:56.435 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:56.435 05:53:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:56.435 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:37:56.435 Found 0000:af:00.0 (0x8086 - 0x159b) 00:37:56.435 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:56.435 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:56.435 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:56.435 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:56.435 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:56.435 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:56.435 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:37:56.435 Found 0000:af:00.1 (0x8086 - 0x159b) 00:37:56.435 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:56.435 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:56.435 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:56.435 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:56.435 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:56.435 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:56.435 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:56.435 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:56.435 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:56.435 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:56.435 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:56.435 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:56.435 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:56.435 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:56.435 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:56.435 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:37:56.435 Found net devices under 0000:af:00.0: cvl_0_0 00:37:56.435 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:56.435 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:37:56.435 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:56.435 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:56.435 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:56.435 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:56.435 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:56.435 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:56.435 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:37:56.435 Found net devices under 0000:af:00.1: cvl_0_1 00:37:56.435 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:56.435 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:56.435 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:37:56.435 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:56.435 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:56.435 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:56.435 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:56.435 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:56.435 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:56.435 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:56.435 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:56.435 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:56.435 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:56.435 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:56.435 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:56.435 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:56.435 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:56.435 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:56.435 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:56.435 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:56.435 
05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:56.435 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:56.435 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:56.435 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:56.435 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:56.435 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:56.435 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:56.435 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:56.435 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:56.435 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:56.435 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.358 ms 00:37:56.435 00:37:56.435 --- 10.0.0.2 ping statistics --- 00:37:56.435 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:56.435 rtt min/avg/max/mdev = 0.358/0.358/0.358/0.000 ms 00:37:56.435 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:56.435 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:56.435 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.183 ms 00:37:56.435 00:37:56.435 --- 10.0.0.1 ping statistics --- 00:37:56.435 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:56.435 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:37:56.435 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:56.435 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:37:56.435 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:56.435 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:56.435 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:56.435 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:56.435 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:56.435 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:56.435 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:56.435 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:37:56.436 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:56.436 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:56.436 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:37:56.436 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=569601 00:37:56.436 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:37:56.436 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 569601 00:37:56.436 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 569601 ']' 00:37:56.436 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:56.436 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:56.436 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:56.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:56.436 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:56.436 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:37:56.436 [2024-12-13 05:53:55.548477] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
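Pulled out of the trace, nvmf_tcp_init plus nvmfappstart build a two-port loopback rig: the first E810 port moves into a private namespace as the NVMe/TCP target side (10.0.0.2), the second stays in the root namespace as the initiator (10.0.0.1), and nvmf_tgt runs inside the namespace in interrupt mode on cores 0-2. Consolidated, with the same names, addresses, and flags as the log (a sketch; run from the SPDK repo root):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                    # target port -> namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                          # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                           # initiator -> target check
    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 &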
00:37:56.436 [2024-12-13 05:53:55.549405] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:37:56.436 [2024-12-13 05:53:55.549436] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:56.436 [2024-12-13 05:53:55.626833] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:37:56.436 [2024-12-13 05:53:55.649050] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:56.436 [2024-12-13 05:53:55.649083] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:56.436 [2024-12-13 05:53:55.649090] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:56.436 [2024-12-13 05:53:55.649095] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:56.436 [2024-12-13 05:53:55.649100] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:56.436 [2024-12-13 05:53:55.650340] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:37:56.436 [2024-12-13 05:53:55.650455] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:37:56.436 [2024-12-13 05:53:55.650475] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:37:56.436 [2024-12-13 05:53:55.712626] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:56.436 [2024-12-13 05:53:55.713518] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:37:56.436 [2024-12-13 05:53:55.714043] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:37:56.436 [2024-12-13 05:53:55.714127] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
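The notices above confirm the interrupt-mode launch: coremask 0x7 starts three reactors (cores 0-2) and each spdk_thread, including the three nvmf_tgt poll groups, is switched to intr mode. Once waitforlisten returns (next lines), the lvol test builds its whole stack over RPC before any I/O runs; the sequence, condensed from the commands traced below with sizes and names verbatim and the full rpc.py path shortened ($lvs and $lvol_uuid stand for the UUIDs the create calls print, ba8c6e99-... and 2ef7fa51-... in this run):

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512                           # run twice: Malloc0, Malloc1
    rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    rpc.py bdev_lvol_create_lvstore raid0 lvs
    rpc.py bdev_lvol_create -u "$lvs" lvol 20                  # 20 MiB lvol on the striped store
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol_uuid"
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420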
00:37:56.436 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:56.436 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:37:56.436 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:56.436 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:56.436 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:37:56.436 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:56.436 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:37:56.436 [2024-12-13 05:53:55.947212] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:56.436 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:37:56.436 05:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:37:56.436 05:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:37:56.436 05:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:37:56.436 05:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:37:56.695 05:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:37:56.955 05:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=ba8c6e99-19c4-474e-9fd6-20f7487c2cf1 00:37:56.955 05:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u ba8c6e99-19c4-474e-9fd6-20f7487c2cf1 lvol 20 00:37:57.213 05:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=2ef7fa51-afd3-4e08-9e16-d3458b9c11e8 00:37:57.213 05:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:37:57.214 05:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 2ef7fa51-afd3-4e08-9e16-d3458b9c11e8 00:37:57.472 05:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:57.731 [2024-12-13 05:53:57.551125] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:37:57.731 05:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:57.990 05:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=570018 00:37:57.990 05:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:37:57.990 05:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:37:58.925 05:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 2ef7fa51-afd3-4e08-9e16-d3458b9c11e8 MY_SNAPSHOT 00:37:59.183 05:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=840c97ff-6f1a-47d4-a4ee-bcc8a8c9d032 00:37:59.183 05:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 2ef7fa51-afd3-4e08-9e16-d3458b9c11e8 30 00:37:59.441 05:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 840c97ff-6f1a-47d4-a4ee-bcc8a8c9d032 MY_CLONE 00:37:59.699 05:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=30bafa32-497a-4375-864c-bd6c3ff49e23 00:37:59.699 05:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 30bafa32-497a-4375-864c-bd6c3ff49e23 00:38:00.266 05:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 570018 00:38:08.373 Initializing NVMe Controllers 00:38:08.373 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:38:08.373 Controller IO queue size 128, less than required. 00:38:08.373 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:38:08.373 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:38:08.373 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:38:08.373 Initialization complete. Launching workers. 
00:38:08.373 ======================================================== 00:38:08.373 Latency(us) 00:38:08.373 Device Information : IOPS MiB/s Average min max 00:38:08.373 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12349.20 48.24 10366.17 1578.89 59334.82 00:38:08.373 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12202.50 47.67 10489.36 3500.52 65248.79 00:38:08.373 ======================================================== 00:38:08.373 Total : 24551.70 95.91 10427.40 1578.89 65248.79 00:38:08.373 00:38:08.373 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:08.373 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 2ef7fa51-afd3-4e08-9e16-d3458b9c11e8 00:38:08.631 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ba8c6e99-19c4-474e-9fd6-20f7487c2cf1 00:38:08.631 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:38:08.631 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:38:08.631 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:38:08.631 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:08.631 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:38:08.631 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:08.631 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:38:08.631 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:08.631 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:08.631 rmmod nvme_tcp 00:38:08.631 rmmod nvme_fabrics 00:38:08.889 rmmod nvme_keyring 00:38:08.889 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:08.889 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:38:08.889 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:38:08.889 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 569601 ']' 00:38:08.889 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 569601 00:38:08.889 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 569601 ']' 00:38:08.889 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 569601 00:38:08.890 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:38:08.890 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:08.890 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 569601 00:38:08.890 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:08.890 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:08.890 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 569601' 00:38:08.890 killing process with pid 569601 00:38:08.890 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 569601 00:38:08.890 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 569601 00:38:09.148 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:09.148 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:09.148 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:09.148 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:38:09.149 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:38:09.149 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:09.149 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:38:09.149 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:09.149 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:09.149 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:09.149 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:09.149 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:11.052 05:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:11.052 00:38:11.052 real 0m21.588s 00:38:11.052 user 0m55.394s 00:38:11.052 sys 0m9.383s 00:38:11.052 05:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:11.052 05:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:38:11.052 ************************************ 00:38:11.052 END TEST nvmf_lvol 00:38:11.052 ************************************ 00:38:11.052 05:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:38:11.052 05:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:38:11.052 05:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:11.052 05:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:11.310 ************************************ 00:38:11.310 START TEST nvmf_lvs_grow 00:38:11.310 
************************************ 00:38:11.310 05:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:38:11.310 * Looking for test storage... 00:38:11.310 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:11.310 05:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:38:11.310 05:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:38:11.310 05:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:38:11.310 05:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:38:11.310 05:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:11.310 05:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:11.310 05:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:11.311 05:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:38:11.311 05:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:38:11.311 05:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:38:11.311 05:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:38:11.311 05:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:38:11.311 05:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:38:11.311 05:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:38:11.311 05:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:11.311 05:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:38:11.311 05:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:38:11.311 05:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:11.311 05:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:11.311 05:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:38:11.311 05:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:38:11.311 05:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:11.311 05:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:38:11.311 05:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:38:11.311 05:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:38:11.311 05:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:38:11.311 05:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:11.311 05:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:38:11.311 05:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:38:11.311 05:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:11.311 05:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:11.311 05:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:38:11.311 05:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:11.311 05:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:38:11.311 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:11.311 --rc genhtml_branch_coverage=1 00:38:11.311 --rc genhtml_function_coverage=1 00:38:11.311 --rc genhtml_legend=1 00:38:11.311 --rc geninfo_all_blocks=1 00:38:11.311 --rc geninfo_unexecuted_blocks=1 00:38:11.311 00:38:11.311 ' 00:38:11.311 05:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:38:11.311 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:11.311 --rc genhtml_branch_coverage=1 00:38:11.311 --rc genhtml_function_coverage=1 00:38:11.311 --rc genhtml_legend=1 00:38:11.311 --rc geninfo_all_blocks=1 00:38:11.311 --rc geninfo_unexecuted_blocks=1 00:38:11.311 00:38:11.311 ' 00:38:11.311 05:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:38:11.311 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:11.311 --rc genhtml_branch_coverage=1 00:38:11.311 --rc genhtml_function_coverage=1 00:38:11.311 --rc genhtml_legend=1 00:38:11.311 --rc geninfo_all_blocks=1 00:38:11.311 --rc geninfo_unexecuted_blocks=1 00:38:11.311 00:38:11.311 ' 00:38:11.311 05:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:38:11.311 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:11.311 --rc genhtml_branch_coverage=1 00:38:11.311 --rc genhtml_function_coverage=1 00:38:11.311 --rc genhtml_legend=1 00:38:11.311 --rc geninfo_all_blocks=1 00:38:11.311 --rc geninfo_unexecuted_blocks=1 00:38:11.311 00:38:11.311 ' 00:38:11.311 05:54:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:11.311 05:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:38:11.311 05:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:11.311 05:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:11.311 05:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:11.311 05:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:11.311 05:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:11.311 05:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:11.311 05:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:11.311 05:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:11.311 05:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:11.311 05:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:11.311 05:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:38:11.311 05:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:38:11.311 05:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:11.311 05:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:11.311 05:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:11.311 05:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:11.311 05:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:11.311 05:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:38:11.311 05:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:11.311 05:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:11.311 05:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:11.311 05:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:11.311 05:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:11.311 05:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:11.311 05:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:38:11.311 05:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:11.311 05:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:38:11.311 05:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:11.311 05:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:11.311 05:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:11.311 05:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:11.311 05:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
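The NVMF_APP appends traced just above and immediately below show how common.sh composes the target invocation for this interrupt-mode suite; the net effect, condensed with array names exactly as they appear in the trace:

    NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)   # shm id and tracepoint group mask
    NVMF_APP+=("${NO_HUGE[@]}")
    NVMF_APP+=(--interrupt-mode)                  # gated by the '[' 1 -eq 1 ']' test at common.sh@33
    # and once the target netns exists (common.sh@293):
    NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")   # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")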
00:38:11.311 05:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:11.311 05:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:11.311 05:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:11.311 05:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:11.311 05:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:11.311 05:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:11.311 05:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:38:11.311 05:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:38:11.311 05:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:11.312 05:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:11.312 05:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:11.312 05:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:11.312 05:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:11.312 05:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:11.312 05:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:11.312 05:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:11.312 05:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:11.312 05:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:11.312 05:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:38:11.312 05:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:17.880 05:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:17.880 05:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:38:17.880 05:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:17.880 05:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:17.880 05:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:17.880 05:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:17.880 05:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:17.880 05:54:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:38:17.880 05:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:17.880 05:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:38:17.880 05:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:38:17.880 05:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:38:17.880 05:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:38:17.880 05:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:38:17.880 05:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:38:17.880 05:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:17.880 05:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:17.880 05:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:17.880 05:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:17.880 05:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:17.880 05:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:17.880 05:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:17.880 05:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:17.880 05:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:17.880 05:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:17.880 05:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:17.880 05:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:17.880 05:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:17.880 05:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:17.880 05:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:17.880 05:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:17.880 05:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:17.880 05:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:17.880 05:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:38:17.880 05:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:38:17.880 Found 0000:af:00.0 (0x8086 - 0x159b) 00:38:17.880 05:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:17.880 05:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:17.880 05:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:17.880 05:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:17.880 05:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:17.880 05:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:17.880 05:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:38:17.880 Found 0000:af:00.1 (0x8086 - 0x159b) 00:38:17.880 05:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:17.880 05:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:17.880 05:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:17.880 05:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:17.880 05:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:17.880 05:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:17.880 05:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:17.880 05:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:17.880 05:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:17.880 05:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:17.880 05:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:17.880 05:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:17.880 05:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:17.880 05:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:17.880 05:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:17.880 05:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:38:17.880 Found net devices under 0000:af:00.0: cvl_0_0 00:38:17.880 05:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:17.880 05:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:38:17.880 05:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:17.880 05:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:17.880 05:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:17.880 05:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:17.880 05:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:17.880 05:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:17.880 05:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:38:17.880 Found net devices under 0000:af:00.1: cvl_0_1 00:38:17.880 05:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:17.880 05:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:17.880 05:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:38:17.880 05:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:17.880 05:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:17.880 05:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:17.880 05:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:17.880 05:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:17.880 05:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:17.880 05:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:17.880 05:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:17.880 05:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:17.880 05:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:17.880 05:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:17.880 05:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:17.880 05:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:17.880 05:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:17.880 05:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:17.880 05:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:17.880 05:54:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:17.880 05:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:17.880 05:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:17.881 05:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:17.881 05:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:17.881 05:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:17.881 05:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:17.881 05:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:17.881 05:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:17.881 05:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:17.881 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:17.881 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.384 ms 00:38:17.881 00:38:17.881 --- 10.0.0.2 ping statistics --- 00:38:17.881 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:17.881 rtt min/avg/max/mdev = 0.384/0.384/0.384/0.000 ms 00:38:17.881 05:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:17.881 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:17.881 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:38:17.881 00:38:17.881 --- 10.0.0.1 ping statistics --- 00:38:17.881 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:17.881 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:38:17.881 05:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:17.881 05:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:38:17.881 05:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:17.881 05:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:17.881 05:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:17.881 05:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:17.881 05:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:17.881 05:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:17.881 05:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:17.881 05:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:38:17.881 05:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:17.881 05:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:17.881 05:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:17.881 05:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=575618 00:38:17.881 05:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:38:17.881 05:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 575618 00:38:17.881 05:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 575618 ']' 00:38:17.881 05:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:17.881 05:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:17.881 05:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:17.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:17.881 05:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:17.881 05:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:17.881 [2024-12-13 05:54:17.247835] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
00:38:17.881 [2024-12-13 05:54:17.248739] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:38:17.881 [2024-12-13 05:54:17.248777] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:17.881 [2024-12-13 05:54:17.323397] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:17.881 [2024-12-13 05:54:17.344786] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:17.881 [2024-12-13 05:54:17.344819] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:17.881 [2024-12-13 05:54:17.344825] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:17.881 [2024-12-13 05:54:17.344831] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:17.881 [2024-12-13 05:54:17.344837] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:17.881 [2024-12-13 05:54:17.345334] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:38:17.881 [2024-12-13 05:54:17.407209] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:17.881 [2024-12-13 05:54:17.407409] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:38:17.881 05:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:17.881 05:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:38:17.881 05:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:17.881 05:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:17.881 05:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:17.881 05:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:17.881 05:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:38:17.881 [2024-12-13 05:54:17.645976] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:17.881 05:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:38:17.881 05:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:17.881 05:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:17.881 05:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:17.881 ************************************ 00:38:17.881 START TEST lvs_grow_clean 00:38:17.881 ************************************ 00:38:17.881 05:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # 
lvs_grow 00:38:17.881 05:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:38:17.881 05:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:38:17.881 05:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:38:17.881 05:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:38:17.881 05:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:38:17.881 05:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:38:17.881 05:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:17.881 05:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:17.881 05:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:38:18.140 05:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:38:18.140 05:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:38:18.140 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=62884891-e0e3-436f-bb11-a116de931d8b 00:38:18.140 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 62884891-e0e3-436f-bb11-a116de931d8b 00:38:18.140 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:38:18.399 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:38:18.399 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:38:18.399 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 62884891-e0e3-436f-bb11-a116de931d8b lvol 150 00:38:18.658 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=facb79e5-f83b-44b9-b415-e5fa140f9783 00:38:18.658 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:18.658 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:38:18.917 [2024-12-13 05:54:18.717741] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:38:18.917 [2024-12-13 05:54:18.717869] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:38:18.917 true 00:38:18.917 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 62884891-e0e3-436f-bb11-a116de931d8b 00:38:18.917 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:38:19.176 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:38:19.176 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:38:19.176 05:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 facb79e5-f83b-44b9-b415-e5fa140f9783 00:38:19.435 05:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:19.694 [2024-12-13 05:54:19.494148] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:19.694 05:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:19.953 05:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=576104 00:38:19.953 05:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:38:19.953 05:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:38:19.953 05:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 576104 /var/tmp/bdevperf.sock 00:38:19.953 05:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 576104 ']' 00:38:19.953 05:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:38:19.953 05:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:19.953 05:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:38:19.953 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:38:19.953 05:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:19.953 05:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:38:19.953 [2024-12-13 05:54:19.766779] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:38:19.953 [2024-12-13 05:54:19.766826] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid576104 ] 00:38:19.953 [2024-12-13 05:54:19.842497] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:19.953 [2024-12-13 05:54:19.864654] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:38:19.953 05:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:19.953 05:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:38:19.953 05:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:38:20.212 Nvme0n1 00:38:20.212 05:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:38:20.471 [ 00:38:20.471 { 00:38:20.471 "name": "Nvme0n1", 00:38:20.471 "aliases": [ 00:38:20.471 "facb79e5-f83b-44b9-b415-e5fa140f9783" 00:38:20.471 ], 00:38:20.471 "product_name": "NVMe disk", 00:38:20.471 "block_size": 4096, 00:38:20.471 "num_blocks": 38912, 00:38:20.471 "uuid": "facb79e5-f83b-44b9-b415-e5fa140f9783", 00:38:20.471 "numa_id": 1, 00:38:20.471 "assigned_rate_limits": { 00:38:20.471 "rw_ios_per_sec": 0, 00:38:20.471 "rw_mbytes_per_sec": 0, 00:38:20.471 "r_mbytes_per_sec": 0, 00:38:20.471 "w_mbytes_per_sec": 0 00:38:20.471 }, 00:38:20.471 "claimed": false, 00:38:20.471 "zoned": false, 00:38:20.471 "supported_io_types": { 00:38:20.471 "read": true, 00:38:20.471 "write": true, 00:38:20.471 "unmap": true, 00:38:20.471 "flush": true, 00:38:20.471 "reset": true, 00:38:20.471 "nvme_admin": true, 00:38:20.471 "nvme_io": true, 00:38:20.471 "nvme_io_md": false, 00:38:20.471 "write_zeroes": true, 00:38:20.471 "zcopy": false, 00:38:20.471 "get_zone_info": false, 00:38:20.471 "zone_management": false, 00:38:20.471 "zone_append": false, 00:38:20.471 "compare": true, 00:38:20.471 "compare_and_write": true, 00:38:20.471 "abort": true, 00:38:20.471 "seek_hole": false, 00:38:20.471 "seek_data": false, 00:38:20.471 "copy": true, 
00:38:20.471 "nvme_iov_md": false 00:38:20.471 }, 00:38:20.471 "memory_domains": [ 00:38:20.471 { 00:38:20.471 "dma_device_id": "system", 00:38:20.471 "dma_device_type": 1 00:38:20.471 } 00:38:20.471 ], 00:38:20.471 "driver_specific": { 00:38:20.471 "nvme": [ 00:38:20.471 { 00:38:20.471 "trid": { 00:38:20.471 "trtype": "TCP", 00:38:20.471 "adrfam": "IPv4", 00:38:20.471 "traddr": "10.0.0.2", 00:38:20.471 "trsvcid": "4420", 00:38:20.471 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:38:20.471 }, 00:38:20.471 "ctrlr_data": { 00:38:20.471 "cntlid": 1, 00:38:20.471 "vendor_id": "0x8086", 00:38:20.471 "model_number": "SPDK bdev Controller", 00:38:20.471 "serial_number": "SPDK0", 00:38:20.471 "firmware_revision": "25.01", 00:38:20.471 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:20.471 "oacs": { 00:38:20.471 "security": 0, 00:38:20.471 "format": 0, 00:38:20.471 "firmware": 0, 00:38:20.471 "ns_manage": 0 00:38:20.471 }, 00:38:20.471 "multi_ctrlr": true, 00:38:20.471 "ana_reporting": false 00:38:20.471 }, 00:38:20.471 "vs": { 00:38:20.471 "nvme_version": "1.3" 00:38:20.471 }, 00:38:20.471 "ns_data": { 00:38:20.471 "id": 1, 00:38:20.471 "can_share": true 00:38:20.471 } 00:38:20.471 } 00:38:20.471 ], 00:38:20.471 "mp_policy": "active_passive" 00:38:20.471 } 00:38:20.471 } 00:38:20.471 ] 00:38:20.471 05:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=576111 00:38:20.471 05:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:38:20.471 05:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:38:20.471 Running I/O for 10 seconds... 
00:38:21.848 Latency(us) 00:38:21.848 [2024-12-13T04:54:21.863Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:21.848 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:21.848 Nvme0n1 : 1.00 22733.00 88.80 0.00 0.00 0.00 0.00 0.00 00:38:21.848 [2024-12-13T04:54:21.863Z] =================================================================================================================== 00:38:21.848 [2024-12-13T04:54:21.863Z] Total : 22733.00 88.80 0.00 0.00 0.00 0.00 0.00 00:38:21.848 00:38:22.416 05:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 62884891-e0e3-436f-bb11-a116de931d8b 00:38:22.675 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:22.675 Nvme0n1 : 2.00 23114.00 90.29 0.00 0.00 0.00 0.00 0.00 00:38:22.675 [2024-12-13T04:54:22.690Z] =================================================================================================================== 00:38:22.675 [2024-12-13T04:54:22.690Z] Total : 23114.00 90.29 0.00 0.00 0.00 0.00 0.00 00:38:22.675 00:38:22.675 true 00:38:22.675 05:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 62884891-e0e3-436f-bb11-a116de931d8b 00:38:22.675 05:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:38:22.934 05:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:38:22.934 05:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:38:22.934 05:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 576111 00:38:23.501 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:23.501 Nvme0n1 : 3.00 23241.00 90.79 0.00 0.00 0.00 0.00 0.00 00:38:23.501 [2024-12-13T04:54:23.516Z] =================================================================================================================== 00:38:23.501 [2024-12-13T04:54:23.516Z] Total : 23241.00 90.79 0.00 0.00 0.00 0.00 0.00 00:38:23.501 00:38:24.879 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:24.879 Nvme0n1 : 4.00 23336.25 91.16 0.00 0.00 0.00 0.00 0.00 00:38:24.879 [2024-12-13T04:54:24.894Z] =================================================================================================================== 00:38:24.879 [2024-12-13T04:54:24.894Z] Total : 23336.25 91.16 0.00 0.00 0.00 0.00 0.00 00:38:24.879 00:38:25.814 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:25.814 Nvme0n1 : 5.00 23418.80 91.48 0.00 0.00 0.00 0.00 0.00 00:38:25.814 [2024-12-13T04:54:25.829Z] =================================================================================================================== 00:38:25.814 [2024-12-13T04:54:25.829Z] Total : 23418.80 91.48 0.00 0.00 0.00 0.00 0.00 00:38:25.814 00:38:26.750 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:26.750 Nvme0n1 : 6.00 23473.83 91.69 0.00 0.00 0.00 0.00 0.00 00:38:26.750 [2024-12-13T04:54:26.765Z] 
=================================================================================================================== 00:38:26.750 [2024-12-13T04:54:26.765Z] Total : 23473.83 91.69 0.00 0.00 0.00 0.00 0.00 00:38:26.750 00:38:27.686 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:27.686 Nvme0n1 : 7.00 23513.14 91.85 0.00 0.00 0.00 0.00 0.00 00:38:27.686 [2024-12-13T04:54:27.701Z] =================================================================================================================== 00:38:27.686 [2024-12-13T04:54:27.701Z] Total : 23513.14 91.85 0.00 0.00 0.00 0.00 0.00 00:38:27.686 00:38:28.622 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:28.622 Nvme0n1 : 8.00 23542.62 91.96 0.00 0.00 0.00 0.00 0.00 00:38:28.622 [2024-12-13T04:54:28.637Z] =================================================================================================================== 00:38:28.622 [2024-12-13T04:54:28.637Z] Total : 23542.62 91.96 0.00 0.00 0.00 0.00 0.00 00:38:28.622 00:38:29.558 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:29.558 Nvme0n1 : 9.00 23565.56 92.05 0.00 0.00 0.00 0.00 0.00 00:38:29.558 [2024-12-13T04:54:29.573Z] =================================================================================================================== 00:38:29.558 [2024-12-13T04:54:29.573Z] Total : 23565.56 92.05 0.00 0.00 0.00 0.00 0.00 00:38:29.558 00:38:30.493 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:30.493 Nvme0n1 : 10.00 23558.50 92.03 0.00 0.00 0.00 0.00 0.00 00:38:30.493 [2024-12-13T04:54:30.508Z] =================================================================================================================== 00:38:30.493 [2024-12-13T04:54:30.508Z] Total : 23558.50 92.03 0.00 0.00 0.00 0.00 0.00 00:38:30.493 00:38:30.753 00:38:30.753 Latency(us) 00:38:30.753 [2024-12-13T04:54:30.768Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:30.753 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:30.753 Nvme0n1 : 10.01 23559.18 92.03 0.00 0.00 5430.01 4868.39 27213.04 00:38:30.753 [2024-12-13T04:54:30.768Z] =================================================================================================================== 00:38:30.753 [2024-12-13T04:54:30.768Z] Total : 23559.18 92.03 0.00 0.00 5430.01 4868.39 27213.04 00:38:30.753 { 00:38:30.753 "results": [ 00:38:30.753 { 00:38:30.753 "job": "Nvme0n1", 00:38:30.753 "core_mask": "0x2", 00:38:30.753 "workload": "randwrite", 00:38:30.753 "status": "finished", 00:38:30.753 "queue_depth": 128, 00:38:30.753 "io_size": 4096, 00:38:30.753 "runtime": 10.005143, 00:38:30.753 "iops": 23559.18351191982, 00:38:30.753 "mibps": 92.02806059343679, 00:38:30.753 "io_failed": 0, 00:38:30.753 "io_timeout": 0, 00:38:30.753 "avg_latency_us": 5430.010529172584, 00:38:30.753 "min_latency_us": 4868.388571428572, 00:38:30.753 "max_latency_us": 27213.04380952381 00:38:30.753 } 00:38:30.753 ], 00:38:30.753 "core_count": 1 00:38:30.753 } 00:38:30.753 05:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 576104 00:38:30.753 05:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 576104 ']' 00:38:30.753 05:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 576104 
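The step verified in the middle of that 10-second run is the grow itself. A minimal sketch of the check, reusing the $lvs variable from the sketch above; the count lands at 99 rather than 100 presumably because part of the device is reserved for lvstore metadata, just as the initial 200M file yielded 49 clusters rather than 50.

# Grow the lvstore into the rescanned space while bdevperf keeps writing,
# then confirm the data-cluster count roughly doubled (49 -> 99).
rpc.py bdev_lvol_grow_lvstore -u "$lvs"
clusters=$(rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters')
(( clusters == 99 )) || { echo "grow failed: got $clusters clusters" >&2; exit 1; }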
00:38:30.753 05:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:38:30.753 05:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:30.753 05:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 576104 00:38:30.753 05:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:38:30.753 05:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:38:30.753 05:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 576104' 00:38:30.753 killing process with pid 576104 00:38:30.753 05:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 576104 00:38:30.753 Received shutdown signal, test time was about 10.000000 seconds 00:38:30.753 00:38:30.753 Latency(us) 00:38:30.753 [2024-12-13T04:54:30.768Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:30.753 [2024-12-13T04:54:30.768Z] =================================================================================================================== 00:38:30.753 [2024-12-13T04:54:30.768Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:30.753 05:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 576104 00:38:30.753 05:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:31.012 05:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:31.271 05:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 62884891-e0e3-436f-bb11-a116de931d8b 00:38:31.271 05:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:38:31.529 05:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:38:31.529 05:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:38:31.529 05:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:38:31.529 [2024-12-13 05:54:31.501779] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:38:31.529 05:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 62884891-e0e3-436f-bb11-a116de931d8b 
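What follows in the trace is an expected-failure check: once the base AIO bdev is deleted, the lvstore must vanish with it, and the harness helper NOT inverts the exit status so that the failing RPC keeps the test green. A plain-shell equivalent, under the same assumptions as the sketches above:

# Deleting the base AIO bdev closes the lvstore; querying it afterwards must
# fail with -19 (No such device), exactly as the JSON-RPC error below shows.
rpc.py bdev_aio_delete aio_bdev
if rpc.py bdev_lvol_get_lvstores -u "$lvs" 2>/dev/null; then
    echo "lvstore unexpectedly still present" >&2
    exit 1
fi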
00:38:31.530 05:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:38:31.530 05:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 62884891-e0e3-436f-bb11-a116de931d8b 00:38:31.530 05:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:31.530 05:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:31.530 05:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:31.530 05:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:31.530 05:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:31.530 05:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:31.530 05:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:31.530 05:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:38:31.530 05:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 62884891-e0e3-436f-bb11-a116de931d8b 00:38:31.788 request: 00:38:31.788 { 00:38:31.788 "uuid": "62884891-e0e3-436f-bb11-a116de931d8b", 00:38:31.788 "method": "bdev_lvol_get_lvstores", 00:38:31.788 "req_id": 1 00:38:31.788 } 00:38:31.788 Got JSON-RPC error response 00:38:31.788 response: 00:38:31.788 { 00:38:31.788 "code": -19, 00:38:31.788 "message": "No such device" 00:38:31.788 } 00:38:31.788 05:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:38:31.788 05:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:31.788 05:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:31.788 05:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:31.789 05:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:38:32.047 aio_bdev 00:38:32.047 05:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
facb79e5-f83b-44b9-b415-e5fa140f9783 00:38:32.047 05:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=facb79e5-f83b-44b9-b415-e5fa140f9783 00:38:32.047 05:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:38:32.047 05:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:38:32.047 05:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:38:32.047 05:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:38:32.047 05:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:38:32.306 05:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b facb79e5-f83b-44b9-b415-e5fa140f9783 -t 2000 00:38:32.306 [ 00:38:32.306 { 00:38:32.306 "name": "facb79e5-f83b-44b9-b415-e5fa140f9783", 00:38:32.306 "aliases": [ 00:38:32.306 "lvs/lvol" 00:38:32.306 ], 00:38:32.306 "product_name": "Logical Volume", 00:38:32.306 "block_size": 4096, 00:38:32.306 "num_blocks": 38912, 00:38:32.306 "uuid": "facb79e5-f83b-44b9-b415-e5fa140f9783", 00:38:32.306 "assigned_rate_limits": { 00:38:32.306 "rw_ios_per_sec": 0, 00:38:32.306 "rw_mbytes_per_sec": 0, 00:38:32.306 "r_mbytes_per_sec": 0, 00:38:32.306 "w_mbytes_per_sec": 0 00:38:32.306 }, 00:38:32.306 "claimed": false, 00:38:32.306 "zoned": false, 00:38:32.306 "supported_io_types": { 00:38:32.306 "read": true, 00:38:32.306 "write": true, 00:38:32.306 "unmap": true, 00:38:32.306 "flush": false, 00:38:32.306 "reset": true, 00:38:32.306 "nvme_admin": false, 00:38:32.306 "nvme_io": false, 00:38:32.306 "nvme_io_md": false, 00:38:32.306 "write_zeroes": true, 00:38:32.306 "zcopy": false, 00:38:32.306 "get_zone_info": false, 00:38:32.306 "zone_management": false, 00:38:32.306 "zone_append": false, 00:38:32.306 "compare": false, 00:38:32.306 "compare_and_write": false, 00:38:32.306 "abort": false, 00:38:32.306 "seek_hole": true, 00:38:32.306 "seek_data": true, 00:38:32.306 "copy": false, 00:38:32.306 "nvme_iov_md": false 00:38:32.306 }, 00:38:32.306 "driver_specific": { 00:38:32.306 "lvol": { 00:38:32.306 "lvol_store_uuid": "62884891-e0e3-436f-bb11-a116de931d8b", 00:38:32.306 "base_bdev": "aio_bdev", 00:38:32.306 "thin_provision": false, 00:38:32.306 "num_allocated_clusters": 38, 00:38:32.306 "snapshot": false, 00:38:32.306 "clone": false, 00:38:32.306 "esnap_clone": false 00:38:32.306 } 00:38:32.306 } 00:38:32.306 } 00:38:32.306 ] 00:38:32.306 05:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:38:32.306 05:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 62884891-e0e3-436f-bb11-a116de931d8b 00:38:32.306 05:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:38:32.565 05:54:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:38:32.565 05:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 62884891-e0e3-436f-bb11-a116de931d8b 00:38:32.565 05:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:38:32.824 05:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:38:32.824 05:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete facb79e5-f83b-44b9-b415-e5fa140f9783 00:38:33.083 05:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 62884891-e0e3-436f-bb11-a116de931d8b 00:38:33.083 05:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:38:33.342 05:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:33.342 00:38:33.342 real 0m15.555s 00:38:33.342 user 0m15.113s 00:38:33.342 sys 0m1.465s 00:38:33.342 05:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:33.342 05:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:38:33.342 ************************************ 00:38:33.342 END TEST lvs_grow_clean 00:38:33.342 ************************************ 00:38:33.342 05:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:38:33.342 05:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:38:33.342 05:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:33.342 05:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:33.342 ************************************ 00:38:33.342 START TEST lvs_grow_dirty 00:38:33.342 ************************************ 00:38:33.342 05:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:38:33.342 05:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:38:33.342 05:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:38:33.342 05:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:38:33.342 05:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:38:33.343 05:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:38:33.343 05:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:38:33.343 05:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:33.343 05:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:33.343 05:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:38:33.601 05:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:38:33.601 05:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:38:33.860 05:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=81875338-2f6a-4299-8270-57463916fd88 00:38:33.860 05:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 81875338-2f6a-4299-8270-57463916fd88 00:38:33.860 05:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:38:34.119 05:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:38:34.119 05:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:38:34.119 05:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 81875338-2f6a-4299-8270-57463916fd88 lvol 150 00:38:34.378 05:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=044c2577-2787-48ba-a287-02abcf6ec7cc 00:38:34.378 05:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:34.378 05:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:38:34.378 [2024-12-13 05:54:34.317719] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:38:34.378 [2024-12-13 05:54:34.317839] 
vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:38:34.378 true 00:38:34.378 05:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 81875338-2f6a-4299-8270-57463916fd88 00:38:34.378 05:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:38:34.636 05:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:38:34.636 05:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:38:34.895 05:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 044c2577-2787-48ba-a287-02abcf6ec7cc 00:38:34.895 05:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:35.154 [2024-12-13 05:54:35.030124] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:35.154 05:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:35.412 05:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=578486 00:38:35.412 05:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:38:35.412 05:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:38:35.412 05:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 578486 /var/tmp/bdevperf.sock 00:38:35.412 05:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 578486 ']' 00:38:35.412 05:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:38:35.412 05:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:35.412 05:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:38:35.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
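As in the clean run, bdevperf is driven remotely: started idle with -z on its own RPC socket, handed a controller to attach, then told to run. A sketch of that control flow, where bdevperf and bdevperf.py abbreviate the build/examples/bdevperf and examples/bdev/bdevperf/bdevperf.py paths used in the trace:

# Start bdevperf idle (-z), attach the NVMe/TCP controller over its private
# RPC socket, then kick off the 10-second randwrite workload remotely.
bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
       -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The -S 1 flag asks bdevperf for a status line every second, which appears to be why the tables below print one row per elapsed second of the run.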
00:38:35.412 05:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:35.412 05:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:38:35.412 [2024-12-13 05:54:35.286033] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:38:35.412 [2024-12-13 05:54:35.286081] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid578486 ] 00:38:35.412 [2024-12-13 05:54:35.360738] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:35.412 [2024-12-13 05:54:35.383160] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:38:35.670 05:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:35.671 05:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:38:35.671 05:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:38:35.929 Nvme0n1 00:38:35.929 05:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:38:35.929 [ 00:38:35.929 { 00:38:35.929 "name": "Nvme0n1", 00:38:35.929 "aliases": [ 00:38:35.929 "044c2577-2787-48ba-a287-02abcf6ec7cc" 00:38:35.929 ], 00:38:35.929 "product_name": "NVMe disk", 00:38:35.929 "block_size": 4096, 00:38:35.929 "num_blocks": 38912, 00:38:35.929 "uuid": "044c2577-2787-48ba-a287-02abcf6ec7cc", 00:38:35.929 "numa_id": 1, 00:38:35.929 "assigned_rate_limits": { 00:38:35.929 "rw_ios_per_sec": 0, 00:38:35.929 "rw_mbytes_per_sec": 0, 00:38:35.929 "r_mbytes_per_sec": 0, 00:38:35.929 "w_mbytes_per_sec": 0 00:38:35.929 }, 00:38:35.929 "claimed": false, 00:38:35.929 "zoned": false, 00:38:35.929 "supported_io_types": { 00:38:35.929 "read": true, 00:38:35.929 "write": true, 00:38:35.929 "unmap": true, 00:38:35.929 "flush": true, 00:38:35.929 "reset": true, 00:38:35.929 "nvme_admin": true, 00:38:35.929 "nvme_io": true, 00:38:35.929 "nvme_io_md": false, 00:38:35.929 "write_zeroes": true, 00:38:35.929 "zcopy": false, 00:38:35.929 "get_zone_info": false, 00:38:35.929 "zone_management": false, 00:38:35.929 "zone_append": false, 00:38:35.929 "compare": true, 00:38:35.929 "compare_and_write": true, 00:38:35.929 "abort": true, 00:38:35.929 "seek_hole": false, 00:38:35.929 "seek_data": false, 00:38:35.929 "copy": true, 00:38:35.929 "nvme_iov_md": false 00:38:35.929 }, 00:38:35.929 "memory_domains": [ 00:38:35.929 { 00:38:35.929 "dma_device_id": "system", 00:38:35.929 "dma_device_type": 1 00:38:35.929 } 00:38:35.929 ], 00:38:35.929 "driver_specific": { 00:38:35.929 "nvme": [ 00:38:35.929 { 00:38:35.929 "trid": { 00:38:35.929 "trtype": "TCP", 00:38:35.929 "adrfam": "IPv4", 00:38:35.929 "traddr": "10.0.0.2", 00:38:35.929 "trsvcid": "4420", 00:38:35.929 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:38:35.930 }, 00:38:35.930 "ctrlr_data": { 
00:38:35.930 "cntlid": 1, 00:38:35.930 "vendor_id": "0x8086", 00:38:35.930 "model_number": "SPDK bdev Controller", 00:38:35.930 "serial_number": "SPDK0", 00:38:35.930 "firmware_revision": "25.01", 00:38:35.930 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:35.930 "oacs": { 00:38:35.930 "security": 0, 00:38:35.930 "format": 0, 00:38:35.930 "firmware": 0, 00:38:35.930 "ns_manage": 0 00:38:35.930 }, 00:38:35.930 "multi_ctrlr": true, 00:38:35.930 "ana_reporting": false 00:38:35.930 }, 00:38:35.930 "vs": { 00:38:35.930 "nvme_version": "1.3" 00:38:35.930 }, 00:38:35.930 "ns_data": { 00:38:35.930 "id": 1, 00:38:35.930 "can_share": true 00:38:35.930 } 00:38:35.930 } 00:38:35.930 ], 00:38:35.930 "mp_policy": "active_passive" 00:38:35.930 } 00:38:35.930 } 00:38:35.930 ] 00:38:35.930 05:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=578617 00:38:35.930 05:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:38:35.930 05:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:38:36.188 Running I/O for 10 seconds... 00:38:37.124 Latency(us) 00:38:37.124 [2024-12-13T04:54:37.139Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:37.124 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:37.124 Nvme0n1 : 1.00 22924.00 89.55 0.00 0.00 0.00 0.00 0.00 00:38:37.124 [2024-12-13T04:54:37.139Z] =================================================================================================================== 00:38:37.124 [2024-12-13T04:54:37.139Z] Total : 22924.00 89.55 0.00 0.00 0.00 0.00 0.00 00:38:37.124 00:38:38.060 05:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 81875338-2f6a-4299-8270-57463916fd88 00:38:38.060 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:38.060 Nvme0n1 : 2.00 23209.50 90.66 0.00 0.00 0.00 0.00 0.00 00:38:38.060 [2024-12-13T04:54:38.075Z] =================================================================================================================== 00:38:38.060 [2024-12-13T04:54:38.075Z] Total : 23209.50 90.66 0.00 0.00 0.00 0.00 0.00 00:38:38.060 00:38:38.319 true 00:38:38.319 05:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 81875338-2f6a-4299-8270-57463916fd88 00:38:38.319 05:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:38:38.319 05:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:38:38.319 05:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:38:38.319 05:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 578617 00:38:39.255 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:39.255 Nvme0n1 : 3.00 
23304.67 91.03 0.00 0.00 0.00 0.00 0.00 00:38:39.255 [2024-12-13T04:54:39.270Z] =================================================================================================================== 00:38:39.255 [2024-12-13T04:54:39.270Z] Total : 23304.67 91.03 0.00 0.00 0.00 0.00 0.00 00:38:39.255 00:38:40.190 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:40.190 Nvme0n1 : 4.00 23384.00 91.34 0.00 0.00 0.00 0.00 0.00 00:38:40.190 [2024-12-13T04:54:40.205Z] =================================================================================================================== 00:38:40.190 [2024-12-13T04:54:40.205Z] Total : 23384.00 91.34 0.00 0.00 0.00 0.00 0.00 00:38:40.190 00:38:41.125 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:41.125 Nvme0n1 : 5.00 23380.80 91.33 0.00 0.00 0.00 0.00 0.00 00:38:41.125 [2024-12-13T04:54:41.140Z] =================================================================================================================== 00:38:41.125 [2024-12-13T04:54:41.140Z] Total : 23380.80 91.33 0.00 0.00 0.00 0.00 0.00 00:38:41.125 00:38:42.061 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:42.061 Nvme0n1 : 6.00 23442.17 91.57 0.00 0.00 0.00 0.00 0.00 00:38:42.061 [2024-12-13T04:54:42.076Z] =================================================================================================================== 00:38:42.061 [2024-12-13T04:54:42.076Z] Total : 23442.17 91.57 0.00 0.00 0.00 0.00 0.00 00:38:42.061 00:38:42.999 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:42.999 Nvme0n1 : 7.00 23486.00 91.74 0.00 0.00 0.00 0.00 0.00 00:38:42.999 [2024-12-13T04:54:43.014Z] =================================================================================================================== 00:38:42.999 [2024-12-13T04:54:43.014Z] Total : 23486.00 91.74 0.00 0.00 0.00 0.00 0.00 00:38:42.999 00:38:44.376 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:44.376 Nvme0n1 : 8.00 23526.88 91.90 0.00 0.00 0.00 0.00 0.00 00:38:44.376 [2024-12-13T04:54:44.391Z] =================================================================================================================== 00:38:44.376 [2024-12-13T04:54:44.391Z] Total : 23526.88 91.90 0.00 0.00 0.00 0.00 0.00 00:38:44.376 00:38:45.317 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:45.317 Nvme0n1 : 9.00 23557.00 92.02 0.00 0.00 0.00 0.00 0.00 00:38:45.317 [2024-12-13T04:54:45.332Z] =================================================================================================================== 00:38:45.317 [2024-12-13T04:54:45.332Z] Total : 23557.00 92.02 0.00 0.00 0.00 0.00 0.00 00:38:45.317 00:38:46.256 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:46.256 Nvme0n1 : 10.00 23581.10 92.11 0.00 0.00 0.00 0.00 0.00 00:38:46.256 [2024-12-13T04:54:46.271Z] =================================================================================================================== 00:38:46.256 [2024-12-13T04:54:46.271Z] Total : 23581.10 92.11 0.00 0.00 0.00 0.00 0.00 00:38:46.256 00:38:46.256 00:38:46.256 Latency(us) 00:38:46.256 [2024-12-13T04:54:46.271Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:46.256 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:46.256 Nvme0n1 : 10.00 23581.19 92.11 0.00 0.00 5424.73 3120.76 25715.08 00:38:46.256 
[2024-12-13T04:54:46.271Z] =================================================================================================================== 00:38:46.256 [2024-12-13T04:54:46.271Z] Total : 23581.19 92.11 0.00 0.00 5424.73 3120.76 25715.08 00:38:46.256 { 00:38:46.256 "results": [ 00:38:46.256 { 00:38:46.256 "job": "Nvme0n1", 00:38:46.256 "core_mask": "0x2", 00:38:46.256 "workload": "randwrite", 00:38:46.256 "status": "finished", 00:38:46.256 "queue_depth": 128, 00:38:46.256 "io_size": 4096, 00:38:46.256 "runtime": 10.002675, 00:38:46.256 "iops": 23581.19203113167, 00:38:46.256 "mibps": 92.11403137160809, 00:38:46.256 "io_failed": 0, 00:38:46.256 "io_timeout": 0, 00:38:46.256 "avg_latency_us": 5424.734618768011, 00:38:46.256 "min_latency_us": 3120.7619047619046, 00:38:46.256 "max_latency_us": 25715.078095238096 00:38:46.256 } 00:38:46.256 ], 00:38:46.256 "core_count": 1 00:38:46.256 } 00:38:46.256 05:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 578486 00:38:46.256 05:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 578486 ']' 00:38:46.256 05:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 578486 00:38:46.256 05:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:38:46.256 05:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:46.256 05:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 578486 00:38:46.256 05:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:38:46.256 05:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:38:46.256 05:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 578486' 00:38:46.256 killing process with pid 578486 00:38:46.256 05:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 578486 00:38:46.256 Received shutdown signal, test time was about 10.000000 seconds 00:38:46.256 00:38:46.256 Latency(us) 00:38:46.256 [2024-12-13T04:54:46.271Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:46.256 [2024-12-13T04:54:46.271Z] =================================================================================================================== 00:38:46.256 [2024-12-13T04:54:46.271Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:46.256 05:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 578486 00:38:46.256 05:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:46.515 05:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode0 00:38:46.773 05:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 81875338-2f6a-4299-8270-57463916fd88 00:38:46.774 05:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:38:47.033 05:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:38:47.033 05:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:38:47.033 05:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 575618 00:38:47.033 05:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 575618 00:38:47.033 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 575618 Killed "${NVMF_APP[@]}" "$@" 00:38:47.033 05:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:38:47.033 05:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:38:47.033 05:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:47.033 05:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:47.033 05:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:38:47.033 05:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=580350 00:38:47.033 05:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:38:47.033 05:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 580350 00:38:47.033 05:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 580350 ']' 00:38:47.033 05:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:47.033 05:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:47.033 05:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:47.033 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
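Here the dirty variant earns its name: instead of tearing the lvstore down, the test kills the old target outright and restarts it (this time in interrupt mode), leaving the blobstore dirty on disk. Roughly the following, with waitforlisten being the autotest_common.sh helper, nvmf_tgt abbreviating build/bin/nvmf_tgt, and the ip netns wrapper from the log omitted:

# Kill the target without any cleanup, then bring up a fresh one in
# interrupt mode; the on-disk blobstore is now dirty and must be recovered.
kill -9 "$nvmfpid"                                  # old target, pid 575618 in the trace
nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 &
nvmfpid=$!
waitforlisten "$nvmfpid"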
00:38:47.033 05:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:47.033 05:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:38:47.033 [2024-12-13 05:54:46.959829] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:47.033 [2024-12-13 05:54:46.960737] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:38:47.033 [2024-12-13 05:54:46.960773] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:47.033 [2024-12-13 05:54:47.038918] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:47.292 [2024-12-13 05:54:47.060469] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:47.292 [2024-12-13 05:54:47.060500] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:47.292 [2024-12-13 05:54:47.060507] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:47.292 [2024-12-13 05:54:47.060512] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:47.292 [2024-12-13 05:54:47.060517] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:47.292 [2024-12-13 05:54:47.061012] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:38:47.292 [2024-12-13 05:54:47.122813] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:47.292 [2024-12-13 05:54:47.123008] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
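The re-created AIO bdev below is what triggers recovery: the blobstore replays its metadata (the bs_recover notices in the trace), the lvol reappears, and the geometry from the grow must survive the crash. A sketch of that verification, reusing the hypothetical path and shell variables from the earlier sketches:

# Re-attach the backing file on the restarted target; blobstore recovery
# runs automatically, after which the grown lvstore must still show
# 99 total clusters with 61 free (99 minus the lvol's 38 allocated).
rpc.py bdev_aio_create /tmp/aio_bdev aio_bdev 4096
rpc.py bdev_get_bdevs -b "$lvol" -t 2000 >/dev/null  # wait for the lvol to come back
free=$(rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters')
total=$(rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters')
(( free == 61 && total == 99 )) || { echo "recovery check failed" >&2; exit 1; }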
00:38:47.292 05:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:47.292 05:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:38:47.292 05:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:47.292 05:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:47.292 05:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:38:47.292 05:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:47.292 05:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:38:47.551 [2024-12-13 05:54:47.362412] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:38:47.551 [2024-12-13 05:54:47.362625] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:38:47.551 [2024-12-13 05:54:47.362709] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:38:47.551 05:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:38:47.551 05:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 044c2577-2787-48ba-a287-02abcf6ec7cc 00:38:47.551 05:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=044c2577-2787-48ba-a287-02abcf6ec7cc 00:38:47.551 05:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:38:47.551 05:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:38:47.551 05:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:38:47.551 05:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:38:47.551 05:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:38:47.810 05:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 044c2577-2787-48ba-a287-02abcf6ec7cc -t 2000 00:38:47.810 [ 00:38:47.810 { 00:38:47.810 "name": "044c2577-2787-48ba-a287-02abcf6ec7cc", 00:38:47.810 "aliases": [ 00:38:47.810 "lvs/lvol" 00:38:47.810 ], 00:38:47.810 "product_name": "Logical Volume", 00:38:47.810 "block_size": 4096, 00:38:47.810 "num_blocks": 38912, 00:38:47.810 "uuid": "044c2577-2787-48ba-a287-02abcf6ec7cc", 00:38:47.810 "assigned_rate_limits": { 00:38:47.810 "rw_ios_per_sec": 0, 00:38:47.810 "rw_mbytes_per_sec": 0, 00:38:47.810 
"r_mbytes_per_sec": 0, 00:38:47.810 "w_mbytes_per_sec": 0 00:38:47.810 }, 00:38:47.810 "claimed": false, 00:38:47.810 "zoned": false, 00:38:47.810 "supported_io_types": { 00:38:47.810 "read": true, 00:38:47.810 "write": true, 00:38:47.810 "unmap": true, 00:38:47.810 "flush": false, 00:38:47.810 "reset": true, 00:38:47.810 "nvme_admin": false, 00:38:47.810 "nvme_io": false, 00:38:47.810 "nvme_io_md": false, 00:38:47.810 "write_zeroes": true, 00:38:47.810 "zcopy": false, 00:38:47.810 "get_zone_info": false, 00:38:47.810 "zone_management": false, 00:38:47.810 "zone_append": false, 00:38:47.810 "compare": false, 00:38:47.810 "compare_and_write": false, 00:38:47.810 "abort": false, 00:38:47.810 "seek_hole": true, 00:38:47.810 "seek_data": true, 00:38:47.810 "copy": false, 00:38:47.810 "nvme_iov_md": false 00:38:47.810 }, 00:38:47.810 "driver_specific": { 00:38:47.810 "lvol": { 00:38:47.810 "lvol_store_uuid": "81875338-2f6a-4299-8270-57463916fd88", 00:38:47.810 "base_bdev": "aio_bdev", 00:38:47.810 "thin_provision": false, 00:38:47.810 "num_allocated_clusters": 38, 00:38:47.810 "snapshot": false, 00:38:47.810 "clone": false, 00:38:47.810 "esnap_clone": false 00:38:47.810 } 00:38:47.810 } 00:38:47.810 } 00:38:47.810 ] 00:38:47.810 05:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:38:47.810 05:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 81875338-2f6a-4299-8270-57463916fd88 00:38:47.810 05:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:38:48.069 05:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:38:48.069 05:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 81875338-2f6a-4299-8270-57463916fd88 00:38:48.069 05:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:38:48.328 05:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:38:48.328 05:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:38:48.587 [2024-12-13 05:54:48.345494] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:38:48.587 05:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 81875338-2f6a-4299-8270-57463916fd88 00:38:48.587 05:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:38:48.587 05:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 81875338-2f6a-4299-8270-57463916fd88 00:38:48.587 05:54:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:48.587 05:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:48.587 05:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:48.587 05:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:48.587 05:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:48.587 05:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:48.587 05:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:48.587 05:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:38:48.587 05:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 81875338-2f6a-4299-8270-57463916fd88 00:38:48.587 request: 00:38:48.587 { 00:38:48.587 "uuid": "81875338-2f6a-4299-8270-57463916fd88", 00:38:48.587 "method": "bdev_lvol_get_lvstores", 00:38:48.587 "req_id": 1 00:38:48.587 } 00:38:48.587 Got JSON-RPC error response 00:38:48.587 response: 00:38:48.587 { 00:38:48.587 "code": -19, 00:38:48.587 "message": "No such device" 00:38:48.587 } 00:38:48.846 05:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:38:48.846 05:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:48.846 05:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:48.846 05:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:48.846 05:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:38:48.846 aio_bdev 00:38:48.846 05:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 044c2577-2787-48ba-a287-02abcf6ec7cc 00:38:48.846 05:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=044c2577-2787-48ba-a287-02abcf6ec7cc 00:38:48.846 05:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:38:48.846 05:54:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:38:48.846 05:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:38:48.846 05:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:38:48.846 05:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:38:49.105 05:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 044c2577-2787-48ba-a287-02abcf6ec7cc -t 2000 00:38:49.364 [ 00:38:49.364 { 00:38:49.364 "name": "044c2577-2787-48ba-a287-02abcf6ec7cc", 00:38:49.364 "aliases": [ 00:38:49.364 "lvs/lvol" 00:38:49.364 ], 00:38:49.364 "product_name": "Logical Volume", 00:38:49.364 "block_size": 4096, 00:38:49.364 "num_blocks": 38912, 00:38:49.364 "uuid": "044c2577-2787-48ba-a287-02abcf6ec7cc", 00:38:49.364 "assigned_rate_limits": { 00:38:49.364 "rw_ios_per_sec": 0, 00:38:49.364 "rw_mbytes_per_sec": 0, 00:38:49.364 "r_mbytes_per_sec": 0, 00:38:49.364 "w_mbytes_per_sec": 0 00:38:49.364 }, 00:38:49.364 "claimed": false, 00:38:49.364 "zoned": false, 00:38:49.364 "supported_io_types": { 00:38:49.364 "read": true, 00:38:49.364 "write": true, 00:38:49.364 "unmap": true, 00:38:49.364 "flush": false, 00:38:49.364 "reset": true, 00:38:49.364 "nvme_admin": false, 00:38:49.364 "nvme_io": false, 00:38:49.364 "nvme_io_md": false, 00:38:49.364 "write_zeroes": true, 00:38:49.364 "zcopy": false, 00:38:49.364 "get_zone_info": false, 00:38:49.364 "zone_management": false, 00:38:49.364 "zone_append": false, 00:38:49.364 "compare": false, 00:38:49.364 "compare_and_write": false, 00:38:49.364 "abort": false, 00:38:49.364 "seek_hole": true, 00:38:49.364 "seek_data": true, 00:38:49.364 "copy": false, 00:38:49.364 "nvme_iov_md": false 00:38:49.364 }, 00:38:49.364 "driver_specific": { 00:38:49.364 "lvol": { 00:38:49.364 "lvol_store_uuid": "81875338-2f6a-4299-8270-57463916fd88", 00:38:49.364 "base_bdev": "aio_bdev", 00:38:49.364 "thin_provision": false, 00:38:49.364 "num_allocated_clusters": 38, 00:38:49.364 "snapshot": false, 00:38:49.364 "clone": false, 00:38:49.364 "esnap_clone": false 00:38:49.364 } 00:38:49.364 } 00:38:49.364 } 00:38:49.364 ] 00:38:49.364 05:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:38:49.364 05:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 81875338-2f6a-4299-8270-57463916fd88 00:38:49.364 05:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:38:49.365 05:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:38:49.365 05:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 81875338-2f6a-4299-8270-57463916fd88 00:38:49.365 05:54:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:38:49.624 05:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:38:49.624 05:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 044c2577-2787-48ba-a287-02abcf6ec7cc 00:38:49.882 05:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 81875338-2f6a-4299-8270-57463916fd88 00:38:50.141 05:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:38:50.400 05:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:50.400 00:38:50.400 real 0m16.875s 00:38:50.400 user 0m34.232s 00:38:50.400 sys 0m3.853s 00:38:50.400 05:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:50.400 05:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:38:50.400 ************************************ 00:38:50.400 END TEST lvs_grow_dirty 00:38:50.400 ************************************ 00:38:50.400 05:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:38:50.400 05:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:38:50.400 05:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:38:50.400 05:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:38:50.400 05:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:38:50.400 05:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:38:50.400 05:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:38:50.400 05:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:38:50.400 05:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:38:50.400 nvmf_trace.0 00:38:50.400 05:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:38:50.400 05:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:38:50.400 05:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:50.400 05:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 
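Condensed, the lvs_grow_dirty teardown traced above is four steps. The following is a sketch reusing the identifiers printed in this log (rpc.py path, lvol and lvstore UUIDs), not the test script verbatim:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$RPC bdev_lvol_delete 044c2577-2787-48ba-a287-02abcf6ec7cc            # drop the recreated lvol
$RPC bdev_lvol_delete_lvstore -u 81875338-2f6a-4299-8270-57463916fd88 # drop its lvstore
$RPC bdev_aio_delete aio_bdev                                         # detach the AIO base bdev
rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev  # remove the backing file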
00:38:50.400 05:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:50.400 05:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:38:50.400 05:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:50.400 05:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:50.400 rmmod nvme_tcp 00:38:50.400 rmmod nvme_fabrics 00:38:50.400 rmmod nvme_keyring 00:38:50.400 05:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:50.400 05:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:38:50.400 05:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:38:50.400 05:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 580350 ']' 00:38:50.400 05:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 580350 00:38:50.400 05:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 580350 ']' 00:38:50.400 05:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 580350 00:38:50.400 05:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:38:50.400 05:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:50.400 05:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 580350 00:38:50.400 05:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:50.400 05:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:50.400 05:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 580350' 00:38:50.400 killing process with pid 580350 00:38:50.400 05:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 580350 00:38:50.400 05:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 580350 00:38:50.660 05:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:50.660 05:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:50.660 05:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:50.660 05:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:38:50.660 05:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:38:50.660 05:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:50.660 05:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:38:50.660 05:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:50.660 05:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:50.660 05:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:50.660 05:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:50.660 05:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:52.659 05:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:52.659 00:38:52.659 real 0m41.551s 00:38:52.659 user 0m51.763s 00:38:52.659 sys 0m10.224s 00:38:52.659 05:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:52.659 05:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:52.659 ************************************ 00:38:52.659 END TEST nvmf_lvs_grow 00:38:52.659 ************************************ 00:38:52.937 05:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:38:52.937 05:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:38:52.937 05:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:52.937 05:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:52.937 ************************************ 00:38:52.937 START TEST nvmf_bdev_io_wait 00:38:52.937 ************************************ 00:38:52.937 05:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:38:52.937 * Looking for test storage... 
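The killprocess helper traced during the nvmf_lvs_grow teardown above follows a clear pattern. This reconstruction is inferred from the xtrace lines (common/autotest_common.sh@954-978) and is a sketch, not the verbatim source:

killprocess() {
  local pid=$1
  [ -z "$pid" ] && return 1          # a pid must be supplied
  kill -0 "$pid" || return 1         # the process must still be alive
  if [ "$(uname)" = Linux ]; then
    local process_name
    process_name=$(ps --no-headers -o comm= "$pid")
    [ "$process_name" = sudo ] && return 1   # refuse to kill a sudo wrapper
  fi
  echo "killing process with pid $pid"
  kill "$pid"                        # here: pid 580350, the nvmf_tgt reactor
  wait "$pid"
}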
00:38:52.937 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:52.937 05:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:38:52.937 05:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:38:52.937 05:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:38:52.937 05:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:38:52.937 05:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:52.937 05:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:52.937 05:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:52.937 05:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:38:52.937 05:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:38:52.937 05:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:38:52.937 05:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:38:52.937 05:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:38:52.937 05:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:38:52.937 05:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:38:52.937 05:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:52.938 05:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:38:52.938 05:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:38:52.938 05:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:52.938 05:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:52.938 05:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:38:52.938 05:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:38:52.938 05:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:52.938 05:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:38:52.938 05:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:38:52.938 05:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:38:52.938 05:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:38:52.938 05:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:52.938 05:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:38:52.938 05:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:38:52.938 05:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:52.938 05:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:52.938 05:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:38:52.938 05:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:52.938 05:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:38:52.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:52.938 --rc genhtml_branch_coverage=1 00:38:52.938 --rc genhtml_function_coverage=1 00:38:52.938 --rc genhtml_legend=1 00:38:52.938 --rc geninfo_all_blocks=1 00:38:52.938 --rc geninfo_unexecuted_blocks=1 00:38:52.938 00:38:52.938 ' 00:38:52.938 05:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:38:52.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:52.938 --rc genhtml_branch_coverage=1 00:38:52.938 --rc genhtml_function_coverage=1 00:38:52.938 --rc genhtml_legend=1 00:38:52.938 --rc geninfo_all_blocks=1 00:38:52.938 --rc geninfo_unexecuted_blocks=1 00:38:52.938 00:38:52.938 ' 00:38:52.938 05:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:38:52.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:52.938 --rc genhtml_branch_coverage=1 00:38:52.938 --rc genhtml_function_coverage=1 00:38:52.938 --rc genhtml_legend=1 00:38:52.938 --rc geninfo_all_blocks=1 00:38:52.938 --rc geninfo_unexecuted_blocks=1 00:38:52.938 00:38:52.938 ' 00:38:52.938 05:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:38:52.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:52.938 --rc genhtml_branch_coverage=1 00:38:52.938 --rc genhtml_function_coverage=1 00:38:52.938 --rc genhtml_legend=1 00:38:52.938 --rc geninfo_all_blocks=1 00:38:52.938 --rc 
geninfo_unexecuted_blocks=1 00:38:52.938 00:38:52.938 ' 00:38:52.938 05:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:52.938 05:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:38:52.938 05:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:52.938 05:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:52.938 05:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:52.938 05:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:52.938 05:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:52.938 05:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:52.938 05:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:52.938 05:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:52.938 05:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:52.938 05:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:52.938 05:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:38:52.938 05:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:38:52.938 05:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:52.938 05:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:52.938 05:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:52.938 05:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:52.938 05:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:52.938 05:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:38:52.938 05:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:52.938 05:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:52.938 05:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:52.938 05:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:52.938 05:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:52.939 05:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:52.939 05:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:38:52.939 05:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:52.939 05:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:38:52.939 05:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:52.939 05:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:52.939 05:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:52.939 05:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:52.939 05:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:38:52.939 05:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:52.939 05:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:52.939 05:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:52.939 05:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:52.939 05:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:52.939 05:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:38:52.939 05:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:38:52.939 05:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:38:52.939 05:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:52.939 05:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:52.939 05:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:52.939 05:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:52.939 05:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:52.939 05:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:52.939 05:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:52.939 05:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:52.939 05:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:52.939 05:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:52.939 05:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:38:52.939 05:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:59.542 05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:59.542 05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:38:59.542 05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:59.542 05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:59.542 05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:59.542 05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:59.542 05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 
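From the trace just above, the target's argument vector for this interrupt-mode run is assembled roughly as follows (a sketch; the guards around each append are simplified away):

build_nvmf_app_args() {
  NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)  # shm id 0, enable all tracepoint groups
  NVMF_APP+=("${NO_HUGE[@]}")                  # empty in this run: hugepages stay enabled
  NVMF_APP+=(--interrupt-mode)                 # the flag this whole test pass is about
}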
00:38:59.542 05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:38:59.542 05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:59.542 05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:38:59.542 05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:38:59.542 05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:38:59.542 05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:38:59.542 05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:38:59.542 05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:38:59.542 05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:59.542 05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:59.542 05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:59.542 05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:59.542 05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:59.542 05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:59.542 05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:59.542 05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:59.542 05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:59.542 05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:59.542 05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:59.542 05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:59.542 05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:59.542 05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:59.542 05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:59.542 05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:59.542 05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:59.542 05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
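The NIC allow-list built above reduces to a handful of PCI device IDs (copied verbatim from the trace; the pci_bus_cache lookups are elided, so this is a summary of the selection logic, not common.sh itself):

e810=(0x1592 0x159b)    # Intel E810 variants (vendor 0x8086)
x722=(0x37d2)           # Intel X722
mlx=(0xa2dc 0x1021 0xa2d6 0x101d 0x101b 0x1017 0x1019 0x1015 0x1013)  # Mellanox (vendor 0x15b3)
pci_devs=("${e810[@]}") # TCP transport on an E810 pool: keep only E810 ports
# The cache resolves two matching functions on this host: 0000:af:00.0 and 0000:af:00.1.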
00:38:59.542 05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:59.542 05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:38:59.542 Found 0000:af:00.0 (0x8086 - 0x159b) 00:38:59.542 05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:59.542 05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:59.542 05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:59.542 05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:59.542 05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:59.542 05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:59.542 05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:38:59.542 Found 0000:af:00.1 (0x8086 - 0x159b) 00:38:59.542 05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:59.542 05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:59.542 05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:59.542 05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:59.542 05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:59.542 05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:59.542 05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:59.542 05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:59.542 05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:59.542 05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:59.542 05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:59.542 05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:59.542 05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:59.542 05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:59.542 05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:59.542 05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:38:59.542 Found net devices under 0000:af:00.0: cvl_0_0 00:38:59.542 
05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:59.542 05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:59.542 05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:59.542 05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:59.542 05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:59.542 05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:59.542 05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:59.542 05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:59.542 05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:38:59.542 Found net devices under 0000:af:00.1: cvl_0_1 00:38:59.542 05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:59.542 05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:59.543 05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:38:59.543 05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:59.543 05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:59.543 05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:59.543 05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:59.543 05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:59.543 05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:59.543 05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:59.543 05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:59.543 05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:59.543 05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:59.543 05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:59.543 05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:59.543 05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:59.543 05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:59.543 05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:59.543 05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:59.543 05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:59.543 05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:59.543 05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:59.543 05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:59.543 05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:59.543 05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:59.543 05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:59.543 05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:59.543 05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:59.543 05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:59.543 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:59.543 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.389 ms 00:38:59.543 00:38:59.543 --- 10.0.0.2 ping statistics --- 00:38:59.543 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:59.543 rtt min/avg/max/mdev = 0.389/0.389/0.389/0.000 ms 00:38:59.543 05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:59.543 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:59.543 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:38:59.543 00:38:59.543 --- 10.0.0.1 ping statistics --- 00:38:59.543 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:59.543 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:38:59.543 05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:59.543 05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:38:59.543 05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:59.543 05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:59.543 05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:59.543 05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:59.543 05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:59.543 05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:59.543 05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:59.543 05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:38:59.543 05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:59.543 05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:59.543 05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:59.543 05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=584383 00:38:59.543 05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:38:59.543 05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 584383 00:38:59.543 05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 584383 ']' 00:38:59.543 05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:59.543 05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:59.543 05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:59.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
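Condensed, the topology that the two pings above verified: target port cvl_0_0 lives in namespace cvl_0_0_ns_spdk as 10.0.0.2/24, initiator port cvl_0_1 stays in the root namespace as 10.0.0.1/24, and TCP/4420 is opened on the initiator side. The equivalent command sequence, lifted from the trace:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                               # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT      # SPDK_NVMF comment option elided
ping -c 1 10.0.0.2                                                # root ns -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                  # target ns -> initiator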
00:38:59.543 05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:59.543 05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:59.543 [2024-12-13 05:54:58.872516] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:59.543 [2024-12-13 05:54:58.873424] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:38:59.543 [2024-12-13 05:54:58.873463] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:59.543 [2024-12-13 05:54:58.948207] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:38:59.543 [2024-12-13 05:54:58.972006] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:59.543 [2024-12-13 05:54:58.972042] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:59.543 [2024-12-13 05:54:58.972049] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:59.543 [2024-12-13 05:54:58.972054] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:59.543 [2024-12-13 05:54:58.972059] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:59.543 [2024-12-13 05:54:58.973500] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:38:59.543 [2024-12-13 05:54:58.973613] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:38:59.543 [2024-12-13 05:54:58.973719] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:38:59.543 [2024-12-13 05:54:58.973720] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:38:59.543 [2024-12-13 05:54:58.974058] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
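The launch behind the EAL and reactor notices above, written out once; backgrounding with pid capture is how nvmfappstart is assumed to record nvmfpid=584383:

ip netns exec cvl_0_0_ns_spdk \
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
  -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc &
nvmfpid=$!
waitforlisten "$nvmfpid"   # blocks until /var/tmp/spdk.sock accepts RPCs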
00:38:59.543 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:59.543 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:38:59.543 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:59.543 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:59.543 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:59.543 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:59.543 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:38:59.543 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:59.543 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:59.543 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:59.543 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:38:59.543 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:59.543 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:59.543 [2024-12-13 05:54:59.114827] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:38:59.543 [2024-12-13 05:54:59.115205] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:38:59.543 [2024-12-13 05:54:59.115336] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:38:59.543 [2024-12-13 05:54:59.115487] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
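Because the target was started with --wait-for-rpc, bdev options can still be changed before subsystem initialization, which is exactly what the two rpc_cmd calls above do. A sketch (rpc_cmd is assumed to wrap scripts/rpc.py against /var/tmp/spdk.sock, and the tiny pool is presumably what forces the bdev_io-wait path this test exercises):

rpc_cmd bdev_set_options -p 5 -c 1   # bdev_io pool of 5, per-thread cache of 1
rpc_cmd framework_start_init         # now let the deferred subsystem init run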
00:38:59.543 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:59.543 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:59.543 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:59.543 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:59.543 [2024-12-13 05:54:59.126436] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:59.543 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:59.544 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:38:59.544 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:59.544 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:59.544 Malloc0 00:38:59.544 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:59.544 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:38:59.544 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:59.544 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:59.544 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:59.544 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:38:59.544 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:59.544 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:59.544 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:59.544 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:59.544 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:59.544 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:59.544 [2024-12-13 05:54:59.202716] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:59.544 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:59.544 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=584405 00:38:59.544 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:38:59.544 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:38:59.544 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=584407 00:38:59.544 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:38:59.544 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:38:59.544 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:59.544 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:38:59.544 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:59.544 { 00:38:59.544 "params": { 00:38:59.544 "name": "Nvme$subsystem", 00:38:59.544 "trtype": "$TEST_TRANSPORT", 00:38:59.544 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:59.544 "adrfam": "ipv4", 00:38:59.544 "trsvcid": "$NVMF_PORT", 00:38:59.544 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:59.544 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:59.544 "hdgst": ${hdgst:-false}, 00:38:59.544 "ddgst": ${ddgst:-false} 00:38:59.544 }, 00:38:59.544 "method": "bdev_nvme_attach_controller" 00:38:59.544 } 00:38:59.544 EOF 00:38:59.544 )") 00:38:59.544 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:38:59.544 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=584409 00:38:59.544 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:38:59.544 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:38:59.544 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:59.544 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:38:59.544 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:38:59.544 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=584412 00:38:59.544 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:59.544 { 00:38:59.544 "params": { 00:38:59.544 "name": "Nvme$subsystem", 00:38:59.544 "trtype": "$TEST_TRANSPORT", 00:38:59.544 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:59.544 "adrfam": "ipv4", 00:38:59.544 "trsvcid": "$NVMF_PORT", 00:38:59.544 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:59.544 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:59.544 "hdgst": ${hdgst:-false}, 00:38:59.544 "ddgst": ${ddgst:-false} 00:38:59.544 }, 00:38:59.544 "method": "bdev_nvme_attach_controller" 
00:38:59.544 } 00:38:59.544 EOF 00:38:59.544 )") 00:38:59.544 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:38:59.544 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:38:59.544 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:38:59.544 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:38:59.544 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:59.544 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:38:59.544 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:38:59.544 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:59.544 { 00:38:59.544 "params": { 00:38:59.544 "name": "Nvme$subsystem", 00:38:59.544 "trtype": "$TEST_TRANSPORT", 00:38:59.544 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:59.544 "adrfam": "ipv4", 00:38:59.544 "trsvcid": "$NVMF_PORT", 00:38:59.544 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:59.544 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:59.544 "hdgst": ${hdgst:-false}, 00:38:59.544 "ddgst": ${ddgst:-false} 00:38:59.544 }, 00:38:59.544 "method": "bdev_nvme_attach_controller" 00:38:59.544 } 00:38:59.544 EOF 00:38:59.544 )") 00:38:59.544 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:38:59.544 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:38:59.544 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:38:59.544 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:59.544 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:59.544 { 00:38:59.544 "params": { 00:38:59.544 "name": "Nvme$subsystem", 00:38:59.544 "trtype": "$TEST_TRANSPORT", 00:38:59.544 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:59.544 "adrfam": "ipv4", 00:38:59.544 "trsvcid": "$NVMF_PORT", 00:38:59.544 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:59.544 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:59.544 "hdgst": ${hdgst:-false}, 00:38:59.544 "ddgst": ${ddgst:-false} 00:38:59.544 }, 00:38:59.544 "method": "bdev_nvme_attach_controller" 00:38:59.544 } 00:38:59.544 EOF 00:38:59.544 )") 00:38:59.544 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:38:59.544 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 584405 00:38:59.544 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:38:59.544 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:38:59.544 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:38:59.544 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:38:59.544 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:38:59.544 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:59.544 "params": { 00:38:59.544 "name": "Nvme1", 00:38:59.544 "trtype": "tcp", 00:38:59.544 "traddr": "10.0.0.2", 00:38:59.544 "adrfam": "ipv4", 00:38:59.544 "trsvcid": "4420", 00:38:59.544 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:59.544 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:59.544 "hdgst": false, 00:38:59.544 "ddgst": false 00:38:59.544 }, 00:38:59.544 "method": "bdev_nvme_attach_controller" 00:38:59.544 }' 00:38:59.544 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:38:59.544 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:38:59.544 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:59.544 "params": { 00:38:59.544 "name": "Nvme1", 00:38:59.544 "trtype": "tcp", 00:38:59.544 "traddr": "10.0.0.2", 00:38:59.544 "adrfam": "ipv4", 00:38:59.544 "trsvcid": "4420", 00:38:59.544 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:59.544 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:59.544 "hdgst": false, 00:38:59.544 "ddgst": false 00:38:59.544 }, 00:38:59.544 "method": "bdev_nvme_attach_controller" 00:38:59.544 }' 00:38:59.544 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:38:59.544 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:59.544 "params": { 00:38:59.544 "name": "Nvme1", 00:38:59.544 "trtype": "tcp", 00:38:59.544 "traddr": "10.0.0.2", 00:38:59.544 "adrfam": "ipv4", 00:38:59.544 "trsvcid": "4420", 00:38:59.544 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:59.544 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:59.544 "hdgst": false, 00:38:59.544 "ddgst": false 00:38:59.544 }, 00:38:59.544 "method": "bdev_nvme_attach_controller" 00:38:59.544 }' 00:38:59.545 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:38:59.545 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:59.545 "params": { 00:38:59.545 "name": "Nvme1", 00:38:59.545 "trtype": "tcp", 00:38:59.545 "traddr": "10.0.0.2", 00:38:59.545 "adrfam": "ipv4", 00:38:59.545 "trsvcid": "4420", 00:38:59.545 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:59.545 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:59.545 "hdgst": false, 00:38:59.545 "ddgst": false 00:38:59.545 }, 00:38:59.545 "method": "bdev_nvme_attach_controller" 00:38:59.545 }' 00:38:59.545 [2024-12-13 05:54:59.254581] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:38:59.545 [2024-12-13 05:54:59.254632] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:38:59.545 [2024-12-13 05:54:59.254715] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:38:59.545 [2024-12-13 05:54:59.254754] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:38:59.545 [2024-12-13 05:54:59.257099] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:38:59.545 [2024-12-13 05:54:59.257106] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:38:59.545 [2024-12-13 05:54:59.257145] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:38:59.545 [2024-12-13 05:54:59.257147] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:38:59.545 [2024-12-13 05:54:59.445102] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:59.545 [2024-12-13 05:54:59.464301] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:38:59.545 [2024-12-13 05:54:59.495613] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:59.545 [2024-12-13 05:54:59.511068] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:38:59.545 [2024-12-13 05:54:59.533948] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:59.545 [2024-12-13 05:54:59.548572] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 7 00:38:59.802 [2024-12-13 05:54:59.647650] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:59.802 [2024-12-13 05:54:59.669291] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:38:59.802 Running I/O for 1 seconds... 00:38:59.802 Running I/O for 1 seconds... 00:39:00.059 Running I/O for 1 seconds... 00:39:00.059 Running I/O for 1 seconds...
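The per-subsystem fragments assembled by gen_nvmf_target_json above are what each bdevperf instance reads via --json /dev/fd/63. A sketch of the full document the helper emits is shown below; the subsystems/bdev wrapper is inferred from nvmf/common.sh conventions and is not printed verbatim in this trace, so treat it as an assumption. The four instances differ only in core mask (-m), instance id (-i), and workload (-w):

    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "params": {
                "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
                "adrfam": "ipv4", "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false, "ddgst": false
              },
              "method": "bdev_nvme_attach_controller"
            }
          ]
        }
      ]
    }

    # launch pattern, one pinned core per workload (write/read/flush/unmap), as traced:
    bdevperf -m 0x10 -i 1 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256 &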
00:39:00.991 7908.00 IOPS, 30.89 MiB/s
00:39:00.991 Latency(us)
00:39:00.991 [2024-12-13T04:55:01.006Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:39:00.991 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096)
00:39:00.991 Nvme1n1 : 1.02 7908.57 30.89 0.00 0.00 16053.74 3900.95 26588.89
00:39:00.991 [2024-12-13T04:55:01.006Z] ===================================================================================================================
00:39:00.991 [2024-12-13T04:55:01.006Z] Total : 7908.57 30.89 0.00 0.00 16053.74 3900.95 26588.89
00:39:00.991 7612.00 IOPS, 29.73 MiB/s
00:39:00.991 Latency(us)
00:39:00.991 [2024-12-13T04:55:01.006Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:39:00.991 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096)
00:39:00.991 Nvme1n1 : 1.01 7732.84 30.21 0.00 0.00 16513.29 3838.54 29085.50
00:39:00.991 [2024-12-13T04:55:01.006Z] ===================================================================================================================
00:39:00.991 [2024-12-13T04:55:01.006Z] Total : 7732.84 30.21 0.00 0.00 16513.29 3838.54 29085.50
00:39:00.991 236400.00 IOPS, 923.44 MiB/s
00:39:00.991 Latency(us)
00:39:00.991 [2024-12-13T04:55:01.006Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:39:00.991 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096)
00:39:00.991 Nvme1n1 : 1.00 236026.11 921.98 0.00 0.00 539.55 224.30 1583.79
00:39:00.991 [2024-12-13T04:55:01.006Z] ===================================================================================================================
00:39:00.991 [2024-12-13T04:55:01.006Z] Total : 236026.11 921.98 0.00 0.00 539.55 224.30 1583.79
00:39:00.991 11711.00 IOPS, 45.75 MiB/s
00:39:00.991 Latency(us)
00:39:00.991 [2024-12-13T04:55:01.006Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:39:00.991 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096)
00:39:00.991 Nvme1n1 : 1.01 11771.34 45.98 0.00 0.00 10841.30 4150.61 14605.17
00:39:00.991 [2024-12-13T04:55:01.006Z] ===================================================================================================================
00:39:00.991 [2024-12-13T04:55:01.006Z] Total : 11771.34 45.98 0.00 0.00 10841.30 4150.61 14605.17
00:39:00.991 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 584407 00:39:00.991 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 584409 00:39:00.991 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 584412 00:39:00.991 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:39:00.991 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:00.991 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:00.991 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:00.991 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:39:00.991 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait --
target/bdev_io_wait.sh@46 -- # nvmftestfini 00:39:00.991 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:00.991 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:39:00.991 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:00.991 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:39:00.991 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:00.991 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:00.991 rmmod nvme_tcp 00:39:00.991 rmmod nvme_fabrics 00:39:01.249 rmmod nvme_keyring 00:39:01.249 05:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:01.249 05:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:39:01.249 05:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:39:01.249 05:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 584383 ']' 00:39:01.249 05:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 584383 00:39:01.249 05:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 584383 ']' 00:39:01.249 05:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 584383 00:39:01.249 05:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:39:01.249 05:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:01.249 05:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 584383 00:39:01.250 05:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:01.250 05:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:01.250 05:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 584383' 00:39:01.250 killing process with pid 584383 00:39:01.250 05:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 584383 00:39:01.250 05:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 584383 00:39:01.250 05:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:01.250 05:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:01.250 05:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:01.250 05:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:39:01.250 05:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:39:01.250 
05:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:01.250 05:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:39:01.250 05:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:01.250 05:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:01.250 05:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:01.250 05:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:01.250 05:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:03.785 05:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:03.785 00:39:03.785 real 0m10.627s 00:39:03.785 user 0m14.315s 00:39:03.785 sys 0m6.402s 00:39:03.785 05:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:03.785 05:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:03.785 ************************************ 00:39:03.785 END TEST nvmf_bdev_io_wait 00:39:03.785 ************************************ 00:39:03.785 05:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:39:03.785 05:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:39:03.785 05:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:03.785 05:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:03.785 ************************************ 00:39:03.785 START TEST nvmf_queue_depth 00:39:03.785 ************************************ 00:39:03.785 05:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:39:03.785 * Looking for test storage... 
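A quick sanity check on the result tables above: the MiB/s column is simply IOPS times the 4096-byte I/O size. For the write and flush jobs, for example (throwaway awk one-liners):

    awk 'BEGIN { printf "%.2f\n", 7908.57 * 4096 / (1024 * 1024) }'    # 30.89, matches the write row
    awk 'BEGIN { printf "%.2f\n", 236026.11 * 4096 / (1024 * 1024) }'  # 921.98, matches the flush row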
00:39:03.785 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:03.785 05:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:39:03.785 05:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:39:03.785 05:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:39:03.785 05:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:39:03.785 05:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:03.785 05:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:03.785 05:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:03.785 05:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:39:03.785 05:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:39:03.785 05:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:39:03.785 05:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:39:03.785 05:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:39:03.785 05:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:39:03.785 05:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:39:03.785 05:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:03.785 05:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:39:03.785 05:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:39:03.785 05:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:03.785 05:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:03.785 05:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:39:03.785 05:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:39:03.785 05:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:03.785 05:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:39:03.785 05:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:39:03.785 05:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:39:03.785 05:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:39:03.785 05:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:03.785 05:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:39:03.785 05:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:39:03.785 05:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:03.785 05:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:03.785 05:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:39:03.785 05:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:03.785 05:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:39:03.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:03.785 --rc genhtml_branch_coverage=1 00:39:03.785 --rc genhtml_function_coverage=1 00:39:03.785 --rc genhtml_legend=1 00:39:03.785 --rc geninfo_all_blocks=1 00:39:03.785 --rc geninfo_unexecuted_blocks=1 00:39:03.785 00:39:03.785 ' 00:39:03.785 05:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:39:03.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:03.785 --rc genhtml_branch_coverage=1 00:39:03.785 --rc genhtml_function_coverage=1 00:39:03.785 --rc genhtml_legend=1 00:39:03.785 --rc geninfo_all_blocks=1 00:39:03.785 --rc geninfo_unexecuted_blocks=1 00:39:03.785 00:39:03.785 ' 00:39:03.785 05:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:39:03.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:03.785 --rc genhtml_branch_coverage=1 00:39:03.785 --rc genhtml_function_coverage=1 00:39:03.785 --rc genhtml_legend=1 00:39:03.785 --rc geninfo_all_blocks=1 00:39:03.785 --rc geninfo_unexecuted_blocks=1 00:39:03.785 00:39:03.785 ' 00:39:03.785 05:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:39:03.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:03.785 --rc genhtml_branch_coverage=1 00:39:03.785 --rc genhtml_function_coverage=1 00:39:03.785 --rc genhtml_legend=1 00:39:03.785 --rc geninfo_all_blocks=1 00:39:03.785 --rc 
geninfo_unexecuted_blocks=1 00:39:03.785 00:39:03.785 ' 00:39:03.785 05:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:03.785 05:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:39:03.785 05:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:03.785 05:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:03.785 05:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:03.785 05:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:03.785 05:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:03.785 05:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:03.785 05:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:03.785 05:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:03.785 05:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:03.785 05:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:03.785 05:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:39:03.785 05:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:39:03.785 05:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:03.785 05:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:03.785 05:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:03.785 05:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:03.786 05:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:03.786 05:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:39:03.786 05:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:03.786 05:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:03.786 05:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:03.786 05:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:03.786 05:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:03.786 05:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:03.786 05:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:39:03.786 05:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:03.786 05:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:39:03.786 05:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:03.786 05:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:03.786 05:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:03.786 05:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:03.786 05:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:39:03.786 05:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:03.786 05:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:03.786 05:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:03.786 05:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:03.786 05:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:03.786 05:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:39:03.786 05:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:39:03.786 05:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:39:03.786 05:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:39:03.786 05:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:03.786 05:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:03.786 05:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:03.786 05:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:03.786 05:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:03.786 05:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:03.786 05:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:03.786 05:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:03.786 05:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:03.786 05:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:03.786 05:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:39:03.786 05:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:10.361 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:10.361 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:39:10.361 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:10.361 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:10.361 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:10.361 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 
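The lcov probe traced just above (lt 1.15 2 via cmp_versions in scripts/common.sh) is an ordinary field-wise version comparison: split both versions on '.', '-' and ':', then compare numerically field by field. A condensed sketch of that logic, not the verbatim scripts/common.sh implementation:

    lt() {
        local IFS=.-:                    # split version fields on '.', '-' and ':'
        local -a a=($1) b=($2)
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for ((i = 0; i < n; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1                         # equal is not less-than
    }
    lt 1.15 2 && echo "lcov is older than 2"   # true: 1 < 2 on the first field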
00:39:10.361 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:10.361 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:39:10.361 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:10.361 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:39:10.361 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:39:10.361 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:39:10.361 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:39:10.361 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:39:10.361 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:39:10.361 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:10.361 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:10.361 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:10.361 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:10.361 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:10.361 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:10.361 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:10.361 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:10.361 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:10.361 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:10.361 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:10.361 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:10.361 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:10.362 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:10.362 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:10.362 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:10.362 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:10.362 05:55:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:10.362 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:10.362 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:39:10.362 Found 0000:af:00.0 (0x8086 - 0x159b) 00:39:10.362 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:10.362 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:10.362 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:10.362 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:10.362 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:10.362 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:10.362 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:39:10.362 Found 0000:af:00.1 (0x8086 - 0x159b) 00:39:10.362 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:10.362 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:10.362 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:10.362 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:10.362 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:10.362 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:10.362 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:10.362 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:10.362 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:10.362 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:10.362 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:10.362 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:10.362 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:10.362 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:10.362 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:10.362 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 
00:39:10.362 Found net devices under 0000:af:00.0: cvl_0_0 00:39:10.362 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:10.362 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:10.362 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:10.362 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:10.362 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:10.362 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:10.362 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:10.362 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:10.362 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:39:10.362 Found net devices under 0000:af:00.1: cvl_0_1 00:39:10.362 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:10.362 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:10.362 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:39:10.362 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:10.362 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:10.362 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:10.362 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:10.362 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:10.362 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:10.362 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:10.362 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:10.362 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:10.362 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:10.362 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:10.362 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:10.362 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:10.362 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:10.362 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:10.362 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:10.362 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:10.362 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:10.362 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:10.362 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:10.362 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:10.362 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:10.362 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:10.362 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:10.362 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:10.362 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:10.362 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:10.362 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.281 ms 00:39:10.362 00:39:10.362 --- 10.0.0.2 ping statistics --- 00:39:10.362 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:10.362 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:39:10.362 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:10.362 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:10.362 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.105 ms 00:39:10.362 00:39:10.362 --- 10.0.0.1 ping statistics --- 00:39:10.362 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:10.362 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:39:10.362 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:10.362 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:39:10.362 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:10.362 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:10.362 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:10.362 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:10.362 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:10.362 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:10.362 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:10.362 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:39:10.362 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:10.362 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:10.362 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:10.362 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=588120 00:39:10.362 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 588120 00:39:10.362 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:39:10.362 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 588120 ']' 00:39:10.362 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:10.363 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:10.363 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:10.363 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
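To summarize the network plumbing traced above: the target-side port (cvl_0_0) is moved into a private namespace and addressed as 10.0.0.2, the initiator-side port (cvl_0_1) stays in the root namespace as 10.0.0.1, an iptables rule tagged SPDK_NVMF opens TCP/4420, and reachability is ping-verified in both directions. Condensed from the trace (harness wrappers stripped, rule comment shortened):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target NIC moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator NIC stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
    ping -c 1 10.0.0.2                                  # root ns -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> initiator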
00:39:10.363 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:10.363 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:10.363 [2024-12-13 05:55:09.546315] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:10.363 [2024-12-13 05:55:09.547161] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:39:10.363 [2024-12-13 05:55:09.547192] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:10.363 [2024-12-13 05:55:09.626909] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:10.363 [2024-12-13 05:55:09.647937] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:10.363 [2024-12-13 05:55:09.647975] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:10.363 [2024-12-13 05:55:09.647983] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:10.363 [2024-12-13 05:55:09.647988] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:10.363 [2024-12-13 05:55:09.647993] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:10.363 [2024-12-13 05:55:09.648474] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:39:10.363 [2024-12-13 05:55:09.709897] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:10.363 [2024-12-13 05:55:09.710090] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
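nvmfappstart then boots the target inside that namespace on a single reactor core (-m 0x2) in interrupt mode, and waitforlisten blocks until the RPC socket answers. Roughly, as a sketch (the real helper adds retries, timeouts, and pid bookkeeping):

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
    nvmfpid=$!
    # poll until the app accepts RPCs on its default socket
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do sleep 0.5; done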
00:39:10.363 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:10.363 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:39:10.363 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:10.363 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:10.363 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:10.363 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:10.363 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:10.363 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:10.363 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:10.363 [2024-12-13 05:55:09.777129] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:10.363 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:10.363 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:39:10.363 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:10.363 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:10.363 Malloc0 00:39:10.363 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:10.363 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:39:10.363 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:10.363 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:10.363 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:10.363 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:39:10.363 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:10.363 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:10.363 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:10.363 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:10.363 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 
00:39:10.363 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:10.363 [2024-12-13 05:55:09.853265] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:10.363 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:10.363 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=588139 00:39:10.363 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:39:10.363 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:39:10.363 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 588139 /var/tmp/bdevperf.sock 00:39:10.363 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 588139 ']' 00:39:10.363 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:39:10.363 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:10.363 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:39:10.363 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:39:10.363 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:10.363 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:10.363 [2024-12-13 05:55:09.905464] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
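bdevperf has just been started with -z, so it idles until driven over its own RPC socket; -q 1024 is the point of this test, a queue depth of 1024 against the target. The driving sequence, condensed from the commands in the trace (attach a controller to the exported subsystem, then fire perform_tests):

  BP=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
  $BP -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &   # depth 1024, 4 KiB verify I/O, 10 s
  # connect a bdev to the target over NVMe/TCP (creates bdev NVMe0n1)
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
    bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  # start the workload and block until the results JSON that follows in the trace is produced
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bdevperf.sock perform_tests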
00:39:10.363 [2024-12-13 05:55:09.905504] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid588139 ] 00:39:10.363 [2024-12-13 05:55:09.981320] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:10.363 [2024-12-13 05:55:10.004209] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:39:10.363 05:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:10.363 05:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:39:10.363 05:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:39:10.363 05:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:10.363 05:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:10.363 NVMe0n1 00:39:10.363 05:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:10.363 05:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:39:10.363 Running I/O for 10 seconds... 00:39:12.676 11876.00 IOPS, 46.39 MiB/s [2024-12-13T04:55:13.625Z] 12249.00 IOPS, 47.85 MiB/s [2024-12-13T04:55:14.559Z] 12290.33 IOPS, 48.01 MiB/s [2024-12-13T04:55:15.491Z] 12326.50 IOPS, 48.15 MiB/s [2024-12-13T04:55:16.425Z] 12479.00 IOPS, 48.75 MiB/s [2024-12-13T04:55:17.358Z] 12463.17 IOPS, 48.68 MiB/s [2024-12-13T04:55:18.291Z] 12558.86 IOPS, 49.06 MiB/s [2024-12-13T04:55:19.664Z] 12544.75 IOPS, 49.00 MiB/s [2024-12-13T04:55:20.599Z] 12561.78 IOPS, 49.07 MiB/s [2024-12-13T04:55:20.599Z] 12580.00 IOPS, 49.14 MiB/s 00:39:20.584 Latency(us) 00:39:20.584 [2024-12-13T04:55:20.599Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:20.584 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:39:20.584 Verification LBA range: start 0x0 length 0x4000 00:39:20.584 NVMe0n1 : 10.06 12600.42 49.22 0.00 0.00 80992.99 19348.72 53177.78 00:39:20.584 [2024-12-13T04:55:20.599Z] =================================================================================================================== 00:39:20.584 [2024-12-13T04:55:20.599Z] Total : 12600.42 49.22 0.00 0.00 80992.99 19348.72 53177.78 00:39:20.584 { 00:39:20.584 "results": [ 00:39:20.584 { 00:39:20.584 "job": "NVMe0n1", 00:39:20.584 "core_mask": "0x1", 00:39:20.584 "workload": "verify", 00:39:20.584 "status": "finished", 00:39:20.584 "verify_range": { 00:39:20.584 "start": 0, 00:39:20.584 "length": 16384 00:39:20.584 }, 00:39:20.584 "queue_depth": 1024, 00:39:20.584 "io_size": 4096, 00:39:20.584 "runtime": 10.062762, 00:39:20.584 "iops": 12600.417261185348, 00:39:20.584 "mibps": 49.22037992650527, 00:39:20.584 "io_failed": 0, 00:39:20.584 "io_timeout": 0, 00:39:20.584 "avg_latency_us": 80992.98960717619, 00:39:20.584 "min_latency_us": 19348.72380952381, 00:39:20.584 "max_latency_us": 53177.782857142854 00:39:20.584 } 
00:39:20.584 ], 00:39:20.584 "core_count": 1 00:39:20.584 } 00:39:20.584 05:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 588139 00:39:20.584 05:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 588139 ']' 00:39:20.584 05:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 588139 00:39:20.584 05:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:39:20.584 05:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:20.584 05:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 588139 00:39:20.584 05:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:20.584 05:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:20.584 05:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 588139' 00:39:20.584 killing process with pid 588139 00:39:20.584 05:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 588139 00:39:20.584 Received shutdown signal, test time was about 10.000000 seconds 00:39:20.584 00:39:20.584 Latency(us) 00:39:20.584 [2024-12-13T04:55:20.599Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:20.584 [2024-12-13T04:55:20.599Z] =================================================================================================================== 00:39:20.584 [2024-12-13T04:55:20.599Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:39:20.584 05:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 588139 00:39:20.584 05:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:39:20.584 05:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:39:20.584 05:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:20.584 05:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:39:20.584 05:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:20.584 05:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:39:20.584 05:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:20.584 05:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:20.584 rmmod nvme_tcp 00:39:20.842 rmmod nvme_fabrics 00:39:20.842 rmmod nvme_keyring 00:39:20.842 05:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:20.842 05:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:39:20.843 05:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:39:20.843 
05:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 588120 ']' 00:39:20.843 05:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 588120 00:39:20.843 05:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 588120 ']' 00:39:20.843 05:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 588120 00:39:20.843 05:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:39:20.843 05:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:20.843 05:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 588120 00:39:20.843 05:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:39:20.843 05:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:39:20.843 05:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 588120' 00:39:20.843 killing process with pid 588120 00:39:20.843 05:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 588120 00:39:20.843 05:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 588120 00:39:21.101 05:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:21.101 05:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:21.101 05:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:21.101 05:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:39:21.101 05:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:39:21.101 05:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:21.101 05:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:39:21.101 05:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:21.101 05:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:21.101 05:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:21.101 05:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:21.101 05:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:23.007 05:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:23.007 00:39:23.007 real 0m19.526s 00:39:23.007 user 0m22.488s 00:39:23.007 sys 0m6.236s 00:39:23.007 05:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:39:23.007 05:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:23.007 ************************************ 00:39:23.007 END TEST nvmf_queue_depth 00:39:23.007 ************************************ 00:39:23.007 05:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:39:23.007 05:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:39:23.007 05:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:23.007 05:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:23.007 ************************************ 00:39:23.007 START TEST nvmf_target_multipath 00:39:23.007 ************************************ 00:39:23.007 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:39:23.266 * Looking for test storage... 00:39:23.266 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:23.266 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:39:23.266 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:39:23.266 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:39:23.267 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:39:23.267 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:23.267 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:23.267 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:23.267 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:39:23.267 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:39:23.267 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:39:23.267 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:39:23.267 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:39:23.267 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:39:23.267 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:39:23.267 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:23.267 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:39:23.267 05:55:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:39:23.267 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:23.267 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:39:23.267 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:39:23.267 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:39:23.267 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:23.267 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:39:23.267 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:39:23.267 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:39:23.267 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:39:23.267 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:23.267 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:39:23.267 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:39:23.267 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:23.267 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:23.267 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:39:23.267 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:23.267 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:39:23.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:23.267 --rc genhtml_branch_coverage=1 00:39:23.267 --rc genhtml_function_coverage=1 00:39:23.267 --rc genhtml_legend=1 00:39:23.267 --rc geninfo_all_blocks=1 00:39:23.267 --rc geninfo_unexecuted_blocks=1 00:39:23.267 00:39:23.267 ' 00:39:23.267 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:39:23.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:23.267 --rc genhtml_branch_coverage=1 00:39:23.267 --rc genhtml_function_coverage=1 00:39:23.267 --rc genhtml_legend=1 00:39:23.267 --rc geninfo_all_blocks=1 00:39:23.267 --rc geninfo_unexecuted_blocks=1 00:39:23.267 00:39:23.267 ' 00:39:23.267 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:39:23.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:23.267 --rc genhtml_branch_coverage=1 00:39:23.267 --rc genhtml_function_coverage=1 00:39:23.267 --rc genhtml_legend=1 00:39:23.267 --rc geninfo_all_blocks=1 00:39:23.267 --rc 
geninfo_unexecuted_blocks=1 00:39:23.267 00:39:23.267 ' 00:39:23.267 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:39:23.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:23.267 --rc genhtml_branch_coverage=1 00:39:23.267 --rc genhtml_function_coverage=1 00:39:23.267 --rc genhtml_legend=1 00:39:23.267 --rc geninfo_all_blocks=1 00:39:23.267 --rc geninfo_unexecuted_blocks=1 00:39:23.267 00:39:23.267 ' 00:39:23.267 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:23.267 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:39:23.267 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:23.267 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:23.267 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:23.267 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:23.267 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:23.267 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:23.267 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:23.267 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:23.267 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:23.267 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:23.267 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:39:23.267 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:39:23.267 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:23.267 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:23.267 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:23.267 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:23.267 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:23.267 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:39:23.267 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 
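The nvmf/common.sh lines traced above amount to a handful of per-run defaults. A minimal sketch; note the hostid derivation is an assumption inferred from this run's values (the hostid shown equals the uuid suffix of the generated nqn), not something the trace states directly:

  NVMF_PORT=4420 NVMF_SECOND_PORT=4421 NVMF_THIRD_PORT=4422   # listener ports used by the tests
  NVME_HOSTNQN=$(nvme gen-hostnqn)     # e.g. nqn.2014-08.org.nvmexpress:uuid:80b56b8f-...
  NVME_HOSTID=${NVME_HOSTNQN##*:}      # assumption: hostid = the uuid suffix, matching the values above
  NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")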
00:39:23.267 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:23.267 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:23.267 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:23.267 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:23.267 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:23.267 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:39:23.267 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:23.267 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:39:23.267 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:23.267 05:55:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:23.267 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:23.267 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:23.267 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:23.267 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:23.267 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:23.268 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:23.268 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:23.268 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:23.268 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:39:23.268 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:39:23.268 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:39:23.268 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:23.268 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:39:23.268 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:23.268 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:23.268 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:23.268 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:23.268 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:23.268 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:23.268 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:23.268 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:23.268 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:23.268 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:23.268 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:39:23.268 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 
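nvmftestinit, traced below, runs in two phases on a phy setup: first it scans the PCI bus for supported NICs (this host has two Intel E810 ports, vendor:device 0x8086:0x159b, named cvl_0_0 and cvl_0_1), then it moves one port into a private namespace and assigns the 10.0.0.x test addresses. A condensed sketch of both phases; the discovery loop is a simplification of the pci_bus_cache logic, while the phase-2 commands match the trace verbatim:

  # phase 1: find net devices backed by a supported NIC (here: Intel E810, 0x8086:0x159b)
  for pci in /sys/bus/pci/devices/*; do
    [[ $(<"$pci/vendor") == 0x8086 && $(<"$pci/device") == 0x159b ]] || continue
    for net in "$pci"/net/*; do
      [[ -e $net ]] || continue
      echo "Found ${pci##*/}: ${net##*/}"        # e.g. Found 0000:af:00.0: cvl_0_0
    done
  done

  # phase 2: isolate the target port in its own namespace and address both ends
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  ping -c 1 10.0.0.2                                # verify initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # verify target -> initiator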
00:39:29.836 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:29.836 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:39:29.836 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:29.836 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:29.836 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:29.836 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:29.836 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:29.836 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:39:29.836 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:29.837 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:39:29.837 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:39:29.837 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:39:29.837 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:39:29.837 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:39:29.837 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:39:29.837 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:29.837 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:29.837 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:29.837 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:29.837 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:29.837 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:29.837 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:29.837 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:29.837 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:29.837 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:29.837 05:55:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:29.837 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:29.837 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:29.837 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:29.837 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:29.837 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:29.837 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:29.837 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:29.837 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:29.837 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:39:29.837 Found 0000:af:00.0 (0x8086 - 0x159b) 00:39:29.837 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:29.837 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:29.837 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:29.837 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:29.837 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:29.837 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:29.837 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:39:29.837 Found 0000:af:00.1 (0x8086 - 0x159b) 00:39:29.837 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:29.837 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:29.837 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:29.837 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:29.837 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:29.837 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:29.837 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:29.837 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:29.837 05:55:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:29.837 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:29.837 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:29.837 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:29.837 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:29.837 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:29.837 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:29.837 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:39:29.837 Found net devices under 0000:af:00.0: cvl_0_0 00:39:29.837 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:29.837 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:29.837 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:29.837 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:29.837 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:29.837 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:29.837 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:29.837 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:29.837 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:39:29.837 Found net devices under 0000:af:00.1: cvl_0_1 00:39:29.837 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:29.837 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:29.837 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:39:29.837 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:29.837 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:29.837 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:29.837 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:29.837 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:29.837 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:29.837 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:29.837 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:29.837 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:29.837 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:29.837 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:29.837 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:29.837 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:29.837 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:29.837 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:29.837 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:29.837 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:29.837 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:29.837 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:29.837 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:29.837 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:29.837 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:29.837 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:29.837 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:29.837 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:29.837 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:29.837 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:39:29.837 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.327 ms 00:39:29.837 00:39:29.837 --- 10.0.0.2 ping statistics --- 00:39:29.837 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:29.837 rtt min/avg/max/mdev = 0.327/0.327/0.327/0.000 ms 00:39:29.837 05:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:29.837 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:39:29.837 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.137 ms 00:39:29.837 00:39:29.837 --- 10.0.0.1 ping statistics --- 00:39:29.837 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:29.838 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:39:29.838 05:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:29.838 05:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:39:29.838 05:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:29.838 05:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:29.838 05:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:29.838 05:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:29.838 05:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:29.838 05:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:29.838 05:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:29.838 05:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:39:29.838 05:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:39:29.838 only one NIC for nvmf test 00:39:29.838 05:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:39:29.838 05:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:29.838 05:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:39:29.838 05:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:29.838 05:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:39:29.838 05:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:29.838 05:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:29.838 rmmod nvme_tcp 00:39:29.838 rmmod nvme_fabrics 00:39:29.838 rmmod nvme_keyring 00:39:29.838 05:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:29.838 05:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:39:29.838 05:55:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:39:29.838 05:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:39:29.838 05:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:29.838 05:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:29.838 05:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:29.838 05:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:39:29.838 05:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:39:29.838 05:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:29.838 05:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:39:29.838 05:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:29.838 05:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:29.838 05:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:29.838 05:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:29.838 05:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:31.215 05:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:31.215 05:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:39:31.215 05:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:39:31.215 05:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:31.215 05:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:39:31.215 05:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:31.215 05:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:39:31.215 05:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:31.215 05:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:31.215 05:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:31.215 05:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:39:31.215 05:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:39:31.215 05:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:39:31.215 05:55:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:31.215 05:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:31.215 05:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:31.215 05:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:39:31.215 05:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:39:31.215 05:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:31.215 05:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:39:31.215 05:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:31.215 05:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:31.215 05:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:31.215 05:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:31.215 05:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:31.215 05:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:31.215 00:39:31.215 real 0m8.208s 00:39:31.215 user 0m1.824s 00:39:31.215 sys 0m4.393s 00:39:31.215 05:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:31.215 05:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:39:31.215 ************************************ 00:39:31.215 END TEST nvmf_target_multipath 00:39:31.215 ************************************ 00:39:31.474 05:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:39:31.474 05:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:39:31.474 05:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:31.474 05:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:31.474 ************************************ 00:39:31.474 START TEST nvmf_zcopy 00:39:31.474 ************************************ 00:39:31.474 05:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:39:31.474 * Looking for test storage... 
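Each test's storage probe repeats the same lcov version check via scripts/common.sh (lt 1.15 2, which dispatches into cmp_versions). The idea, reduced to a self-contained sketch under the assumption of purely numeric dot-separated components (the real helper also handles the other comparison operators via its case "$op" dispatch): split both versions on dots and compare numerically, field by field.

  # Minimal sketch of the lt/cmp_versions idea: is version $1 strictly less than $2?
  lt() {
      local IFS=.
      local -a a=($1) b=($2)   # split on dots, e.g. 1.15 -> (1 15)
      local i
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          ((10#${a[i]:-0} < 10#${b[i]:-0})) && return 0   # missing fields compare as 0
          ((10#${a[i]:-0} > 10#${b[i]:-0})) && return 1
      done
      return 1                 # equal is not "less than"
  }
  lt 1.15 2 && echo "lcov < 2: select the pre-2.0 option names"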
00:39:31.474 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:31.474 05:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:39:31.474 05:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:39:31.474 05:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:39:31.474 05:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:39:31.474 05:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:31.474 05:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:31.474 05:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:31.474 05:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:39:31.474 05:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:39:31.474 05:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:39:31.474 05:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:39:31.474 05:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:39:31.474 05:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:39:31.474 05:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:39:31.474 05:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:31.474 05:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:39:31.474 05:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:39:31.474 05:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:31.474 05:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:31.474 05:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:39:31.474 05:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:39:31.474 05:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:31.474 05:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:39:31.474 05:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:39:31.474 05:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:39:31.474 05:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:39:31.474 05:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:31.474 05:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:39:31.474 05:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:39:31.474 05:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:31.474 05:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:31.474 05:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:39:31.474 05:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:31.474 05:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:39:31.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:31.474 --rc genhtml_branch_coverage=1 00:39:31.474 --rc genhtml_function_coverage=1 00:39:31.474 --rc genhtml_legend=1 00:39:31.474 --rc geninfo_all_blocks=1 00:39:31.474 --rc geninfo_unexecuted_blocks=1 00:39:31.474 00:39:31.474 ' 00:39:31.474 05:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:39:31.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:31.474 --rc genhtml_branch_coverage=1 00:39:31.474 --rc genhtml_function_coverage=1 00:39:31.474 --rc genhtml_legend=1 00:39:31.474 --rc geninfo_all_blocks=1 00:39:31.474 --rc geninfo_unexecuted_blocks=1 00:39:31.474 00:39:31.474 ' 00:39:31.474 05:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:39:31.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:31.474 --rc genhtml_branch_coverage=1 00:39:31.474 --rc genhtml_function_coverage=1 00:39:31.474 --rc genhtml_legend=1 00:39:31.474 --rc geninfo_all_blocks=1 00:39:31.474 --rc geninfo_unexecuted_blocks=1 00:39:31.474 00:39:31.474 ' 00:39:31.474 05:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:39:31.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:31.474 --rc genhtml_branch_coverage=1 00:39:31.474 --rc genhtml_function_coverage=1 00:39:31.474 --rc genhtml_legend=1 00:39:31.474 --rc geninfo_all_blocks=1 00:39:31.474 --rc geninfo_unexecuted_blocks=1 00:39:31.474 00:39:31.474 ' 00:39:31.474 05:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
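The lcov probe above runs scripts/common.sh's component-wise version comparison (lt 1.15 2) to decide whether the old-style coverage flags are needed. A condensed sketch of the same idea, assuming only bash; the function name and the simplified body are this annotation's, and the real script additionally normalizes non-numeric components through its decimal helper:

    # Succeed if dotted version $1 is strictly less than $2, comparing
    # numeric components left to right (so 1.15 < 2, but 2.39.2 > 2.39).
    version_lt() {
        local -a ver1 ver2
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$2"
        local max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for ((v = 0; v < max; v++)); do
            local a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing components count as 0
            (( a < b )) && return 0
            (( a > b )) && return 1
        done
        return 1   # equal versions are not "less than"
    }

    version_lt 1.15 2 && echo "old lcov: enable branch/function coverage flags"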
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:31.474 05:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:39:31.474 05:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:31.474 05:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:31.474 05:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:31.474 05:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:31.474 05:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:31.474 05:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:31.474 05:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:31.474 05:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:31.474 05:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:31.474 05:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:31.734 05:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:39:31.734 05:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:39:31.734 05:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:31.734 05:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:31.734 05:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:31.734 05:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:31.734 05:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:31.734 05:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:39:31.734 05:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:31.734 05:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:31.734 05:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:31.734 05:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:31.734 05:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:31.734 05:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:31.734 05:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:39:31.734 05:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:31.734 05:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:39:31.734 05:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:31.734 05:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:31.734 05:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:31.734 05:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:31.734 05:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:31.734 05:55:31 
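Note how each re-sourcing of paths/export.sh prepends the same three toolchain directories again, so PATH accumulates duplicates across nested test scripts; this is harmless, since lookup stops at the first hit, just noisy. A hypothetical guard that would keep the prepend idempotent; prepend_path is not part of the harness:

    # Prepend a directory to PATH only if it is not already present.
    prepend_path() {
        case ":$PATH:" in
            *":$1:"*) ;;               # already on PATH, do nothing
            *) PATH="$1:$PATH" ;;
        esac
    }

    prepend_path /opt/go/1.21.1/bin
    prepend_path /opt/golangci/1.54.2/bin
    prepend_path /opt/protoc/21.7/bin
    export PATH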
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:31.734 05:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:31.734 05:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:31.734 05:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:31.734 05:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:31.734 05:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:39:31.734 05:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:31.734 05:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:31.734 05:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:31.734 05:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:31.734 05:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:31.734 05:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:31.734 05:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:31.734 05:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:31.734 05:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:31.734 05:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:31.734 05:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:39:31.734 05:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:37.008 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:37.008 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:39:37.008 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:37.008 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:37.008 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:37.008 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:37.008 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:37.008 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:39:37.008 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:37.008 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:39:37.008 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:39:37.008 05:55:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:39:37.008 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:39:37.008 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:39:37.008 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:39:37.008 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:37.008 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:37.008 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:37.008 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:37.008 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:37.008 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:37.008 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:37.008 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:37.008 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:37.008 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:37.008 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:37.008 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:37.008 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:37.008 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:37.008 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:37.267 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:37.267 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:37.267 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:37.267 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:37.267 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:39:37.267 Found 0000:af:00.0 (0x8086 - 0x159b) 00:39:37.267 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:37.267 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:37.267 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:39:37.267 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:37.267 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:37.267 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:37.267 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:39:37.267 Found 0000:af:00.1 (0x8086 - 0x159b) 00:39:37.267 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:37.267 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:37.267 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:37.267 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:37.267 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:37.267 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:37.267 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:37.267 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:37.267 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:37.267 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:37.267 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:37.267 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:37.267 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:37.267 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:37.267 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:37.268 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:39:37.268 Found net devices under 0000:af:00.0: cvl_0_0 00:39:37.268 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:37.268 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:37.268 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:37.268 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:37.268 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:37.268 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:37.268 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy 
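The enumeration above matches each device's vendor:device pair against the known Intel E810/X722 and Mellanox IDs, then looks up its kernel netdev under sysfs; that is where the "Found net devices under 0000:af:00.x: cvl_0_x" lines come from. A minimal sketch of that sysfs walk, assuming the Intel 0x8086:0x159b (E810, ice driver) ID seen in this run:

    # Find net devices backed by Intel E810 NICs (vendor 0x8086, device 0x159b).
    for pci in /sys/bus/pci/devices/*; do
        vendor=$(cat "$pci/vendor")   # e.g. 0x8086
        device=$(cat "$pci/device")   # e.g. 0x159b
        [[ $vendor == 0x8086 && $device == 0x159b ]] || continue
        for net in "$pci"/net/*; do
            [[ -e $net ]] && echo "Found net device under ${pci##*/}: ${net##*/}"
        done
    done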
-- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:37.268 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:37.268 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:39:37.268 Found net devices under 0000:af:00.1: cvl_0_1 00:39:37.268 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:37.268 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:37.268 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:39:37.268 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:37.268 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:37.268 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:37.268 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:37.268 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:37.268 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:37.268 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:37.268 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:37.268 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:37.268 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:37.268 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:37.268 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:37.268 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:37.268 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:37.268 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:37.268 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:37.268 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:37.268 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:37.268 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:37.268 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:37.268 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:37.268 05:55:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:37.268 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:37.268 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:37.268 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:37.268 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:37.268 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:37.268 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.331 ms 00:39:37.268 00:39:37.268 --- 10.0.0.2 ping statistics --- 00:39:37.268 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:37.268 rtt min/avg/max/mdev = 0.331/0.331/0.331/0.000 ms 00:39:37.268 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:37.527 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:39:37.527 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.137 ms 00:39:37.527 00:39:37.527 --- 10.0.0.1 ping statistics --- 00:39:37.527 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:37.527 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:39:37.527 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:37.527 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:39:37.527 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:37.527 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:37.527 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:37.527 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:37.527 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:37.527 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:37.527 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:37.527 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:39:37.527 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:37.527 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:37.527 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:37.527 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=596622 00:39:37.527 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 596622 00:39:37.527 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
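For readers reconstructing the topology from the xtrace above: nvmf_tcp_init isolates the target-side port (cvl_0_0) in its own network namespace, so target and initiator talk over a real link even on a single host. A condensed sketch of the same commands, assuming root and the interface/IP names from this log:

    NS=cvl_0_0_ns_spdk

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1

    # Move the target-side port into its own network namespace.
    ip netns add $NS
    ip link set cvl_0_0 netns $NS

    # Initiator keeps 10.0.0.1; the target answers on 10.0.0.2 inside the netns.
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec $NS ip addr add 10.0.0.2/24 dev cvl_0_0

    ip link set cvl_0_1 up
    ip netns exec $NS ip link set cvl_0_0 up
    ip netns exec $NS ip link set lo up

    # Open the NVMe/TCP port on the initiator side, tagged so teardown can
    # later strip exactly this rule (see the SPDK_NVMF grep filter earlier).
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

    # Sanity-check both directions before starting the target.
    ping -c 1 10.0.0.2
    ip netns exec $NS ping -c 1 10.0.0.1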
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:39:37.527 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 596622 ']' 00:39:37.527 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:37.527 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:37.527 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:37.527 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:37.527 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:37.527 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:37.527 [2024-12-13 05:55:37.389406] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:37.527 [2024-12-13 05:55:37.390264] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:39:37.527 [2024-12-13 05:55:37.390295] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:37.527 [2024-12-13 05:55:37.463931] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:37.527 [2024-12-13 05:55:37.484753] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:37.527 [2024-12-13 05:55:37.484786] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:37.527 [2024-12-13 05:55:37.484793] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:37.527 [2024-12-13 05:55:37.484799] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:37.527 [2024-12-13 05:55:37.484804] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:37.527 [2024-12-13 05:55:37.485268] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:39:37.786 [2024-12-13 05:55:37.546849] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:37.786 [2024-12-13 05:55:37.547045] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
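The target itself is then launched inside that namespace; the -e 0xFFFF trace mask and --interrupt-mode flag account for the "Set SPDK running in interrupt mode" and single-reactor notices above. A sketch of the launch, assuming an SPDK build tree at ./spdk; the readiness poll below is a simplified stand-in for the harness's waitforlisten, not its actual code:

    ip netns exec cvl_0_0_ns_spdk \
        ./spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
    nvmfpid=$!

    # Block until the RPC socket accepts commands (unix sockets are visible
    # across namespaces, so no netns exec is needed on the client side).
    until ./spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
        kill -0 $nvmfpid || { echo "nvmf_tgt died" >&2; exit 1; }
        sleep 0.5
    done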
00:39:37.786 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:37.786 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:39:37.786 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:37.786 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:37.786 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:37.786 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:37.786 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:39:37.786 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:39:37.786 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:37.786 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:37.786 [2024-12-13 05:55:37.617936] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:37.786 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:37.786 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:39:37.786 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:37.786 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:37.786 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:37.786 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:37.786 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:37.786 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:37.786 [2024-12-13 05:55:37.646179] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:37.786 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:37.786 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:39:37.786 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:37.786 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:37.786 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:37.786 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:39:37.786 05:55:37 
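In rpc.py terms, the rpc_cmd calls that produced the "TCP Transport Init" and "Listening on 10.0.0.2 port 4420" notices above amount to the following; the flags are copied verbatim from the log (-c 0 disables in-capsule data, --zcopy enables the zero-copy path under test), and only the script path is this annotation's assumption:

    RPC=./spdk/scripts/rpc.py

    # Transport with zero-copy enabled, exactly as in the run above.
    $RPC nvmf_create_transport -t tcp -o -c 0 --zcopy

    # One subsystem: allow any host (-a), serial number, max 10 namespaces.
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001 -m 10

    # Data and discovery listeners on the namespaced target address.
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420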
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:37.786 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:37.786 malloc0 00:39:37.786 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:37.786 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:39:37.787 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:37.787 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:37.787 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:37.787 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:39:37.787 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:39:37.787 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:39:37.787 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:39:37.787 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:37.787 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:37.787 { 00:39:37.787 "params": { 00:39:37.787 "name": "Nvme$subsystem", 00:39:37.787 "trtype": "$TEST_TRANSPORT", 00:39:37.787 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:37.787 "adrfam": "ipv4", 00:39:37.787 "trsvcid": "$NVMF_PORT", 00:39:37.787 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:37.787 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:37.787 "hdgst": ${hdgst:-false}, 00:39:37.787 "ddgst": ${ddgst:-false} 00:39:37.787 }, 00:39:37.787 "method": "bdev_nvme_attach_controller" 00:39:37.787 } 00:39:37.787 EOF 00:39:37.787 )") 00:39:37.787 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:39:37.787 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:39:37.787 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:39:37.787 05:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:39:37.787 "params": { 00:39:37.787 "name": "Nvme1", 00:39:37.787 "trtype": "tcp", 00:39:37.787 "traddr": "10.0.0.2", 00:39:37.787 "adrfam": "ipv4", 00:39:37.787 "trsvcid": "4420", 00:39:37.787 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:37.787 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:37.787 "hdgst": false, 00:39:37.787 "ddgst": false 00:39:37.787 }, 00:39:37.787 "method": "bdev_nvme_attach_controller" 00:39:37.787 }' 00:39:37.787 [2024-12-13 05:55:37.741650] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
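gen_nvmf_target_json above just renders the bdev_nvme_attach_controller stanza shown in the heredoc and hands it to bdevperf on a file descriptor (/dev/fd/62 in this run). A sketch of the equivalent invocation, assuming the subsystem created above and the usual subsystems/bdev envelope that the helper wraps the stanza in (the envelope is inferred, not shown in this log excerpt); <(...) is bash process substitution and shows up to bdevperf as /dev/fd/N:

    # Backing namespace for the test: 32 MiB malloc bdev with 4 KiB blocks.
    ./spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
    ./spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

    json='{
      "subsystems": [ { "subsystem": "bdev", "config": [ {
        "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
                    "adrfam": "ipv4", "trsvcid": "4420",
                    "subnqn": "nqn.2016-06.io.spdk:cnode1",
                    "hostnqn": "nqn.2016-06.io.spdk:host1",
                    "hdgst": false, "ddgst": false },
        "method": "bdev_nvme_attach_controller" },
        { "method": "bdev_wait_for_examine" } ] } ]
    }'

    # 10 s verify workload, queue depth 128, 8 KiB I/O, as in the run above.
    ./spdk/build/examples/bdevperf --json <(echo "$json") -t 10 -q 128 -w verify -o 8192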
00:39:37.787 [2024-12-13 05:55:37.741691] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid596704 ] 00:39:38.045 [2024-12-13 05:55:37.817024] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:38.045 [2024-12-13 05:55:37.839583] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:39:38.303 Running I/O for 10 seconds... 00:39:40.168 8549.00 IOPS, 66.79 MiB/s [2024-12-13T04:55:41.558Z] 8611.00 IOPS, 67.27 MiB/s [2024-12-13T04:55:42.492Z] 8591.33 IOPS, 67.12 MiB/s [2024-12-13T04:55:43.426Z] 8619.25 IOPS, 67.34 MiB/s [2024-12-13T04:55:44.359Z] 8649.80 IOPS, 67.58 MiB/s [2024-12-13T04:55:45.303Z] 8658.83 IOPS, 67.65 MiB/s [2024-12-13T04:55:46.238Z] 8667.00 IOPS, 67.71 MiB/s [2024-12-13T04:55:47.612Z] 8675.25 IOPS, 67.78 MiB/s [2024-12-13T04:55:48.545Z] 8668.89 IOPS, 67.73 MiB/s [2024-12-13T04:55:48.545Z] 8675.70 IOPS, 67.78 MiB/s 00:39:48.530 Latency(us) 00:39:48.530 [2024-12-13T04:55:48.545Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:48.530 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:39:48.530 Verification LBA range: start 0x0 length 0x1000 00:39:48.530 Nvme1n1 : 10.05 8645.16 67.54 0.00 0.00 14710.53 2481.01 43940.33 00:39:48.530 [2024-12-13T04:55:48.545Z] =================================================================================================================== 00:39:48.530 [2024-12-13T04:55:48.545Z] Total : 8645.16 67.54 0.00 0.00 14710.53 2481.01 43940.33 00:39:48.530 05:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=598415 00:39:48.530 05:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:39:48.530 05:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:48.530 05:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:39:48.530 05:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:39:48.530 05:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:39:48.530 05:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:39:48.530 05:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:48.530 05:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:48.530 { 00:39:48.530 "params": { 00:39:48.530 "name": "Nvme$subsystem", 00:39:48.530 "trtype": "$TEST_TRANSPORT", 00:39:48.530 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:48.530 "adrfam": "ipv4", 00:39:48.530 "trsvcid": "$NVMF_PORT", 00:39:48.530 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:48.530 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:48.530 "hdgst": ${hdgst:-false}, 00:39:48.530 "ddgst": ${ddgst:-false} 00:39:48.530 }, 00:39:48.530 "method": "bdev_nvme_attach_controller" 00:39:48.530 } 00:39:48.530 EOF 00:39:48.530 )") 00:39:48.530 05:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:39:48.530 
[2024-12-13 05:55:48.385616] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:48.530 [2024-12-13 05:55:48.385651] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:48.530 05:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:39:48.530 05:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:39:48.530 05:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:39:48.530 "params": { 00:39:48.530 "name": "Nvme1", 00:39:48.530 "trtype": "tcp", 00:39:48.530 "traddr": "10.0.0.2", 00:39:48.530 "adrfam": "ipv4", 00:39:48.530 "trsvcid": "4420", 00:39:48.530 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:48.530 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:48.530 "hdgst": false, 00:39:48.530 "ddgst": false 00:39:48.530 }, 00:39:48.530 "method": "bdev_nvme_attach_controller" 00:39:48.530 }' 00:39:48.530 [2024-12-13 05:55:48.393573] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:48.530 [2024-12-13 05:55:48.393586] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:48.530 [2024-12-13 05:55:48.401568] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:48.530 [2024-12-13 05:55:48.401577] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:48.530 [2024-12-13 05:55:48.409566] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:48.530 [2024-12-13 05:55:48.409575] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:48.530 [2024-12-13 05:55:48.417564] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:48.530 [2024-12-13 05:55:48.417572] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:48.530 [2024-12-13 05:55:48.422606] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:39:48.530 [2024-12-13 05:55:48.422652] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid598415 ] 00:39:48.530 [2024-12-13 05:55:48.429566] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:48.530 [2024-12-13 05:55:48.429578] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:48.530 [2024-12-13 05:55:48.441564] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:48.530 [2024-12-13 05:55:48.441574] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:48.530 [2024-12-13 05:55:48.453567] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:48.530 [2024-12-13 05:55:48.453576] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:48.530 [2024-12-13 05:55:48.465564] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:48.530 [2024-12-13 05:55:48.465573] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:48.530 [2024-12-13 05:55:48.477566] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:48.530 [2024-12-13 05:55:48.477575] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:48.530 [2024-12-13 05:55:48.489563] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:48.530 [2024-12-13 05:55:48.489571] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:48.530 [2024-12-13 05:55:48.494637] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:48.530 [2024-12-13 05:55:48.501569] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:48.530 [2024-12-13 05:55:48.501582] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:48.530 [2024-12-13 05:55:48.513571] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:48.530 [2024-12-13 05:55:48.513584] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:48.530 [2024-12-13 05:55:48.515546] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:39:48.530 [2024-12-13 05:55:48.525572] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:48.530 [2024-12-13 05:55:48.525585] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:48.530 [2024-12-13 05:55:48.537579] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:48.530 [2024-12-13 05:55:48.537598] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:48.789 [2024-12-13 05:55:48.549585] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:48.789 [2024-12-13 05:55:48.549602] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:48.789 [2024-12-13 05:55:48.561570] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:48.789 [2024-12-13 05:55:48.561583] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:48.789 [2024-12-13 05:55:48.573568] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:39:48.789 [2024-12-13 05:55:48.573581] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:48.789 [2024-12-13 05:55:48.585567] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:48.789 [2024-12-13 05:55:48.585578] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:48.789 [2024-12-13 05:55:48.597580] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:48.789 [2024-12-13 05:55:48.597600] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:48.789 [2024-12-13 05:55:48.609571] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:48.789 [2024-12-13 05:55:48.609584] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:48.789 [2024-12-13 05:55:48.621572] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:48.789 [2024-12-13 05:55:48.621586] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:48.789 [2024-12-13 05:55:48.633569] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:48.789 [2024-12-13 05:55:48.633582] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:48.789 [2024-12-13 05:55:48.645570] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:48.789 [2024-12-13 05:55:48.645584] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:48.789 [2024-12-13 05:55:48.657574] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:48.789 [2024-12-13 05:55:48.657591] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:48.789 Running I/O for 5 seconds... 
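The wall of "Requested NSID 1 already in use" / "Unable to add namespace" pairs bracketing this 5-second randrw run is consistent with the test deliberately re-issuing a conflicting nvmf_subsystem_add_ns while I/O is in flight and checking that the target rejects it cleanly each time. A hypothetical reduction of that pattern; the real zcopy.sh loop differs in detail:

    # Re-issue a conflicting add_ns while bdevperf runs; every attempt should
    # fail with "NSID 1 already in use" without disturbing the workload.
    while kill -0 "$perfpid" 2> /dev/null; do
        ./spdk/scripts/rpc.py nvmf_subsystem_add_ns \
            nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 \
            && { echo "unexpected add_ns success" >&2; exit 1; }
        sleep 0.1
    done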
00:39:48.789 [2024-12-13 05:55:48.673667] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:48.789 [2024-12-13 05:55:48.673686] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[ the same two-line error pair repeats for every subsequent add-namespace attempt, timestamps 05:55:48.684 through 05:55:52.985 (console time 00:39:48.789 to 00:39:53.185); all attempts fail identically, and only the periodic throughput samples below vary ]
00:39:49.823 16880.00 IOPS, 131.88 MiB/s [2024-12-13T04:55:49.838Z]
00:39:50.858 16873.50 IOPS, 131.82 MiB/s [2024-12-13T04:55:50.873Z]
00:39:51.892 16823.00 IOPS, 131.43 MiB/s [2024-12-13T04:55:51.907Z]
00:39:52.926 16879.00 IOPS, 131.87 MiB/s [2024-12-13T04:55:52.941Z]
00:39:53.185 [2024-12-13 05:55:52.985405] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:53.185 [2024-12-13 05:55:52.985424]
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:53.185 [2024-12-13 05:55:52.999540] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:53.185 [2024-12-13 05:55:52.999558] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:53.185 [2024-12-13 05:55:53.013916] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:53.185 [2024-12-13 05:55:53.013932] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:53.185 [2024-12-13 05:55:53.029833] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:53.185 [2024-12-13 05:55:53.029850] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:53.185 [2024-12-13 05:55:53.040009] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:53.185 [2024-12-13 05:55:53.040027] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:53.185 [2024-12-13 05:55:53.055038] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:53.185 [2024-12-13 05:55:53.055056] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:53.185 [2024-12-13 05:55:53.069717] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:53.185 [2024-12-13 05:55:53.069745] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:53.185 [2024-12-13 05:55:53.082373] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:53.185 [2024-12-13 05:55:53.082392] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:53.185 [2024-12-13 05:55:53.095100] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:53.185 [2024-12-13 05:55:53.095119] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:53.185 [2024-12-13 05:55:53.110124] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:53.185 [2024-12-13 05:55:53.110142] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:53.185 [2024-12-13 05:55:53.125426] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:53.185 [2024-12-13 05:55:53.125444] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:53.185 [2024-12-13 05:55:53.139674] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:53.185 [2024-12-13 05:55:53.139692] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:53.185 [2024-12-13 05:55:53.154253] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:53.185 [2024-12-13 05:55:53.154270] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:53.185 [2024-12-13 05:55:53.169704] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:53.185 [2024-12-13 05:55:53.169732] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:53.185 [2024-12-13 05:55:53.181577] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:53.185 [2024-12-13 05:55:53.181595] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:53.185 [2024-12-13 05:55:53.195455] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:53.185 [2024-12-13 05:55:53.195472] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:53.444 [2024-12-13 05:55:53.209949] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:53.444 [2024-12-13 05:55:53.209967] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:53.444 [2024-12-13 05:55:53.225283] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:53.444 [2024-12-13 05:55:53.225302] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:53.444 [2024-12-13 05:55:53.236584] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:53.444 [2024-12-13 05:55:53.236601] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:53.444 [2024-12-13 05:55:53.251113] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:53.444 [2024-12-13 05:55:53.251131] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:53.444 [2024-12-13 05:55:53.265278] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:53.444 [2024-12-13 05:55:53.265296] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:53.444 [2024-12-13 05:55:53.276507] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:53.444 [2024-12-13 05:55:53.276524] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:53.444 [2024-12-13 05:55:53.290817] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:53.444 [2024-12-13 05:55:53.290834] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:53.444 [2024-12-13 05:55:53.305252] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:53.444 [2024-12-13 05:55:53.305269] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:53.444 [2024-12-13 05:55:53.318638] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:53.444 [2024-12-13 05:55:53.318656] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:53.444 [2024-12-13 05:55:53.333390] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:53.444 [2024-12-13 05:55:53.333407] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:53.444 [2024-12-13 05:55:53.346620] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:53.444 [2024-12-13 05:55:53.346637] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:53.444 [2024-12-13 05:55:53.361687] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:53.444 [2024-12-13 05:55:53.361715] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:53.444 [2024-12-13 05:55:53.373975] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:53.444 [2024-12-13 05:55:53.373992] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:53.444 [2024-12-13 05:55:53.387006] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:53.444 [2024-12-13 05:55:53.387022] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:53.444 [2024-12-13 05:55:53.401428] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:53.444 [2024-12-13 05:55:53.401446] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:53.444 [2024-12-13 05:55:53.413632] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:53.444 [2024-12-13 05:55:53.413650] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:53.444 [2024-12-13 05:55:53.427676] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:53.444 [2024-12-13 05:55:53.427694] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:53.444 [2024-12-13 05:55:53.442356] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:53.444 [2024-12-13 05:55:53.442373] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:53.444 [2024-12-13 05:55:53.458104] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:53.444 [2024-12-13 05:55:53.458121] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:53.702 [2024-12-13 05:55:53.473515] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:53.702 [2024-12-13 05:55:53.473534] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:53.702 [2024-12-13 05:55:53.486521] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:53.702 [2024-12-13 05:55:53.486539] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:53.702 [2024-12-13 05:55:53.501754] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:53.702 [2024-12-13 05:55:53.501773] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:53.702 [2024-12-13 05:55:53.515470] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:53.702 [2024-12-13 05:55:53.515503] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:53.702 [2024-12-13 05:55:53.530290] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:53.702 [2024-12-13 05:55:53.530308] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:53.702 [2024-12-13 05:55:53.545291] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:53.702 [2024-12-13 05:55:53.545309] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:53.702 [2024-12-13 05:55:53.559312] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:53.702 [2024-12-13 05:55:53.559329] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:53.702 [2024-12-13 05:55:53.574023] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:53.702 [2024-12-13 05:55:53.574041] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:53.703 [2024-12-13 05:55:53.589421] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:53.703 [2024-12-13 05:55:53.589440] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:53.703 [2024-12-13 05:55:53.602430] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:53.703 [2024-12-13 05:55:53.602456] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:53.703 [2024-12-13 05:55:53.617652] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:53.703 [2024-12-13 05:55:53.617671] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:53.703 [2024-12-13 05:55:53.628767] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:53.703 [2024-12-13 05:55:53.628785] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:53.703 [2024-12-13 05:55:53.643592] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:53.703 [2024-12-13 05:55:53.643612] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:53.703 [2024-12-13 05:55:53.659026] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:53.703 [2024-12-13 05:55:53.659044] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:53.703 [2024-12-13 05:55:53.673457] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:53.703 [2024-12-13 05:55:53.673476] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:53.703 16900.40 IOPS, 132.03 MiB/s [2024-12-13T04:55:53.718Z] [2024-12-13 05:55:53.683431] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:53.703 [2024-12-13 05:55:53.683456] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:53.703 00:39:53.703 Latency(us) 00:39:53.703 [2024-12-13T04:55:53.718Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:53.703 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:39:53.703 Nvme1n1 : 5.01 16902.29 132.05 0.00 0.00 7565.43 1903.66 13731.35 00:39:53.703 [2024-12-13T04:55:53.718Z] =================================================================================================================== 00:39:53.703 [2024-12-13T04:55:53.718Z] Total : 16902.29 132.05 0.00 0.00 7565.43 1903.66 13731.35 00:39:53.703 [2024-12-13 05:55:53.693572] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:53.703 [2024-12-13 05:55:53.693588] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:53.703 [2024-12-13 05:55:53.705575] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:53.703 [2024-12-13 05:55:53.705590] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:53.703 [2024-12-13 05:55:53.717580] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:53.703 [2024-12-13 05:55:53.717600] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:53.961 [2024-12-13 05:55:53.729572] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:53.961 [2024-12-13 05:55:53.729588] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:53.961 [2024-12-13 05:55:53.741575] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:53.961 [2024-12-13 05:55:53.741590] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:53.961 [2024-12-13 
05:55:53.753570] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:53.961 [2024-12-13 05:55:53.753585] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:53.961 [2024-12-13 05:55:53.765570] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:53.961 [2024-12-13 05:55:53.765585] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:53.961 [2024-12-13 05:55:53.777569] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:53.961 [2024-12-13 05:55:53.777583] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:53.961 [2024-12-13 05:55:53.789569] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:53.961 [2024-12-13 05:55:53.789583] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:53.961 [2024-12-13 05:55:53.801565] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:53.961 [2024-12-13 05:55:53.801574] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:53.961 [2024-12-13 05:55:53.813568] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:53.961 [2024-12-13 05:55:53.813581] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:53.961 [2024-12-13 05:55:53.825565] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:53.961 [2024-12-13 05:55:53.825576] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:53.961 [2024-12-13 05:55:53.837580] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:53.961 [2024-12-13 05:55:53.837594] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:53.961 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (598415) - No such process 00:39:53.961 05:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 598415 00:39:53.961 05:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:53.961 05:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:53.961 05:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:53.961 05:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:53.961 05:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:39:53.961 05:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:53.961 05:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:53.961 delay0 00:39:53.962 05:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:53.962 05:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:39:53.962 05:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
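
What the trace is doing here: zcopy.sh has killed the RPC loop that was deliberately re-adding NSID 1 to the paused subsystem (the source of the repeated errors above), removed the namespace, and is rebuilding it on top of a delay bdev so that the abort example invoked just below always finds commands still in flight. A minimal standalone sketch of that sequence, assuming a running nvmf_tgt that already exposes nqn.2016-06.io.spdk:cnode1 backed by a malloc0 bdev on 10.0.0.2:4420; the NQN, flags, and paths come from this trace, and scripts/rpc.py stands in for the test framework's rpc_cmd helper:

    #!/usr/bin/env bash
    set -e
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    # Remove the namespace the paused-subsystem loop was colliding with.
    "$SPDK"/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1

    # Wrap malloc0 in a delay bdev (1,000,000 us average and p99 latency for
    # reads and writes) so submitted I/O stays outstanding long enough to abort.
    "$SPDK"/scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000

    # Re-expose the delay bdev as NSID 1, then drive it with aborts for 5 s.
    "$SPDK"/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
    "$SPDK"/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'

The one-second artificial latency is the point of the exercise: with 64 queued commands each pinned for about a second, nearly every submitted I/O is still abortable, which is why the summary below reports ~29.5k aborts submitted against only 239 completed I/Os.
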
00:39:53.962 05:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:39:53.962 05:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:39:53.962 05:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:39:53.962 05:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:39:53.962 [2024-12-13 05:55:53.944754] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:40:02.069 Initializing NVMe Controllers
00:40:02.070 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:40:02.070 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:40:02.070 Initialization complete. Launching workers.
00:40:02.070 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 239, failed: 29398
00:40:02.070 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 29518, failed to submit 119
00:40:02.070 success 29430, unsuccessful 88, failed 0
00:40:02.070 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
00:40:02.070 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini
00:40:02.070 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup
00:40:02.070 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync
00:40:02.070 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:40:02.070 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e
00:40:02.070 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20}
00:40:02.070 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:40:02.070 rmmod nvme_tcp
00:40:02.070 rmmod nvme_fabrics
00:40:02.070 rmmod nvme_keyring
00:40:02.070 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:40:02.070 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e
00:40:02.070 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0
00:40:02.070 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 596622 ']'
00:40:02.070 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 596622
00:40:02.070 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 596622 ']'
00:40:02.070 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 596622
00:40:02.070 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname
00:40:02.070 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:40:02.070 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 596622 00:40:02.070 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:40:02.070 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:40:02.070 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 596622' 00:40:02.070 killing process with pid 596622 00:40:02.070 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 596622 00:40:02.070 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 596622 00:40:02.070 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:02.070 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:02.070 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:02.070 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:40:02.070 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:40:02.070 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:02.070 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:40:02.070 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:02.070 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:02.070 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:02.070 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:02.070 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:03.446 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:03.446 00:40:03.446 real 0m32.078s 00:40:03.446 user 0m41.398s 00:40:03.446 sys 0m12.926s 00:40:03.446 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:03.446 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:03.446 ************************************ 00:40:03.446 END TEST nvmf_zcopy 00:40:03.446 ************************************ 00:40:03.446 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:40:03.446 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:40:03.446 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:03.446 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:03.446 ************************************ 00:40:03.446 START TEST nvmf_nmic 00:40:03.446 
************************************ 00:40:03.446 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:40:03.705 * Looking for test storage... 00:40:03.706 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:03.706 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:40:03.706 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:40:03.706 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:40:03.706 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:40:03.706 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:03.706 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:03.706 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:03.706 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:40:03.706 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:40:03.706 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:40:03.706 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:40:03.706 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:40:03.706 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:40:03.706 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:40:03.706 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:03.706 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:40:03.706 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:40:03.706 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:03.706 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) ))
00:40:03.706 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1
00:40:03.706 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1
00:40:03.706 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:40:03.706 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1
00:40:03.706 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1
00:40:03.706 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2
00:40:03.706 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2
00:40:03.706 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:40:03.706 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2
00:40:03.706 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2
00:40:03.706 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:40:03.706 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:40:03.706 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0
00:40:03.706 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:40:03.706 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:40:03.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:40:03.706 --rc genhtml_branch_coverage=1
00:40:03.706 --rc genhtml_function_coverage=1
00:40:03.706 --rc genhtml_legend=1
00:40:03.706 --rc geninfo_all_blocks=1
00:40:03.706 --rc geninfo_unexecuted_blocks=1
00:40:03.706
00:40:03.706 '
[the identical option block is echoed three more times, for the LCOV_OPTS assignment at common/autotest_common.sh@1724 and the LCOV export and assignment at @1725; duplicates elided]
00:40:03.706 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:03.706 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:40:03.706 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:03.706 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:03.706 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:03.706 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:03.706 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:03.706 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:03.706 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:03.706 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:03.706 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:03.706 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:03.706 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:40:03.706 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:40:03.706 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:03.706 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:03.706 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:03.706 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:03.706 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:03.706 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:40:03.706 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:03.706 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:03.706 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:03.706 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[the same golangci/protoc/go toolchain triplet, already present, repeats several more times]:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:40:03.706 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=[the same value with /opt/go/1.21.1/bin rotated to the front; duplicate elided]
00:40:03.706 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=[the same value with /opt/protoc/21.7/bin rotated to the front; duplicate elided]
00:40:03.706 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH
00:40:03.707 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo [the duplicated PATH value again; elided]
00:40:03.707 05:56:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:03.706 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:03.706 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:03.707 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:03.707 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:03.707 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:40:03.707 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:40:03.707 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:40:03.707 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:03.707 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:03.707 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:03.707 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:03.707 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:03.707 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:03.707 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:03.707 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:03.707 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:40:03.707 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:40:03.707 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:40:03.707 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:10.272 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:10.272 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:40:10.272 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:10.272 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:10.272 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:10.272 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:10.272 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:10.272 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:40:10.272 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:10.272 05:56:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:40:10.272 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:40:10.272 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:40:10.272 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:40:10.272 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:40:10.272 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:40:10.272 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:10.272 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:10.272 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:10.272 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:10.272 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:10.272 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:10.272 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:10.272 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:10.272 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:10.272 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:10.272 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:10.272 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:10.272 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:10.272 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:10.272 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:10.272 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:10.272 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:10.272 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:10.272 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:10.272 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:40:10.272 Found 0000:af:00.0 (0x8086 - 0x159b) 00:40:10.272 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:10.272 05:56:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:10.272 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:10.272 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:10.272 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:10.272 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:10.272 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:40:10.272 Found 0000:af:00.1 (0x8086 - 0x159b) 00:40:10.272 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:10.272 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:10.272 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:10.272 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:10.272 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:10.272 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:10.272 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:10.272 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:10.272 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:10.272 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:10.272 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:10.272 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:10.272 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:10.272 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:10.272 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:10.272 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:40:10.272 Found net devices under 0000:af:00.0: cvl_0_0 00:40:10.272 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:10.272 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:10.272 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:10.272 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:10.272 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:10.272 
05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:10.272 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:10.272 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:10.272 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:40:10.272 Found net devices under 0000:af:00.1: cvl_0_1 00:40:10.272 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:10.272 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:40:10.272 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:40:10.272 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:40:10.272 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:40:10.272 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:40:10.272 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:10.272 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:10.272 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:10.272 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:10.272 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:10.272 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:10.272 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:10.272 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:10.272 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:10.272 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:10.272 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:10.272 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:10.272 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:10.272 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:10.272 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:10.272 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:10.272 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 
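
nvmftestinit is building the two-port loopback topology used by the rest of this run: one NIC port (cvl_0_0) moves into a private network namespace and becomes the target at 10.0.0.2, while its sibling (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1. A condensed sketch of the same plumbing, run as root; the interface names and addresses are taken from this log, and the ports are assumed to start with no addresses assigned (the link-up, firewall rule, and ping checks follow immediately below in the trace):

    # The target-side port lives in its own namespace, so initiator-to-target
    # NVMe/TCP traffic crosses a real link between the two ports.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Open the NVMe/TCP listener port on the initiator-side interface, then
    # verify reachability in both directions.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
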
00:40:10.272 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:10.272 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:10.272 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:10.273 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:10.273 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:10.273 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:10.273 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:10.273 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.256 ms 00:40:10.273 00:40:10.273 --- 10.0.0.2 ping statistics --- 00:40:10.273 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:10.273 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:40:10.273 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:10.273 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:40:10.273 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:40:10.273 00:40:10.273 --- 10.0.0.1 ping statistics --- 00:40:10.273 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:10.273 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:40:10.273 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:10.273 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:40:10.273 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:40:10.273 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:10.273 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:10.273 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:10.273 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:10.273 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:10.273 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:10.273 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:40:10.273 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:40:10.273 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:10.273 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:10.273 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=603715 00:40:10.273 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:40:10.273 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 603715 00:40:10.273 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 603715 ']' 00:40:10.273 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:10.273 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:10.273 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:10.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:10.273 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:10.273 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:10.273 [2024-12-13 05:56:09.653360] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:40:10.273 [2024-12-13 05:56:09.654229] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:40:10.273 [2024-12-13 05:56:09.654261] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:10.273 [2024-12-13 05:56:09.729460] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:40:10.273 [2024-12-13 05:56:09.752933] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:10.273 [2024-12-13 05:56:09.752971] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:10.273 [2024-12-13 05:56:09.752978] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:10.273 [2024-12-13 05:56:09.752983] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:10.273 [2024-12-13 05:56:09.752988] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:10.273 [2024-12-13 05:56:09.754424] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:40:10.273 [2024-12-13 05:56:09.754555] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:40:10.273 [2024-12-13 05:56:09.754587] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:40:10.273 [2024-12-13 05:56:09.754589] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:40:10.273 [2024-12-13 05:56:09.818827] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:10.273 [2024-12-13 05:56:09.819488] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:40:10.273 [2024-12-13 05:56:09.819822] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:40:10.273 [2024-12-13 05:56:09.820284] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:40:10.273 [2024-12-13 05:56:09.820296] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:40:10.273 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:10.273 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:40:10.273 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:40:10.273 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:10.273 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:10.273 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:10.273 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:40:10.273 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:10.273 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:10.273 [2024-12-13 05:56:09.899414] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:10.273 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:10.273 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:40:10.273 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:10.273 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:10.273 Malloc0 00:40:10.273 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:10.273 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:40:10.273 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:10.273 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:10.273 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:10.273 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:40:10.273 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:10.273 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:10.273 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:10.273 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
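The rpc_cmd calls traced around here provision the target over its RPC socket: create the TCP transport, back it with a RAM disk, and expose that disk as a namespace of cnode1 on 10.0.0.2:4420 (the listening notice follows just below). Run standalone through scripts/rpc.py (the rpc_py path set at the top of these tests), the same sequence would read:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192    # options exactly as traced
  $rpc bdev_malloc_create 64 512 -b Malloc0       # 64 MiB malloc bdev, 512 B blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420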
00:40:10.273 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:10.273 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:10.273 [2024-12-13 05:56:09.983770] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:10.273 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:10.273 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:40:10.273 test case1: single bdev can't be used in multiple subsystems 00:40:10.273 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:40:10.273 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:10.273 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:10.273 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:10.273 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:40:10.273 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:10.273 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:10.273 05:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:10.273 05:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:40:10.273 05:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:40:10.273 05:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:10.273 05:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:10.273 [2024-12-13 05:56:10.019133] bdev.c:8538:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:40:10.273 [2024-12-13 05:56:10.019159] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:40:10.273 [2024-12-13 05:56:10.019167] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:10.273 request: 00:40:10.273 { 00:40:10.273 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:40:10.273 "namespace": { 00:40:10.273 "bdev_name": "Malloc0", 00:40:10.273 "no_auto_visible": false, 00:40:10.273 "hide_metadata": false 00:40:10.273 }, 00:40:10.274 "method": "nvmf_subsystem_add_ns", 00:40:10.274 "req_id": 1 00:40:10.274 } 00:40:10.274 Got JSON-RPC error response 00:40:10.274 response: 00:40:10.274 { 00:40:10.274 "code": -32602, 00:40:10.274 "message": "Invalid parameters" 00:40:10.274 } 00:40:10.274 05:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:40:10.274 05:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:40:10.274 05:56:10 
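The JSON-RPC error above is the point of test case 1: Malloc0 is already claimed (type exclusive_write) by cnode1, so adding it to cnode2 must be rejected, and nmic.sh converts the -32602 response into a pass. A sketch of the check, following the shape of the target/nmic.sh@28-36 lines traced around it:

  nmic_status=0
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 || nmic_status=1
  if [ "$nmic_status" -eq 0 ]; then
      exit 1   # the add unexpectedly succeeded: fail the test
  fi
  echo ' Adding namespace failed - expected result.'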
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:40:10.274 05:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:40:10.274 Adding namespace failed - expected result. 00:40:10.274 05:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:40:10.274 test case2: host connect to nvmf target in multiple paths 00:40:10.274 05:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:40:10.274 05:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:10.274 05:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:10.274 [2024-12-13 05:56:10.031235] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:40:10.274 05:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:10.274 05:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:40:10.532 05:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:40:10.532 05:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:40:10.532 05:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:40:10.532 05:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:40:10.532 05:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:40:10.532 05:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:40:13.059 05:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:40:13.059 05:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:40:13.059 05:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:40:13.059 05:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:40:13.059 05:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:40:13.059 05:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:40:13.059 05:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:40:13.059 [global] 00:40:13.059 thread=1 00:40:13.059 invalidate=1 
00:40:13.059 rw=write 00:40:13.059 time_based=1 00:40:13.059 runtime=1 00:40:13.059 ioengine=libaio 00:40:13.059 direct=1 00:40:13.059 bs=4096 00:40:13.059 iodepth=1 00:40:13.059 norandommap=0 00:40:13.059 numjobs=1 00:40:13.059 00:40:13.059 verify_dump=1 00:40:13.059 verify_backlog=512 00:40:13.059 verify_state_save=0 00:40:13.059 do_verify=1 00:40:13.059 verify=crc32c-intel 00:40:13.059 [job0] 00:40:13.059 filename=/dev/nvme0n1 00:40:13.059 Could not set queue depth (nvme0n1) 00:40:13.059 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:13.059 fio-3.35 00:40:13.059 Starting 1 thread 00:40:14.433 00:40:14.433 job0: (groupid=0, jobs=1): err= 0: pid=604475: Fri Dec 13 05:56:14 2024 00:40:14.433 read: IOPS=21, BW=85.5KiB/s (87.6kB/s)(88.0KiB/1029msec) 00:40:14.433 slat (nsec): min=9397, max=23969, avg=21969.55, stdev=2848.75 00:40:14.433 clat (usec): min=40871, max=41221, avg=40974.82, stdev=69.27 00:40:14.433 lat (usec): min=40894, max=41230, avg=40996.79, stdev=67.09 00:40:14.433 clat percentiles (usec): 00:40:14.433 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:40:14.433 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:40:14.433 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:40:14.433 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:40:14.433 | 99.99th=[41157] 00:40:14.433 write: IOPS=497, BW=1990KiB/s (2038kB/s)(2048KiB/1029msec); 0 zone resets 00:40:14.433 slat (nsec): min=9253, max=39563, avg=10312.19, stdev=2008.47 00:40:14.433 clat (usec): min=123, max=287, avg=235.72, stdev=27.38 00:40:14.433 lat (usec): min=132, max=326, avg=246.03, stdev=27.51 00:40:14.433 clat percentiles (usec): 00:40:14.433 | 1.00th=[ 126], 5.00th=[ 130], 10.00th=[ 239], 20.00th=[ 241], 00:40:14.433 | 30.00th=[ 241], 40.00th=[ 243], 50.00th=[ 243], 60.00th=[ 243], 00:40:14.433 | 70.00th=[ 243], 80.00th=[ 245], 90.00th=[ 245], 95.00th=[ 247], 00:40:14.433 | 99.00th=[ 251], 99.50th=[ 251], 99.90th=[ 289], 99.95th=[ 289], 00:40:14.433 | 99.99th=[ 289] 00:40:14.433 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:40:14.433 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:40:14.433 lat (usec) : 250=94.76%, 500=1.12% 00:40:14.433 lat (msec) : 50=4.12% 00:40:14.433 cpu : usr=0.19%, sys=0.58%, ctx=534, majf=0, minf=1 00:40:14.433 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:14.433 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:14.433 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:14.433 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:14.433 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:14.433 00:40:14.433 Run status group 0 (all jobs): 00:40:14.434 READ: bw=85.5KiB/s (87.6kB/s), 85.5KiB/s-85.5KiB/s (87.6kB/s-87.6kB/s), io=88.0KiB (90.1kB), run=1029-1029msec 00:40:14.434 WRITE: bw=1990KiB/s (2038kB/s), 1990KiB/s-1990KiB/s (2038kB/s-2038kB/s), io=2048KiB (2097kB), run=1029-1029msec 00:40:14.434 00:40:14.434 Disk stats (read/write): 00:40:14.434 nvme0n1: ios=68/512, merge=0/0, ticks=754/119, in_queue=873, util=91.18% 00:40:14.434 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:40:14.434 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:40:14.434 05:56:14 
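For readability, the job options scattered through the fio stretch above assemble into the job file fio-wrapper echoed before the run: a single 4 KiB sequential-write job at queue depth 1 against /dev/nvme0n1, time-limited to one second, with crc32c-intel verification:

  [global]
  thread=1
  invalidate=1
  rw=write
  time_based=1
  runtime=1
  ioengine=libaio
  direct=1
  bs=4096
  iodepth=1
  norandommap=0
  numjobs=1
  verify_dump=1
  verify_backlog=512
  verify_state_save=0
  do_verify=1
  verify=crc32c-intel

  [job0]
  filename=/dev/nvme0n1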
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:40:14.434 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:40:14.434 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:40:14.434 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:40:14.434 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:40:14.434 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:40:14.434 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:40:14.434 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:40:14.434 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:40:14.434 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:14.434 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:40:14.434 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:14.434 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:40:14.434 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:14.434 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:14.434 rmmod nvme_tcp 00:40:14.434 rmmod nvme_fabrics 00:40:14.434 rmmod nvme_keyring 00:40:14.434 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:14.434 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:40:14.434 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:40:14.434 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 603715 ']' 00:40:14.434 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 603715 00:40:14.434 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 603715 ']' 00:40:14.434 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 603715 00:40:14.434 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:40:14.434 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:14.434 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 603715 00:40:14.434 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:14.434 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:14.434 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 603715' 00:40:14.434 killing process with pid 603715 00:40:14.434 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 603715 00:40:14.434 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 603715 00:40:14.692 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:14.692 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:14.692 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:14.692 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:40:14.692 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:40:14.692 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:14.692 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:40:14.692 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:14.692 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:14.692 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:14.692 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:14.692 05:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:16.595 05:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:16.595 00:40:16.595 real 0m13.118s 00:40:16.595 user 0m23.939s 00:40:16.595 sys 0m5.962s 00:40:16.595 05:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:16.595 05:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:16.595 ************************************ 00:40:16.595 END TEST nvmf_nmic 00:40:16.595 ************************************ 00:40:16.595 05:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:40:16.596 05:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:40:16.596 05:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:16.596 05:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:16.854 ************************************ 00:40:16.854 START TEST nvmf_fio_target 00:40:16.854 ************************************ 00:40:16.854 05:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:40:16.854 * Looking for test storage... 
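The nvmftestfini sequence traced just above (between the kill and the runtime summary) unwinds the bring-up. Gathered in one place, and with _remove_spdk_ns expanded to an assumed equivalent since its body is hidden behind the '15> /dev/null' redirection in this trace:

  kill 603715 && wait 603715                             # stop the target app
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the SPDK-tagged rule
  ip netns delete cvl_0_0_ns_spdk                        # assumed expansion of _remove_spdk_ns
  ip -4 addr flush cvl_0_1                               # clear the initiator address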
00:40:16.854 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:16.854 05:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:40:16.854 05:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:40:16.854 05:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:40:16.854 05:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:40:16.854 05:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:16.854 05:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:16.854 05:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:16.854 05:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:40:16.854 05:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:40:16.854 05:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:40:16.854 05:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:40:16.854 05:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:40:16.854 05:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:40:16.854 05:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:40:16.855 05:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:16.855 05:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:40:16.855 05:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:40:16.855 05:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:16.855 05:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:16.855 05:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:40:16.855 05:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:40:16.855 05:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:16.855 05:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:40:16.855 05:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:40:16.855 05:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:40:16.855 05:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:40:16.855 05:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:16.855 05:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:40:16.855 05:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:40:16.855 05:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:16.855 05:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:16.855 05:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:40:16.855 05:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:16.855 05:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:40:16.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:16.855 --rc genhtml_branch_coverage=1 00:40:16.855 --rc genhtml_function_coverage=1 00:40:16.855 --rc genhtml_legend=1 00:40:16.855 --rc geninfo_all_blocks=1 00:40:16.855 --rc geninfo_unexecuted_blocks=1 00:40:16.855 00:40:16.855 ' 00:40:16.855 05:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:40:16.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:16.855 --rc genhtml_branch_coverage=1 00:40:16.855 --rc genhtml_function_coverage=1 00:40:16.855 --rc genhtml_legend=1 00:40:16.855 --rc geninfo_all_blocks=1 00:40:16.855 --rc geninfo_unexecuted_blocks=1 00:40:16.855 00:40:16.855 ' 00:40:16.855 05:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:40:16.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:16.855 --rc genhtml_branch_coverage=1 00:40:16.855 --rc genhtml_function_coverage=1 00:40:16.855 --rc genhtml_legend=1 00:40:16.855 --rc geninfo_all_blocks=1 00:40:16.855 --rc geninfo_unexecuted_blocks=1 00:40:16.855 00:40:16.855 ' 00:40:16.855 05:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:40:16.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:16.855 --rc genhtml_branch_coverage=1 00:40:16.855 --rc genhtml_function_coverage=1 00:40:16.855 --rc genhtml_legend=1 00:40:16.855 --rc geninfo_all_blocks=1 00:40:16.855 --rc geninfo_unexecuted_blocks=1 00:40:16.855 
00:40:16.855 ' 00:40:16.855 05:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:16.855 05:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:40:16.855 05:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:16.855 05:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:16.855 05:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:16.855 05:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:16.855 05:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:16.855 05:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:16.855 05:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:16.855 05:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:16.855 05:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:16.855 05:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:16.855 05:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:40:16.855 05:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:40:16.855 05:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:16.855 05:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:16.855 05:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:16.855 05:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:16.855 05:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:16.855 05:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:40:16.855 05:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:16.855 05:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:16.855 05:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:16.855 05:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:16.855 05:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:16.855 05:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:16.855 05:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:40:16.855 05:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:16.855 05:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:40:16.855 05:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:16.855 05:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:16.855 05:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:16.855 05:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:16.855 05:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:40:16.855 05:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:16.855 05:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:16.855 05:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:16.855 05:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:16.855 05:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:16.855 05:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:40:16.855 05:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:40:16.855 05:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:16.855 05:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:40:16.855 05:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:16.855 05:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:16.855 05:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:16.855 05:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:16.855 05:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:16.855 05:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:16.855 05:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:16.855 05:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:16.856 05:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:40:16.856 05:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:40:16.856 05:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:40:16.856 05:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:40:23.423 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:23.423 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:40:23.423 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:23.423 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:23.423 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:23.423 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:23.424 05:56:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:23.424 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:40:23.424 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:23.424 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:40:23.424 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:40:23.424 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:40:23.424 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:40:23.424 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:40:23.424 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:40:23.424 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:23.424 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:23.424 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:23.424 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:23.424 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:23.424 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:23.424 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:23.424 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:23.424 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:23.424 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:23.424 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:23.424 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:23.424 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:23.424 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:23.424 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:23.424 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:23.424 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:23.424 05:56:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:23.424 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:23.424 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:40:23.424 Found 0000:af:00.0 (0x8086 - 0x159b) 00:40:23.424 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:23.424 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:23.424 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:23.424 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:23.424 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:23.424 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:23.424 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:40:23.424 Found 0000:af:00.1 (0x8086 - 0x159b) 00:40:23.424 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:23.424 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:23.424 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:23.424 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:23.424 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:23.424 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:23.424 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:23.424 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:23.424 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:23.424 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:23.424 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:23.424 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:23.424 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:23.424 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:23.424 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:23.424 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:40:23.424 Found net 
devices under 0000:af:00.0: cvl_0_0 00:40:23.424 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:23.424 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:23.424 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:23.424 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:23.424 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:23.424 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:23.424 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:23.424 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:23.424 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:40:23.424 Found net devices under 0000:af:00.1: cvl_0_1 00:40:23.424 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:23.424 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:40:23.424 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:40:23.424 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:40:23.424 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:40:23.424 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:40:23.424 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:23.424 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:23.424 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:23.424 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:23.424 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:23.424 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:23.424 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:23.424 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:23.424 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:23.424 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:23.424 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:40:23.424 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:23.424 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:23.424 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:23.424 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:23.424 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:23.424 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:23.424 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:23.424 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:23.424 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:23.424 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:23.424 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:23.424 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:23.424 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:23.424 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.354 ms 00:40:23.424 00:40:23.424 --- 10.0.0.2 ping statistics --- 00:40:23.424 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:23.424 rtt min/avg/max/mdev = 0.354/0.354/0.354/0.000 ms 00:40:23.424 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:23.424 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:23.424 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.167 ms 00:40:23.424 00:40:23.424 --- 10.0.0.1 ping statistics --- 00:40:23.424 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:23.424 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:40:23.424 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:23.424 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:40:23.424 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:40:23.424 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:23.424 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:23.424 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:23.424 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:23.424 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:23.424 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:23.424 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:40:23.424 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:40:23.424 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:23.424 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:40:23.424 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=607957 00:40:23.424 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 607957 00:40:23.424 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:40:23.424 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 607957 ']' 00:40:23.424 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:23.424 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:23.424 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:23.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
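For anyone replaying this phase by hand, the nvmf_tcp_init trace above reduces to a short root shell script: one port of the dual-port NIC (cvl_0_0) is moved into a private network namespace to play the target, its sibling (cvl_0_1) stays in the root namespace as the initiator, one iptables rule opens TCP/4420, and a ping in each direction proves the link. A minimal sketch, assuming the cvl_0_* port names and 10.0.0.0/24 addressing specific to this test bed:

#!/usr/bin/env bash
# Sketch of the namespace topology built by nvmf_tcp_init (run as root).
NS=cvl_0_0_ns_spdk

ip -4 addr flush cvl_0_0                 # start from clean addresses
ip -4 addr flush cvl_0_1

ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"          # target-side port lives in the namespace

ip addr add 10.0.0.1/24 dev cvl_0_1      # initiator IP, root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP

ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up

# Let NVMe/TCP traffic from the target reach the initiator port.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

ping -c 1 10.0.0.2                       # root namespace -> target namespace
ip netns exec "$NS" ping -c 1 10.0.0.1   # target namespace -> root namespace

The namespace is what lets a single machine act as both NVMe/TCP target and initiator over real NIC ports: each side gets its own routing table, so traffic between 10.0.0.1 and 10.0.0.2 genuinely crosses the wire between the two ports instead of being short-circuited through loopback.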
00:40:23.424 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:23.425 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:40:23.425 [2024-12-13 05:56:22.693159] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:40:23.425 [2024-12-13 05:56:22.694117] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:40:23.425 [2024-12-13 05:56:22.694151] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:23.425 [2024-12-13 05:56:22.772743] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:40:23.425 [2024-12-13 05:56:22.796105] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:23.425 [2024-12-13 05:56:22.796142] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:23.425 [2024-12-13 05:56:22.796148] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:23.425 [2024-12-13 05:56:22.796154] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:23.425 [2024-12-13 05:56:22.796159] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:23.425 [2024-12-13 05:56:22.797657] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:40:23.425 [2024-12-13 05:56:22.797766] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:40:23.425 [2024-12-13 05:56:22.797847] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:40:23.425 [2024-12-13 05:56:22.797848] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:40:23.425 [2024-12-13 05:56:22.862559] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:23.425 [2024-12-13 05:56:22.863119] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:40:23.425 [2024-12-13 05:56:22.863645] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:40:23.425 [2024-12-13 05:56:22.864028] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:40:23.425 [2024-12-13 05:56:22.864044] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
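Before the fio phases below, the target bring-up and provisioning that target/fio.sh performs are easier to follow as one condensed script than as interleaved trace lines. A sketch under the same assumptions (default /var/tmp/spdk.sock RPC socket, which, being a Unix socket, is reachable from the root namespace even though the target runs inside the netns; the per-host --hostnqn/--hostid flags passed to nvme connect and the harness's waitforlisten/waitforserial polling are elided):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
NS=cvl_0_0_ns_spdk

# Target in interrupt mode (reactors sleep on fds instead of busy-polling),
# four cores, launched inside the target namespace.
ip netns exec "$NS" "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --interrupt-mode -m 0xF &
while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.2; done   # crude stand-in for waitforlisten

RPC="$SPDK/scripts/rpc.py"
$RPC nvmf_create_transport -t tcp -o -u 8192

# Seven 64 MiB / 512 B-block malloc bdevs (auto-named Malloc0..Malloc6).
for _ in 0 1 2 3 4 5 6; do $RPC bdev_malloc_create 64 512; done
$RPC bdev_raid_create -n raid0   -z 64 -r 0      -b 'Malloc2 Malloc3'          # striped
$RPC bdev_raid_create -n concat0 -z 64 -r concat -b 'Malloc4 Malloc5 Malloc6'  # concatenated

# One subsystem, four namespaces, one TCP listener on the target IP.
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
for b in Malloc0 Malloc1 raid0 concat0; do
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$b"
done
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Initiator side: connect and expect four namespaces, nvme0n1..nvme0n4.
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420

Because raid0 stripes Malloc2/Malloc3 and concat0 concatenates Malloc4-Malloc6, cnode1 exports exactly four namespaces; that is why the waitforserial SPDKISFASTANDAWESOME 4 step below counts four matching serials in lsblk before the fio jobs start against /dev/nvme0n1-n4.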
00:40:23.425 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:23.425 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:40:23.425 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:40:23.425 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:23.425 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:40:23.425 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:23.425 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:40:23.425 [2024-12-13 05:56:23.094629] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:23.425 05:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:40:23.425 05:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:40:23.425 05:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:40:23.683 05:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:40:23.683 05:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:40:23.942 05:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:40:23.942 05:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:40:24.200 05:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:40:24.200 05:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:40:24.200 05:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:40:24.458 05:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:40:24.458 05:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:40:24.716 05:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:40:24.716 05:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:40:24.983 05:56:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:40:24.983 05:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:40:24.983 05:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:40:25.264 05:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:40:25.264 05:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:40:25.555 05:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:40:25.555 05:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:40:25.555 05:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:25.835 [2024-12-13 05:56:25.706539] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:25.835 05:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:40:26.101 05:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:40:26.368 05:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:40:26.625 05:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:40:26.625 05:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:40:26.625 05:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:40:26.625 05:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:40:26.625 05:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:40:26.625 05:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:40:28.524 05:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:40:28.524 05:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o 
NAME,SERIAL 00:40:28.524 05:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:40:28.524 05:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:40:28.524 05:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:40:28.524 05:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:40:28.524 05:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:40:28.524 [global] 00:40:28.524 thread=1 00:40:28.524 invalidate=1 00:40:28.524 rw=write 00:40:28.524 time_based=1 00:40:28.524 runtime=1 00:40:28.524 ioengine=libaio 00:40:28.524 direct=1 00:40:28.524 bs=4096 00:40:28.524 iodepth=1 00:40:28.524 norandommap=0 00:40:28.524 numjobs=1 00:40:28.524 00:40:28.524 verify_dump=1 00:40:28.524 verify_backlog=512 00:40:28.524 verify_state_save=0 00:40:28.524 do_verify=1 00:40:28.524 verify=crc32c-intel 00:40:28.524 [job0] 00:40:28.524 filename=/dev/nvme0n1 00:40:28.524 [job1] 00:40:28.524 filename=/dev/nvme0n2 00:40:28.524 [job2] 00:40:28.524 filename=/dev/nvme0n3 00:40:28.524 [job3] 00:40:28.524 filename=/dev/nvme0n4 00:40:28.524 Could not set queue depth (nvme0n1) 00:40:28.524 Could not set queue depth (nvme0n2) 00:40:28.524 Could not set queue depth (nvme0n3) 00:40:28.524 Could not set queue depth (nvme0n4) 00:40:28.782 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:28.782 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:28.782 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:28.782 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:28.782 fio-3.35 00:40:28.782 Starting 4 threads 00:40:30.158 00:40:30.158 job0: (groupid=0, jobs=1): err= 0: pid=609206: Fri Dec 13 05:56:29 2024 00:40:30.158 read: IOPS=682, BW=2732KiB/s (2797kB/s)(2792KiB/1022msec) 00:40:30.158 slat (nsec): min=6512, max=25034, avg=7498.20, stdev=2518.11 00:40:30.158 clat (usec): min=214, max=41086, avg=1177.16, stdev=6095.21 00:40:30.158 lat (usec): min=221, max=41109, avg=1184.66, stdev=6097.31 00:40:30.158 clat percentiles (usec): 00:40:30.158 | 1.00th=[ 223], 5.00th=[ 227], 10.00th=[ 229], 20.00th=[ 233], 00:40:30.158 | 30.00th=[ 237], 40.00th=[ 239], 50.00th=[ 241], 60.00th=[ 245], 00:40:30.158 | 70.00th=[ 249], 80.00th=[ 253], 90.00th=[ 262], 95.00th=[ 277], 00:40:30.158 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:40:30.158 | 99.99th=[41157] 00:40:30.158 write: IOPS=1001, BW=4008KiB/s (4104kB/s)(4096KiB/1022msec); 0 zone resets 00:40:30.158 slat (nsec): min=10295, max=35407, avg=11390.59, stdev=1279.50 00:40:30.158 clat (usec): min=148, max=1335, avg=174.88, stdev=38.43 00:40:30.158 lat (usec): min=159, max=1349, avg=186.27, stdev=38.60 00:40:30.158 clat percentiles (usec): 00:40:30.158 | 1.00th=[ 151], 5.00th=[ 157], 10.00th=[ 161], 20.00th=[ 163], 00:40:30.158 | 30.00th=[ 167], 40.00th=[ 169], 50.00th=[ 172], 60.00th=[ 176], 00:40:30.158 | 70.00th=[ 178], 80.00th=[ 184], 90.00th=[ 192], 95.00th=[ 198], 00:40:30.158 | 99.00th=[ 
212], 99.50th=[ 219], 99.90th=[ 237], 99.95th=[ 1336], 00:40:30.158 | 99.99th=[ 1336] 00:40:30.158 bw ( KiB/s): min= 8192, max= 8192, per=31.45%, avg=8192.00, stdev= 0.00, samples=1 00:40:30.158 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:40:30.158 lat (usec) : 250=88.91%, 500=10.10% 00:40:30.158 lat (msec) : 2=0.06%, 50=0.93% 00:40:30.158 cpu : usr=1.47%, sys=1.08%, ctx=1725, majf=0, minf=1 00:40:30.158 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:30.158 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:30.158 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:30.158 issued rwts: total=698,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:30.158 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:30.158 job1: (groupid=0, jobs=1): err= 0: pid=609218: Fri Dec 13 05:56:29 2024 00:40:30.158 read: IOPS=2383, BW=9534KiB/s (9763kB/s)(9544KiB/1001msec) 00:40:30.158 slat (nsec): min=6144, max=26251, avg=6993.77, stdev=920.78 00:40:30.158 clat (usec): min=180, max=529, avg=229.03, stdev=36.19 00:40:30.158 lat (usec): min=187, max=536, avg=236.02, stdev=36.23 00:40:30.158 clat percentiles (usec): 00:40:30.158 | 1.00th=[ 200], 5.00th=[ 206], 10.00th=[ 208], 20.00th=[ 212], 00:40:30.158 | 30.00th=[ 215], 40.00th=[ 217], 50.00th=[ 221], 60.00th=[ 223], 00:40:30.158 | 70.00th=[ 229], 80.00th=[ 241], 90.00th=[ 253], 95.00th=[ 277], 00:40:30.158 | 99.00th=[ 453], 99.50th=[ 490], 99.90th=[ 519], 99.95th=[ 519], 00:40:30.158 | 99.99th=[ 529] 00:40:30.158 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:40:30.158 slat (nsec): min=9028, max=38040, avg=10062.26, stdev=1110.44 00:40:30.158 clat (usec): min=124, max=345, avg=156.42, stdev=20.58 00:40:30.158 lat (usec): min=134, max=359, avg=166.49, stdev=20.71 00:40:30.158 clat percentiles (usec): 00:40:30.158 | 1.00th=[ 130], 5.00th=[ 135], 10.00th=[ 139], 20.00th=[ 143], 00:40:30.158 | 30.00th=[ 145], 40.00th=[ 147], 50.00th=[ 151], 60.00th=[ 153], 00:40:30.158 | 70.00th=[ 159], 80.00th=[ 172], 90.00th=[ 188], 95.00th=[ 196], 00:40:30.158 | 99.00th=[ 227], 99.50th=[ 235], 99.90th=[ 247], 99.95th=[ 258], 00:40:30.158 | 99.99th=[ 347] 00:40:30.158 bw ( KiB/s): min=11528, max=11528, per=44.25%, avg=11528.00, stdev= 0.00, samples=1 00:40:30.158 iops : min= 2882, max= 2882, avg=2882.00, stdev= 0.00, samples=1 00:40:30.158 lat (usec) : 250=94.30%, 500=5.54%, 750=0.16% 00:40:30.158 cpu : usr=2.00%, sys=5.00%, ctx=4946, majf=0, minf=2 00:40:30.158 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:30.158 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:30.158 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:30.158 issued rwts: total=2386,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:30.158 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:30.158 job2: (groupid=0, jobs=1): err= 0: pid=609234: Fri Dec 13 05:56:29 2024 00:40:30.158 read: IOPS=34, BW=138KiB/s (141kB/s)(140KiB/1016msec) 00:40:30.158 slat (nsec): min=7629, max=33891, avg=18566.17, stdev=7154.53 00:40:30.158 clat (usec): min=232, max=45022, avg=25943.83, stdev=20040.19 00:40:30.158 lat (usec): min=241, max=45050, avg=25962.39, stdev=20043.76 00:40:30.158 clat percentiles (usec): 00:40:30.158 | 1.00th=[ 233], 5.00th=[ 235], 10.00th=[ 241], 20.00th=[ 251], 00:40:30.158 | 30.00th=[ 269], 40.00th=[40633], 50.00th=[41157], 60.00th=[41157], 00:40:30.158 | 
70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:40:30.158 | 99.00th=[44827], 99.50th=[44827], 99.90th=[44827], 99.95th=[44827], 00:40:30.158 | 99.99th=[44827] 00:40:30.158 write: IOPS=503, BW=2016KiB/s (2064kB/s)(2048KiB/1016msec); 0 zone resets 00:40:30.158 slat (nsec): min=10214, max=39607, avg=12559.30, stdev=2174.78 00:40:30.158 clat (usec): min=152, max=368, avg=194.23, stdev=26.01 00:40:30.158 lat (usec): min=165, max=407, avg=206.79, stdev=26.28 00:40:30.158 clat percentiles (usec): 00:40:30.158 | 1.00th=[ 159], 5.00th=[ 167], 10.00th=[ 172], 20.00th=[ 176], 00:40:30.158 | 30.00th=[ 178], 40.00th=[ 182], 50.00th=[ 186], 60.00th=[ 192], 00:40:30.158 | 70.00th=[ 198], 80.00th=[ 215], 90.00th=[ 241], 95.00th=[ 241], 00:40:30.158 | 99.00th=[ 247], 99.50th=[ 251], 99.90th=[ 367], 99.95th=[ 367], 00:40:30.158 | 99.99th=[ 367] 00:40:30.158 bw ( KiB/s): min= 4096, max= 4096, per=15.72%, avg=4096.00, stdev= 0.00, samples=1 00:40:30.158 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:40:30.158 lat (usec) : 250=93.97%, 500=2.01% 00:40:30.158 lat (msec) : 50=4.02% 00:40:30.158 cpu : usr=0.49%, sys=0.99%, ctx=547, majf=0, minf=2 00:40:30.158 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:30.158 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:30.158 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:30.158 issued rwts: total=35,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:30.158 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:30.158 job3: (groupid=0, jobs=1): err= 0: pid=609239: Fri Dec 13 05:56:29 2024 00:40:30.158 read: IOPS=2046, BW=8188KiB/s (8384kB/s)(8196KiB/1001msec) 00:40:30.158 slat (nsec): min=7454, max=81204, avg=8704.06, stdev=2226.54 00:40:30.158 clat (usec): min=199, max=490, avg=230.91, stdev=27.14 00:40:30.158 lat (usec): min=207, max=508, avg=239.62, stdev=27.57 00:40:30.158 clat percentiles (usec): 00:40:30.158 | 1.00th=[ 206], 5.00th=[ 210], 10.00th=[ 212], 20.00th=[ 215], 00:40:30.158 | 30.00th=[ 219], 40.00th=[ 221], 50.00th=[ 223], 60.00th=[ 229], 00:40:30.158 | 70.00th=[ 237], 80.00th=[ 245], 90.00th=[ 255], 95.00th=[ 269], 00:40:30.158 | 99.00th=[ 322], 99.50th=[ 437], 99.90th=[ 486], 99.95th=[ 490], 00:40:30.158 | 99.99th=[ 490] 00:40:30.158 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:40:30.158 slat (nsec): min=9995, max=40454, avg=11739.61, stdev=1808.24 00:40:30.158 clat (usec): min=126, max=433, avg=181.96, stdev=31.51 00:40:30.158 lat (usec): min=136, max=443, avg=193.70, stdev=31.77 00:40:30.158 clat percentiles (usec): 00:40:30.158 | 1.00th=[ 133], 5.00th=[ 145], 10.00th=[ 153], 20.00th=[ 159], 00:40:30.158 | 30.00th=[ 163], 40.00th=[ 169], 50.00th=[ 178], 60.00th=[ 184], 00:40:30.158 | 70.00th=[ 190], 80.00th=[ 198], 90.00th=[ 221], 95.00th=[ 258], 00:40:30.158 | 99.00th=[ 281], 99.50th=[ 285], 99.90th=[ 355], 99.95th=[ 367], 00:40:30.158 | 99.99th=[ 433] 00:40:30.158 bw ( KiB/s): min= 9592, max= 9592, per=36.82%, avg=9592.00, stdev= 0.00, samples=1 00:40:30.158 iops : min= 2398, max= 2398, avg=2398.00, stdev= 0.00, samples=1 00:40:30.158 lat (usec) : 250=90.65%, 500=9.35% 00:40:30.158 cpu : usr=3.90%, sys=7.50%, ctx=4609, majf=0, minf=1 00:40:30.158 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:30.158 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:30.158 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:40:30.158 issued rwts: total=2049,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:30.158 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:30.158 00:40:30.158 Run status group 0 (all jobs): 00:40:30.158 READ: bw=19.8MiB/s (20.7MB/s), 138KiB/s-9534KiB/s (141kB/s-9763kB/s), io=20.2MiB (21.2MB), run=1001-1022msec 00:40:30.158 WRITE: bw=25.4MiB/s (26.7MB/s), 2016KiB/s-9.99MiB/s (2064kB/s-10.5MB/s), io=26.0MiB (27.3MB), run=1001-1022msec 00:40:30.158 00:40:30.158 Disk stats (read/write): 00:40:30.158 nvme0n1: ios=745/1024, merge=0/0, ticks=1168/177, in_queue=1345, util=97.90% 00:40:30.158 nvme0n2: ios=2061/2216, merge=0/0, ticks=453/331, in_queue=784, util=86.79% 00:40:30.158 nvme0n3: ios=31/512, merge=0/0, ticks=740/100, in_queue=840, util=88.95% 00:40:30.158 nvme0n4: ios=1787/2048, merge=0/0, ticks=389/368, in_queue=757, util=89.60% 00:40:30.158 05:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:40:30.158 [global] 00:40:30.158 thread=1 00:40:30.158 invalidate=1 00:40:30.158 rw=randwrite 00:40:30.158 time_based=1 00:40:30.158 runtime=1 00:40:30.158 ioengine=libaio 00:40:30.158 direct=1 00:40:30.158 bs=4096 00:40:30.159 iodepth=1 00:40:30.159 norandommap=0 00:40:30.159 numjobs=1 00:40:30.159 00:40:30.159 verify_dump=1 00:40:30.159 verify_backlog=512 00:40:30.159 verify_state_save=0 00:40:30.159 do_verify=1 00:40:30.159 verify=crc32c-intel 00:40:30.159 [job0] 00:40:30.159 filename=/dev/nvme0n1 00:40:30.159 [job1] 00:40:30.159 filename=/dev/nvme0n2 00:40:30.159 [job2] 00:40:30.159 filename=/dev/nvme0n3 00:40:30.159 [job3] 00:40:30.159 filename=/dev/nvme0n4 00:40:30.159 Could not set queue depth (nvme0n1) 00:40:30.159 Could not set queue depth (nvme0n2) 00:40:30.159 Could not set queue depth (nvme0n3) 00:40:30.159 Could not set queue depth (nvme0n4) 00:40:30.417 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:30.417 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:30.417 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:30.417 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:30.417 fio-3.35 00:40:30.417 Starting 4 threads 00:40:31.793 00:40:31.793 job0: (groupid=0, jobs=1): err= 0: pid=609621: Fri Dec 13 05:56:31 2024 00:40:31.793 read: IOPS=2319, BW=9279KiB/s (9501kB/s)(9288KiB/1001msec) 00:40:31.793 slat (nsec): min=6620, max=26046, avg=7505.18, stdev=763.19 00:40:31.793 clat (usec): min=188, max=390, avg=247.03, stdev= 8.24 00:40:31.793 lat (usec): min=195, max=400, avg=254.54, stdev= 8.21 00:40:31.793 clat percentiles (usec): 00:40:31.793 | 1.00th=[ 212], 5.00th=[ 239], 10.00th=[ 241], 20.00th=[ 243], 00:40:31.793 | 30.00th=[ 245], 40.00th=[ 247], 50.00th=[ 247], 60.00th=[ 249], 00:40:31.793 | 70.00th=[ 249], 80.00th=[ 251], 90.00th=[ 253], 95.00th=[ 255], 00:40:31.793 | 99.00th=[ 262], 99.50th=[ 265], 99.90th=[ 306], 99.95th=[ 310], 00:40:31.793 | 99.99th=[ 392] 00:40:31.793 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:40:31.793 slat (nsec): min=9340, max=41485, avg=10510.03, stdev=1593.79 00:40:31.793 clat (usec): min=115, max=328, avg=145.02, stdev=32.22 00:40:31.793 lat (usec): min=125, max=360, avg=155.53, stdev=32.52 
00:40:31.793 clat percentiles (usec): 00:40:31.793 | 1.00th=[ 121], 5.00th=[ 124], 10.00th=[ 125], 20.00th=[ 128], 00:40:31.793 | 30.00th=[ 129], 40.00th=[ 131], 50.00th=[ 133], 60.00th=[ 135], 00:40:31.793 | 70.00th=[ 137], 80.00th=[ 151], 90.00th=[ 194], 95.00th=[ 237], 00:40:31.793 | 99.00th=[ 249], 99.50th=[ 262], 99.90th=[ 285], 99.95th=[ 314], 00:40:31.793 | 99.99th=[ 330] 00:40:31.793 bw ( KiB/s): min=11176, max=11176, per=69.37%, avg=11176.00, stdev= 0.00, samples=1 00:40:31.793 iops : min= 2794, max= 2794, avg=2794.00, stdev= 0.00, samples=1 00:40:31.793 lat (usec) : 250=86.75%, 500=13.25% 00:40:31.793 cpu : usr=1.90%, sys=5.10%, ctx=4884, majf=0, minf=1 00:40:31.793 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:31.793 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:31.794 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:31.794 issued rwts: total=2322,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:31.794 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:31.794 job1: (groupid=0, jobs=1): err= 0: pid=609633: Fri Dec 13 05:56:31 2024 00:40:31.794 read: IOPS=39, BW=157KiB/s (161kB/s)(160KiB/1017msec) 00:40:31.794 slat (nsec): min=7615, max=28248, avg=15857.00, stdev=6387.84 00:40:31.794 clat (usec): min=241, max=45049, avg=22809.97, stdev=20591.63 00:40:31.794 lat (usec): min=251, max=45077, avg=22825.82, stdev=20592.95 00:40:31.794 clat percentiles (usec): 00:40:31.794 | 1.00th=[ 241], 5.00th=[ 243], 10.00th=[ 265], 20.00th=[ 355], 00:40:31.794 | 30.00th=[ 375], 40.00th=[ 396], 50.00th=[40633], 60.00th=[40633], 00:40:31.794 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:40:31.794 | 99.00th=[44827], 99.50th=[44827], 99.90th=[44827], 99.95th=[44827], 00:40:31.794 | 99.99th=[44827] 00:40:31.794 write: IOPS=503, BW=2014KiB/s (2062kB/s)(2048KiB/1017msec); 0 zone resets 00:40:31.794 slat (nsec): min=9939, max=69752, avg=12477.94, stdev=3252.61 00:40:31.794 clat (usec): min=146, max=484, avg=185.97, stdev=32.13 00:40:31.794 lat (usec): min=156, max=495, avg=198.45, stdev=32.46 00:40:31.794 clat percentiles (usec): 00:40:31.794 | 1.00th=[ 153], 5.00th=[ 159], 10.00th=[ 163], 20.00th=[ 167], 00:40:31.794 | 30.00th=[ 169], 40.00th=[ 174], 50.00th=[ 178], 60.00th=[ 180], 00:40:31.794 | 70.00th=[ 186], 80.00th=[ 198], 90.00th=[ 235], 95.00th=[ 243], 00:40:31.794 | 99.00th=[ 281], 99.50th=[ 379], 99.90th=[ 486], 99.95th=[ 486], 00:40:31.794 | 99.99th=[ 486] 00:40:31.794 bw ( KiB/s): min= 4096, max= 4096, per=25.43%, avg=4096.00, stdev= 0.00, samples=1 00:40:31.794 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:40:31.794 lat (usec) : 250=90.58%, 500=5.43% 00:40:31.794 lat (msec) : 50=3.99% 00:40:31.794 cpu : usr=0.59%, sys=0.89%, ctx=553, majf=0, minf=1 00:40:31.794 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:31.794 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:31.794 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:31.794 issued rwts: total=40,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:31.794 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:31.794 job2: (groupid=0, jobs=1): err= 0: pid=609636: Fri Dec 13 05:56:31 2024 00:40:31.794 read: IOPS=21, BW=87.6KiB/s (89.7kB/s)(88.0KiB/1005msec) 00:40:31.794 slat (nsec): min=9594, max=25788, avg=22457.59, stdev=3014.73 00:40:31.794 clat (usec): min=40863, max=41112, avg=40967.70, stdev=52.82 
00:40:31.794 lat (usec): min=40885, max=41137, avg=40990.16, stdev=52.62 00:40:31.794 clat percentiles (usec): 00:40:31.794 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:40:31.794 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:40:31.794 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:40:31.794 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:40:31.794 | 99.99th=[41157] 00:40:31.794 write: IOPS=509, BW=2038KiB/s (2087kB/s)(2048KiB/1005msec); 0 zone resets 00:40:31.794 slat (nsec): min=10492, max=45908, avg=11810.34, stdev=2402.05 00:40:31.794 clat (usec): min=154, max=294, avg=185.30, stdev=13.72 00:40:31.794 lat (usec): min=167, max=340, avg=197.11, stdev=14.58 00:40:31.794 clat percentiles (usec): 00:40:31.794 | 1.00th=[ 163], 5.00th=[ 169], 10.00th=[ 172], 20.00th=[ 176], 00:40:31.794 | 30.00th=[ 178], 40.00th=[ 182], 50.00th=[ 184], 60.00th=[ 186], 00:40:31.794 | 70.00th=[ 190], 80.00th=[ 196], 90.00th=[ 202], 95.00th=[ 208], 00:40:31.794 | 99.00th=[ 223], 99.50th=[ 253], 99.90th=[ 293], 99.95th=[ 293], 00:40:31.794 | 99.99th=[ 293] 00:40:31.794 bw ( KiB/s): min= 4096, max= 4096, per=25.43%, avg=4096.00, stdev= 0.00, samples=1 00:40:31.794 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:40:31.794 lat (usec) : 250=95.32%, 500=0.56% 00:40:31.794 lat (msec) : 50=4.12% 00:40:31.794 cpu : usr=0.50%, sys=0.80%, ctx=536, majf=0, minf=1 00:40:31.794 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:31.794 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:31.794 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:31.794 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:31.794 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:31.794 job3: (groupid=0, jobs=1): err= 0: pid=609637: Fri Dec 13 05:56:31 2024 00:40:31.794 read: IOPS=21, BW=87.0KiB/s (89.0kB/s)(88.0KiB/1012msec) 00:40:31.794 slat (nsec): min=11910, max=24341, avg=22584.00, stdev=2426.50 00:40:31.794 clat (usec): min=40785, max=41082, avg=40959.89, stdev=68.00 00:40:31.794 lat (usec): min=40808, max=41106, avg=40982.47, stdev=68.89 00:40:31.794 clat percentiles (usec): 00:40:31.794 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:40:31.794 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:40:31.794 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:40:31.794 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:40:31.794 | 99.99th=[41157] 00:40:31.794 write: IOPS=505, BW=2024KiB/s (2072kB/s)(2048KiB/1012msec); 0 zone resets 00:40:31.794 slat (nsec): min=12793, max=38864, avg=13939.00, stdev=1947.13 00:40:31.794 clat (usec): min=141, max=296, avg=196.70, stdev=30.01 00:40:31.794 lat (usec): min=155, max=309, avg=210.64, stdev=30.32 00:40:31.794 clat percentiles (usec): 00:40:31.794 | 1.00th=[ 143], 5.00th=[ 151], 10.00th=[ 169], 20.00th=[ 176], 00:40:31.794 | 30.00th=[ 180], 40.00th=[ 184], 50.00th=[ 188], 60.00th=[ 194], 00:40:31.794 | 70.00th=[ 206], 80.00th=[ 225], 90.00th=[ 241], 95.00th=[ 255], 00:40:31.794 | 99.00th=[ 273], 99.50th=[ 277], 99.90th=[ 297], 99.95th=[ 297], 00:40:31.794 | 99.99th=[ 297] 00:40:31.794 bw ( KiB/s): min= 4096, max= 4096, per=25.43%, avg=4096.00, stdev= 0.00, samples=1 00:40:31.794 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:40:31.794 lat (usec) : 250=89.89%, 
500=5.99% 00:40:31.794 lat (msec) : 50=4.12% 00:40:31.794 cpu : usr=0.30%, sys=1.19%, ctx=535, majf=0, minf=1 00:40:31.794 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:31.794 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:31.794 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:31.794 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:31.794 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:31.794 00:40:31.794 Run status group 0 (all jobs): 00:40:31.794 READ: bw=9463KiB/s (9690kB/s), 87.0KiB/s-9279KiB/s (89.0kB/s-9501kB/s), io=9624KiB (9855kB), run=1001-1017msec 00:40:31.794 WRITE: bw=15.7MiB/s (16.5MB/s), 2014KiB/s-9.99MiB/s (2062kB/s-10.5MB/s), io=16.0MiB (16.8MB), run=1001-1017msec 00:40:31.794 00:40:31.794 Disk stats (read/write): 00:40:31.794 nvme0n1: ios=1898/2048, merge=0/0, ticks=668/294, in_queue=962, util=96.59% 00:40:31.794 nvme0n2: ios=73/512, merge=0/0, ticks=725/84, in_queue=809, util=84.19% 00:40:31.794 nvme0n3: ios=52/512, merge=0/0, ticks=1412/93, in_queue=1505, util=98.70% 00:40:31.794 nvme0n4: ios=74/512, merge=0/0, ticks=1421/97, in_queue=1518, util=97.80% 00:40:31.794 05:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:40:31.794 [global] 00:40:31.794 thread=1 00:40:31.794 invalidate=1 00:40:31.794 rw=write 00:40:31.794 time_based=1 00:40:31.794 runtime=1 00:40:31.794 ioengine=libaio 00:40:31.794 direct=1 00:40:31.794 bs=4096 00:40:31.794 iodepth=128 00:40:31.794 norandommap=0 00:40:31.794 numjobs=1 00:40:31.794 00:40:31.794 verify_dump=1 00:40:31.794 verify_backlog=512 00:40:31.794 verify_state_save=0 00:40:31.794 do_verify=1 00:40:31.794 verify=crc32c-intel 00:40:31.794 [job0] 00:40:31.794 filename=/dev/nvme0n1 00:40:31.794 [job1] 00:40:31.794 filename=/dev/nvme0n2 00:40:31.794 [job2] 00:40:31.794 filename=/dev/nvme0n3 00:40:31.794 [job3] 00:40:31.794 filename=/dev/nvme0n4 00:40:31.794 Could not set queue depth (nvme0n1) 00:40:31.794 Could not set queue depth (nvme0n2) 00:40:31.794 Could not set queue depth (nvme0n3) 00:40:31.794 Could not set queue depth (nvme0n4) 00:40:32.053 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:40:32.053 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:40:32.053 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:40:32.053 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:40:32.053 fio-3.35 00:40:32.053 Starting 4 threads 00:40:33.439 00:40:33.439 job0: (groupid=0, jobs=1): err= 0: pid=609997: Fri Dec 13 05:56:33 2024 00:40:33.439 read: IOPS=3555, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1008msec) 00:40:33.439 slat (nsec): min=1687, max=18019k, avg=113666.45, stdev=932251.06 00:40:33.439 clat (usec): min=3224, max=55047, avg=14335.22, stdev=5981.85 00:40:33.439 lat (usec): min=3233, max=55052, avg=14448.89, stdev=6060.67 00:40:33.439 clat percentiles (usec): 00:40:33.439 | 1.00th=[ 6980], 5.00th=[ 8979], 10.00th=[10028], 20.00th=[10683], 00:40:33.439 | 30.00th=[10945], 40.00th=[11338], 50.00th=[12387], 60.00th=[13829], 00:40:33.439 | 70.00th=[15139], 80.00th=[17433], 90.00th=[20841], 95.00th=[23987], 00:40:33.439 | 99.00th=[42206], 
99.50th=[48497], 99.90th=[54789], 99.95th=[54789], 00:40:33.439 | 99.99th=[54789] 00:40:33.439 write: IOPS=3703, BW=14.5MiB/s (15.2MB/s)(14.6MiB/1008msec); 0 zone resets 00:40:33.439 slat (usec): min=2, max=10342, avg=147.50, stdev=841.58 00:40:33.439 clat (usec): min=513, max=90135, avg=20460.24, stdev=18055.79 00:40:33.439 lat (usec): min=544, max=90148, avg=20607.73, stdev=18164.93 00:40:33.439 clat percentiles (usec): 00:40:33.439 | 1.00th=[ 1647], 5.00th=[ 5538], 10.00th=[ 6980], 20.00th=[ 9372], 00:40:33.439 | 30.00th=[10683], 40.00th=[11207], 50.00th=[11469], 60.00th=[13435], 00:40:33.439 | 70.00th=[17171], 80.00th=[36963], 90.00th=[52167], 95.00th=[53216], 00:40:33.439 | 99.00th=[84411], 99.50th=[88605], 99.90th=[89654], 99.95th=[89654], 00:40:33.439 | 99.99th=[89654] 00:40:33.439 bw ( KiB/s): min=12096, max=16752, per=18.52%, avg=14424.00, stdev=3292.29, samples=2 00:40:33.439 iops : min= 3024, max= 4188, avg=3606.00, stdev=823.07, samples=2 00:40:33.439 lat (usec) : 750=0.11%, 1000=0.03% 00:40:33.439 lat (msec) : 2=0.53%, 4=1.04%, 10=14.81%, 20=62.55%, 50=14.61% 00:40:33.439 lat (msec) : 100=6.31% 00:40:33.439 cpu : usr=2.58%, sys=5.36%, ctx=328, majf=0, minf=1 00:40:33.439 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:40:33.439 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:33.439 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:40:33.439 issued rwts: total=3584,3733,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:33.439 latency : target=0, window=0, percentile=100.00%, depth=128 00:40:33.439 job1: (groupid=0, jobs=1): err= 0: pid=609998: Fri Dec 13 05:56:33 2024 00:40:33.439 read: IOPS=5693, BW=22.2MiB/s (23.3MB/s)(22.3MiB/1002msec) 00:40:33.439 slat (nsec): min=1136, max=11352k, avg=84300.99, stdev=574301.90 00:40:33.439 clat (usec): min=646, max=49388, avg=11018.33, stdev=3126.60 00:40:33.439 lat (usec): min=3364, max=49981, avg=11102.63, stdev=3162.99 00:40:33.439 clat percentiles (usec): 00:40:33.439 | 1.00th=[ 6521], 5.00th=[ 7701], 10.00th=[ 8455], 20.00th=[ 9634], 00:40:33.439 | 30.00th=[ 9765], 40.00th=[10028], 50.00th=[10290], 60.00th=[10814], 00:40:33.439 | 70.00th=[11469], 80.00th=[12256], 90.00th=[14091], 95.00th=[16319], 00:40:33.439 | 99.00th=[20841], 99.50th=[25297], 99.90th=[49546], 99.95th=[49546], 00:40:33.439 | 99.99th=[49546] 00:40:33.439 write: IOPS=6131, BW=24.0MiB/s (25.1MB/s)(24.0MiB/1002msec); 0 zone resets 00:40:33.439 slat (nsec): min=1993, max=9134.4k, avg=74716.42, stdev=475716.50 00:40:33.439 clat (usec): min=346, max=31815, avg=10397.29, stdev=3279.30 00:40:33.439 lat (usec): min=358, max=31832, avg=10472.01, stdev=3308.88 00:40:33.439 clat percentiles (usec): 00:40:33.439 | 1.00th=[ 1729], 5.00th=[ 5932], 10.00th=[ 8586], 20.00th=[ 9372], 00:40:33.439 | 30.00th=[ 9634], 40.00th=[ 9896], 50.00th=[10028], 60.00th=[10159], 00:40:33.439 | 70.00th=[10814], 80.00th=[11338], 90.00th=[12387], 95.00th=[15270], 00:40:33.439 | 99.00th=[22938], 99.50th=[30016], 99.90th=[31851], 99.95th=[31851], 00:40:33.439 | 99.99th=[31851] 00:40:33.439 bw ( KiB/s): min=24192, max=24528, per=31.27%, avg=24360.00, stdev=237.59, samples=2 00:40:33.439 iops : min= 6048, max= 6132, avg=6090.00, stdev=59.40, samples=2 00:40:33.439 lat (usec) : 500=0.02%, 750=0.07%, 1000=0.13% 00:40:33.439 lat (msec) : 2=0.67%, 4=1.41%, 10=42.19%, 20=53.53%, 50=1.99% 00:40:33.439 cpu : usr=3.10%, sys=6.79%, ctx=540, majf=0, minf=1 00:40:33.439 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 
00:40:33.439 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:33.439 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:40:33.439 issued rwts: total=5705,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:33.439 latency : target=0, window=0, percentile=100.00%, depth=128 00:40:33.439 job2: (groupid=0, jobs=1): err= 0: pid=609999: Fri Dec 13 05:56:33 2024 00:40:33.439 read: IOPS=5133, BW=20.1MiB/s (21.0MB/s)(20.2MiB/1005msec) 00:40:33.439 slat (nsec): min=1577, max=5945.9k, avg=89316.89, stdev=574187.10 00:40:33.439 clat (usec): min=4039, max=18599, avg=11592.06, stdev=2080.41 00:40:33.439 lat (usec): min=4822, max=18609, avg=11681.37, stdev=2111.17 00:40:33.439 clat percentiles (usec): 00:40:33.439 | 1.00th=[ 7111], 5.00th=[ 8455], 10.00th=[ 9241], 20.00th=[ 9896], 00:40:33.439 | 30.00th=[10552], 40.00th=[10945], 50.00th=[11600], 60.00th=[11863], 00:40:33.439 | 70.00th=[12387], 80.00th=[12780], 90.00th=[14353], 95.00th=[15008], 00:40:33.439 | 99.00th=[17695], 99.50th=[17695], 99.90th=[18220], 99.95th=[18482], 00:40:33.439 | 99.99th=[18482] 00:40:33.439 write: IOPS=5603, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1005msec); 0 zone resets 00:40:33.439 slat (usec): min=2, max=6451, avg=89.89, stdev=588.11 00:40:33.439 clat (usec): min=5108, max=19598, avg=11971.09, stdev=1465.58 00:40:33.439 lat (usec): min=5119, max=19607, avg=12060.98, stdev=1553.67 00:40:33.439 clat percentiles (usec): 00:40:33.439 | 1.00th=[ 6915], 5.00th=[10159], 10.00th=[10552], 20.00th=[11207], 00:40:33.439 | 30.00th=[11338], 40.00th=[11469], 50.00th=[11600], 60.00th=[12387], 00:40:33.439 | 70.00th=[12780], 80.00th=[13042], 90.00th=[13304], 95.00th=[13829], 00:40:33.439 | 99.00th=[16712], 99.50th=[18220], 99.90th=[19006], 99.95th=[19268], 00:40:33.439 | 99.99th=[19530] 00:40:33.439 bw ( KiB/s): min=21720, max=22632, per=28.47%, avg=22176.00, stdev=644.88, samples=2 00:40:33.439 iops : min= 5430, max= 5658, avg=5544.00, stdev=161.22, samples=2 00:40:33.439 lat (msec) : 10=13.29%, 20=86.71% 00:40:33.439 cpu : usr=4.78%, sys=6.18%, ctx=378, majf=0, minf=2 00:40:33.439 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:40:33.439 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:33.439 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:40:33.440 issued rwts: total=5159,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:33.440 latency : target=0, window=0, percentile=100.00%, depth=128 00:40:33.440 job3: (groupid=0, jobs=1): err= 0: pid=610000: Fri Dec 13 05:56:33 2024 00:40:33.440 read: IOPS=4087, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1002msec) 00:40:33.440 slat (nsec): min=1672, max=22452k, avg=120477.39, stdev=893110.22 00:40:33.440 clat (usec): min=7733, max=47897, avg=15729.20, stdev=6465.56 00:40:33.440 lat (usec): min=7737, max=47917, avg=15849.68, stdev=6534.36 00:40:33.440 clat percentiles (usec): 00:40:33.440 | 1.00th=[ 8717], 5.00th=[ 9634], 10.00th=[11338], 20.00th=[12125], 00:40:33.440 | 30.00th=[12649], 40.00th=[13173], 50.00th=[13698], 60.00th=[14091], 00:40:33.440 | 70.00th=[14746], 80.00th=[17171], 90.00th=[26870], 95.00th=[32375], 00:40:33.440 | 99.00th=[38536], 99.50th=[39060], 99.90th=[39584], 99.95th=[40109], 00:40:33.440 | 99.99th=[47973] 00:40:33.440 write: IOPS=4112, BW=16.1MiB/s (16.8MB/s)(16.1MiB/1002msec); 0 zone resets 00:40:33.440 slat (usec): min=2, max=22809, avg=115.51, stdev=874.19 00:40:33.440 clat (usec): min=1829, max=60714, avg=15103.43, stdev=7098.01 00:40:33.440 lat (usec): min=1839, 
max=60747, avg=15218.95, stdev=7192.75 00:40:33.440 clat percentiles (usec): 00:40:33.440 | 1.00th=[ 7439], 5.00th=[10683], 10.00th=[11469], 20.00th=[12125], 00:40:33.440 | 30.00th=[12649], 40.00th=[12780], 50.00th=[12911], 60.00th=[13042], 00:40:33.440 | 70.00th=[13304], 80.00th=[13566], 90.00th=[27657], 95.00th=[32113], 00:40:33.440 | 99.00th=[42730], 99.50th=[42730], 99.90th=[50594], 99.95th=[53216], 00:40:33.440 | 99.99th=[60556] 00:40:33.440 bw ( KiB/s): min=16384, max=16384, per=21.03%, avg=16384.00, stdev= 0.00, samples=2 00:40:33.440 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:40:33.440 lat (msec) : 2=0.12%, 4=0.09%, 10=4.21%, 20=81.78%, 50=13.67% 00:40:33.440 lat (msec) : 100=0.13% 00:40:33.440 cpu : usr=3.80%, sys=5.59%, ctx=299, majf=0, minf=1 00:40:33.440 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:40:33.440 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:33.440 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:40:33.440 issued rwts: total=4096,4121,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:33.440 latency : target=0, window=0, percentile=100.00%, depth=128 00:40:33.440 00:40:33.440 Run status group 0 (all jobs): 00:40:33.440 READ: bw=71.9MiB/s (75.4MB/s), 13.9MiB/s-22.2MiB/s (14.6MB/s-23.3MB/s), io=72.4MiB (76.0MB), run=1002-1008msec 00:40:33.440 WRITE: bw=76.1MiB/s (79.8MB/s), 14.5MiB/s-24.0MiB/s (15.2MB/s-25.1MB/s), io=76.7MiB (80.4MB), run=1002-1008msec 00:40:33.440 00:40:33.440 Disk stats (read/write): 00:40:33.440 nvme0n1: ios=3092/3375, merge=0/0, ticks=41401/62328, in_queue=103729, util=85.87% 00:40:33.440 nvme0n2: ios=4870/5120, merge=0/0, ticks=41206/38862, in_queue=80068, util=89.74% 00:40:33.440 nvme0n3: ios=4550/4608, merge=0/0, ticks=25765/25639, in_queue=51404, util=94.70% 00:40:33.440 nvme0n4: ios=3595/3591, merge=0/0, ticks=26160/25424, in_queue=51584, util=94.03% 00:40:33.440 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:40:33.440 [global] 00:40:33.440 thread=1 00:40:33.440 invalidate=1 00:40:33.440 rw=randwrite 00:40:33.440 time_based=1 00:40:33.440 runtime=1 00:40:33.440 ioengine=libaio 00:40:33.440 direct=1 00:40:33.440 bs=4096 00:40:33.440 iodepth=128 00:40:33.440 norandommap=0 00:40:33.440 numjobs=1 00:40:33.440 00:40:33.440 verify_dump=1 00:40:33.440 verify_backlog=512 00:40:33.440 verify_state_save=0 00:40:33.440 do_verify=1 00:40:33.440 verify=crc32c-intel 00:40:33.440 [job0] 00:40:33.440 filename=/dev/nvme0n1 00:40:33.440 [job1] 00:40:33.440 filename=/dev/nvme0n2 00:40:33.440 [job2] 00:40:33.440 filename=/dev/nvme0n3 00:40:33.440 [job3] 00:40:33.440 filename=/dev/nvme0n4 00:40:33.440 Could not set queue depth (nvme0n1) 00:40:33.440 Could not set queue depth (nvme0n2) 00:40:33.440 Could not set queue depth (nvme0n3) 00:40:33.440 Could not set queue depth (nvme0n4) 00:40:33.705 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:40:33.705 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:40:33.705 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:40:33.705 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:40:33.705 fio-3.35 00:40:33.705 
Starting 4 threads 00:40:35.111 00:40:35.111 job0: (groupid=0, jobs=1): err= 0: pid=610364: Fri Dec 13 05:56:34 2024 00:40:35.111 read: IOPS=5313, BW=20.8MiB/s (21.8MB/s)(21.7MiB/1045msec) 00:40:35.111 slat (nsec): min=1347, max=9381.2k, avg=83348.47, stdev=556851.49 00:40:35.111 clat (usec): min=5589, max=55325, avg=11779.35, stdev=6360.21 00:40:35.111 lat (usec): min=5591, max=55331, avg=11862.70, stdev=6378.50 00:40:35.111 clat percentiles (usec): 00:40:35.111 | 1.00th=[ 6718], 5.00th=[ 7570], 10.00th=[ 8094], 20.00th=[ 8586], 00:40:35.111 | 30.00th=[ 9110], 40.00th=[ 9503], 50.00th=[10028], 60.00th=[11076], 00:40:35.111 | 70.00th=[12125], 80.00th=[13173], 90.00th=[15664], 95.00th=[18220], 00:40:35.111 | 99.00th=[50594], 99.50th=[51119], 99.90th=[51119], 99.95th=[51119], 00:40:35.111 | 99.99th=[55313] 00:40:35.111 write: IOPS=5389, BW=21.1MiB/s (22.1MB/s)(22.0MiB/1045msec); 0 zone resets 00:40:35.111 slat (usec): min=2, max=7069, avg=89.83, stdev=513.50 00:40:35.111 clat (usec): min=4498, max=44246, avg=11880.77, stdev=6220.41 00:40:35.111 lat (usec): min=4506, max=44256, avg=11970.60, stdev=6273.95 00:40:35.111 clat percentiles (usec): 00:40:35.111 | 1.00th=[ 5604], 5.00th=[ 7504], 10.00th=[ 8356], 20.00th=[ 9241], 00:40:35.111 | 30.00th=[ 9503], 40.00th=[ 9765], 50.00th=[ 9896], 60.00th=[10028], 00:40:35.111 | 70.00th=[10683], 80.00th=[14353], 90.00th=[15008], 95.00th=[23725], 00:40:35.111 | 99.00th=[40109], 99.50th=[41157], 99.90th=[44303], 99.95th=[44303], 00:40:35.111 | 99.99th=[44303] 00:40:35.111 bw ( KiB/s): min=19960, max=25096, per=32.90%, avg=22528.00, stdev=3631.70, samples=2 00:40:35.111 iops : min= 4990, max= 6274, avg=5632.00, stdev=907.93, samples=2 00:40:35.111 lat (msec) : 10=53.18%, 20=42.27%, 50=3.99%, 100=0.56% 00:40:35.111 cpu : usr=5.17%, sys=5.27%, ctx=516, majf=0, minf=1 00:40:35.111 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:40:35.111 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:35.111 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:40:35.111 issued rwts: total=5553,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:35.111 latency : target=0, window=0, percentile=100.00%, depth=128 00:40:35.111 job1: (groupid=0, jobs=1): err= 0: pid=610365: Fri Dec 13 05:56:34 2024 00:40:35.111 read: IOPS=2552, BW=9.97MiB/s (10.5MB/s)(10.0MiB/1003msec) 00:40:35.111 slat (nsec): min=1582, max=18990k, avg=167403.13, stdev=1113199.73 00:40:35.111 clat (usec): min=9148, max=82423, avg=21790.10, stdev=15481.80 00:40:35.111 lat (usec): min=9155, max=82429, avg=21957.50, stdev=15595.52 00:40:35.111 clat percentiles (usec): 00:40:35.111 | 1.00th=[ 9503], 5.00th=[ 9765], 10.00th=[10290], 20.00th=[11731], 00:40:35.111 | 30.00th=[12649], 40.00th=[13960], 50.00th=[15926], 60.00th=[17695], 00:40:35.111 | 70.00th=[20317], 80.00th=[27395], 90.00th=[44827], 95.00th=[62129], 00:40:35.111 | 99.00th=[73925], 99.50th=[73925], 99.90th=[82314], 99.95th=[82314], 00:40:35.111 | 99.99th=[82314] 00:40:35.111 write: IOPS=3032, BW=11.8MiB/s (12.4MB/s)(11.9MiB/1003msec); 0 zone resets 00:40:35.111 slat (usec): min=2, max=9867, avg=182.22, stdev=914.70 00:40:35.111 clat (usec): min=443, max=83774, avg=23343.95, stdev=18349.36 00:40:35.111 lat (usec): min=3327, max=83779, avg=23526.17, stdev=18467.77 00:40:35.111 clat percentiles (usec): 00:40:35.111 | 1.00th=[ 3752], 5.00th=[ 8848], 10.00th=[ 9634], 20.00th=[ 9896], 00:40:35.111 | 30.00th=[10290], 40.00th=[14484], 50.00th=[14746], 60.00th=[15008], 00:40:35.111 | 
70.00th=[21365], 80.00th=[44827], 90.00th=[53216], 95.00th=[54789], 00:40:35.111 | 99.00th=[77071], 99.50th=[77071], 99.90th=[83362], 99.95th=[83362], 00:40:35.111 | 99.99th=[83362] 00:40:35.111 bw ( KiB/s): min= 7616, max=15696, per=17.02%, avg=11656.00, stdev=5713.42, samples=2 00:40:35.111 iops : min= 1904, max= 3924, avg=2914.00, stdev=1428.36, samples=2 00:40:35.111 lat (usec) : 500=0.02% 00:40:35.111 lat (msec) : 4=0.75%, 10=14.98%, 20=50.36%, 50=20.46%, 100=13.44% 00:40:35.111 cpu : usr=2.50%, sys=3.99%, ctx=277, majf=0, minf=1 00:40:35.111 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:40:35.111 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:35.111 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:40:35.111 issued rwts: total=2560,3042,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:35.111 latency : target=0, window=0, percentile=100.00%, depth=128 00:40:35.111 job2: (groupid=0, jobs=1): err= 0: pid=610366: Fri Dec 13 05:56:34 2024 00:40:35.111 read: IOPS=4812, BW=18.8MiB/s (19.7MB/s)(18.9MiB/1005msec) 00:40:35.111 slat (nsec): min=1074, max=12273k, avg=92291.73, stdev=738830.68 00:40:35.111 clat (usec): min=2644, max=42005, avg=12367.98, stdev=4614.56 00:40:35.111 lat (usec): min=3026, max=46221, avg=12460.28, stdev=4678.19 00:40:35.111 clat percentiles (usec): 00:40:35.111 | 1.00th=[ 3163], 5.00th=[ 4883], 10.00th=[ 8455], 20.00th=[ 9896], 00:40:35.111 | 30.00th=[10290], 40.00th=[10814], 50.00th=[11076], 60.00th=[12387], 00:40:35.111 | 70.00th=[13435], 80.00th=[15401], 90.00th=[17957], 95.00th=[20579], 00:40:35.111 | 99.00th=[29230], 99.50th=[33162], 99.90th=[42206], 99.95th=[42206], 00:40:35.111 | 99.99th=[42206] 00:40:35.111 write: IOPS=5094, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1005msec); 0 zone resets 00:40:35.111 slat (usec): min=2, max=12057, avg=97.82, stdev=684.15 00:40:35.111 clat (usec): min=982, max=47730, avg=13210.58, stdev=8812.23 00:40:35.111 lat (usec): min=992, max=47739, avg=13308.40, stdev=8875.24 00:40:35.111 clat percentiles (usec): 00:40:35.111 | 1.00th=[ 4359], 5.00th=[ 6325], 10.00th=[ 6849], 20.00th=[ 9241], 00:40:35.111 | 30.00th=[ 9896], 40.00th=[10552], 50.00th=[10945], 60.00th=[11076], 00:40:35.111 | 70.00th=[11469], 80.00th=[13698], 90.00th=[17695], 95.00th=[39584], 00:40:35.111 | 99.00th=[44303], 99.50th=[44827], 99.90th=[47973], 99.95th=[47973], 00:40:35.111 | 99.99th=[47973] 00:40:35.111 bw ( KiB/s): min=18416, max=22544, per=29.91%, avg=20480.00, stdev=2918.94, samples=2 00:40:35.111 iops : min= 4604, max= 5636, avg=5120.00, stdev=729.73, samples=2 00:40:35.111 lat (usec) : 1000=0.02% 00:40:35.111 lat (msec) : 2=0.01%, 4=1.97%, 10=24.23%, 20=66.36%, 50=7.41% 00:40:35.111 cpu : usr=2.99%, sys=6.47%, ctx=344, majf=0, minf=2 00:40:35.111 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:40:35.111 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:35.111 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:40:35.111 issued rwts: total=4837,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:35.111 latency : target=0, window=0, percentile=100.00%, depth=128 00:40:35.111 job3: (groupid=0, jobs=1): err= 0: pid=610367: Fri Dec 13 05:56:34 2024 00:40:35.111 read: IOPS=3566, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec) 00:40:35.111 slat (nsec): min=1877, max=13010k, avg=113693.41, stdev=813265.01 00:40:35.111 clat (usec): min=6265, max=38501, avg=15017.37, stdev=4596.58 00:40:35.111 lat (usec): min=6272, max=38526, 
avg=15131.07, stdev=4665.22 00:40:35.111 clat percentiles (usec): 00:40:35.111 | 1.00th=[ 6456], 5.00th=[10290], 10.00th=[11207], 20.00th=[12125], 00:40:35.111 | 30.00th=[13042], 40.00th=[13435], 50.00th=[13829], 60.00th=[14091], 00:40:35.111 | 70.00th=[15401], 80.00th=[17171], 90.00th=[20055], 95.00th=[25560], 00:40:35.111 | 99.00th=[32900], 99.50th=[34866], 99.90th=[34866], 99.95th=[35390], 00:40:35.111 | 99.99th=[38536] 00:40:35.111 write: IOPS=4072, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec); 0 zone resets 00:40:35.111 slat (usec): min=2, max=20791, avg=139.17, stdev=1024.63 00:40:35.111 clat (usec): min=493, max=83636, avg=17898.87, stdev=12677.54 00:40:35.111 lat (usec): min=4408, max=83640, avg=18038.03, stdev=12767.66 00:40:35.111 clat percentiles (usec): 00:40:35.111 | 1.00th=[ 8029], 5.00th=[ 9765], 10.00th=[10421], 20.00th=[10683], 00:40:35.111 | 30.00th=[11076], 40.00th=[12518], 50.00th=[13173], 60.00th=[13566], 00:40:35.111 | 70.00th=[15664], 80.00th=[24511], 90.00th=[31589], 95.00th=[39584], 00:40:35.111 | 99.00th=[76022], 99.50th=[77071], 99.90th=[83362], 99.95th=[83362], 00:40:35.111 | 99.99th=[83362] 00:40:35.111 bw ( KiB/s): min=15096, max=16624, per=23.16%, avg=15860.00, stdev=1080.46, samples=2 00:40:35.111 iops : min= 3774, max= 4156, avg=3965.00, stdev=270.11, samples=2 00:40:35.111 lat (usec) : 500=0.01% 00:40:35.111 lat (msec) : 10=6.00%, 20=75.49%, 50=16.74%, 100=1.76% 00:40:35.111 cpu : usr=2.29%, sys=5.78%, ctx=244, majf=0, minf=1 00:40:35.111 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:40:35.111 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:35.111 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:40:35.112 issued rwts: total=3584,4093,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:35.112 latency : target=0, window=0, percentile=100.00%, depth=128 00:40:35.112 00:40:35.112 Run status group 0 (all jobs): 00:40:35.112 READ: bw=61.8MiB/s (64.8MB/s), 9.97MiB/s-20.8MiB/s (10.5MB/s-21.8MB/s), io=64.6MiB (67.7MB), run=1003-1045msec 00:40:35.112 WRITE: bw=66.9MiB/s (70.1MB/s), 11.8MiB/s-21.1MiB/s (12.4MB/s-22.1MB/s), io=69.9MiB (73.3MB), run=1003-1045msec 00:40:35.112 00:40:35.112 Disk stats (read/write): 00:40:35.112 nvme0n1: ios=4640/4615, merge=0/0, ticks=25125/26886, in_queue=52011, util=90.58% 00:40:35.112 nvme0n2: ios=2266/2560, merge=0/0, ticks=16738/24909, in_queue=41647, util=98.37% 00:40:35.112 nvme0n3: ios=4096/4295, merge=0/0, ticks=46935/56566, in_queue=103501, util=88.97% 00:40:35.112 nvme0n4: ios=3105/3430, merge=0/0, ticks=23228/30214, in_queue=53442, util=96.23% 00:40:35.112 05:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:40:35.112 05:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=610593 00:40:35.112 05:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:40:35.112 05:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:40:35.112 [global] 00:40:35.112 thread=1 00:40:35.112 invalidate=1 00:40:35.112 rw=read 00:40:35.112 time_based=1 00:40:35.112 runtime=10 00:40:35.112 ioengine=libaio 00:40:35.112 direct=1 00:40:35.112 bs=4096 00:40:35.112 iodepth=1 00:40:35.112 norandommap=1 00:40:35.112 numjobs=1 00:40:35.112 00:40:35.112 [job0] 00:40:35.112 filename=/dev/nvme0n1 00:40:35.112 [job1] 
00:40:35.112 filename=/dev/nvme0n2 00:40:35.112 [job2] 00:40:35.112 filename=/dev/nvme0n3 00:40:35.112 [job3] 00:40:35.112 filename=/dev/nvme0n4 00:40:35.112 Could not set queue depth (nvme0n1) 00:40:35.112 Could not set queue depth (nvme0n2) 00:40:35.112 Could not set queue depth (nvme0n3) 00:40:35.112 Could not set queue depth (nvme0n4) 00:40:35.372 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:35.372 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:35.372 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:35.372 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:35.372 fio-3.35 00:40:35.372 Starting 4 threads 00:40:37.891 05:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:40:38.147 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=48492544, buflen=4096 00:40:38.147 fio: pid=610738, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:40:38.147 05:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:40:38.405 05:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:40:38.405 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=344064, buflen=4096 00:40:38.405 fio: pid=610737, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:40:38.405 05:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:40:38.405 05:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:40:38.405 05:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:40:38.662 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=327680, buflen=4096 00:40:38.662 fio: pid=610729, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:40:38.662 05:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:40:38.662 05:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:40:38.662 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=46276608, buflen=4096 00:40:38.662 fio: pid=610733, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:40:38.919 00:40:38.919 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=610729: Fri Dec 13 05:56:38 2024 00:40:38.919 read: IOPS=25, BW=103KiB/s (105kB/s)(320KiB/3116msec) 00:40:38.919 slat (usec): min=9, max=10919, avg=157.29, stdev=1210.72 00:40:38.919 clat (usec): min=210, 
max=45082, avg=38512.17, stdev=9941.56 00:40:38.919 lat (usec): min=232, max=45110, avg=38671.16, stdev=9473.14 00:40:38.919 clat percentiles (usec): 00:40:38.919 | 1.00th=[ 212], 5.00th=[ 314], 10.00th=[40633], 20.00th=[40633], 00:40:38.919 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:40:38.919 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:40:38.919 | 99.00th=[44827], 99.50th=[44827], 99.90th=[44827], 99.95th=[44827], 00:40:38.919 | 99.99th=[44827] 00:40:38.919 bw ( KiB/s): min= 96, max= 113, per=0.36%, avg=102.83, stdev= 6.34, samples=6 00:40:38.919 iops : min= 24, max= 28, avg=25.67, stdev= 1.51, samples=6 00:40:38.919 lat (usec) : 250=2.47%, 500=3.70% 00:40:38.919 lat (msec) : 50=92.59% 00:40:38.919 cpu : usr=0.13%, sys=0.00%, ctx=83, majf=0, minf=1 00:40:38.919 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:38.919 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:38.919 complete : 0=1.2%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:38.919 issued rwts: total=81,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:38.919 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:38.919 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=610733: Fri Dec 13 05:56:38 2024 00:40:38.919 read: IOPS=3391, BW=13.2MiB/s (13.9MB/s)(44.1MiB/3332msec) 00:40:38.919 slat (usec): min=3, max=7684, avg= 9.11, stdev=99.83 00:40:38.919 clat (usec): min=183, max=41163, avg=281.68, stdev=1381.11 00:40:38.919 lat (usec): min=191, max=41170, avg=290.79, stdev=1385.44 00:40:38.919 clat percentiles (usec): 00:40:38.919 | 1.00th=[ 206], 5.00th=[ 212], 10.00th=[ 217], 20.00th=[ 221], 00:40:38.919 | 30.00th=[ 225], 40.00th=[ 229], 50.00th=[ 235], 60.00th=[ 241], 00:40:38.919 | 70.00th=[ 245], 80.00th=[ 249], 90.00th=[ 253], 95.00th=[ 255], 00:40:38.919 | 99.00th=[ 265], 99.50th=[ 269], 99.90th=[40633], 99.95th=[41157], 00:40:38.919 | 99.99th=[41157] 00:40:38.919 bw ( KiB/s): min= 7840, max=17296, per=52.88%, avg=14793.33, stdev=3577.28, samples=6 00:40:38.919 iops : min= 1960, max= 4324, avg=3698.33, stdev=894.32, samples=6 00:40:38.919 lat (usec) : 250=83.40%, 500=16.44%, 750=0.01%, 1000=0.01% 00:40:38.919 lat (msec) : 2=0.03%, 50=0.12% 00:40:38.919 cpu : usr=2.25%, sys=4.89%, ctx=11303, majf=0, minf=2 00:40:38.919 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:38.919 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:38.919 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:38.919 issued rwts: total=11299,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:38.919 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:38.919 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=610737: Fri Dec 13 05:56:38 2024 00:40:38.919 read: IOPS=29, BW=115KiB/s (118kB/s)(336KiB/2918msec) 00:40:38.919 slat (usec): min=7, max=13891, avg=182.80, stdev=1504.60 00:40:38.919 clat (usec): min=254, max=45032, avg=34287.95, stdev=15293.64 00:40:38.919 lat (usec): min=263, max=54965, avg=34472.65, stdev=15440.93 00:40:38.919 clat percentiles (usec): 00:40:38.919 | 1.00th=[ 255], 5.00th=[ 269], 10.00th=[ 285], 20.00th=[40633], 00:40:38.919 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:40:38.919 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:40:38.919 | 99.00th=[44827], 
99.50th=[44827], 99.90th=[44827], 99.95th=[44827], 00:40:38.919 | 99.99th=[44827] 00:40:38.919 bw ( KiB/s): min= 96, max= 200, per=0.42%, avg=118.40, stdev=45.75, samples=5 00:40:38.919 iops : min= 24, max= 50, avg=29.60, stdev=11.44, samples=5 00:40:38.919 lat (usec) : 500=14.12%, 750=2.35% 00:40:38.919 lat (msec) : 50=82.35% 00:40:38.919 cpu : usr=0.07%, sys=0.00%, ctx=87, majf=0, minf=2 00:40:38.919 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:38.919 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:38.919 complete : 0=1.2%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:38.919 issued rwts: total=85,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:38.919 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:38.919 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=610738: Fri Dec 13 05:56:38 2024 00:40:38.919 read: IOPS=4385, BW=17.1MiB/s (18.0MB/s)(46.2MiB/2700msec) 00:40:38.919 slat (nsec): min=6452, max=32275, avg=7495.55, stdev=912.76 00:40:38.919 clat (usec): min=181, max=515, avg=218.57, stdev= 9.14 00:40:38.919 lat (usec): min=188, max=548, avg=226.07, stdev= 9.26 00:40:38.919 clat percentiles (usec): 00:40:38.920 | 1.00th=[ 194], 5.00th=[ 208], 10.00th=[ 210], 20.00th=[ 212], 00:40:38.920 | 30.00th=[ 215], 40.00th=[ 217], 50.00th=[ 219], 60.00th=[ 221], 00:40:38.920 | 70.00th=[ 223], 80.00th=[ 225], 90.00th=[ 229], 95.00th=[ 233], 00:40:38.920 | 99.00th=[ 245], 99.50th=[ 251], 99.90th=[ 281], 99.95th=[ 293], 00:40:38.920 | 99.99th=[ 420] 00:40:38.920 bw ( KiB/s): min=17448, max=17848, per=62.98%, avg=17616.00, stdev=159.20, samples=5 00:40:38.920 iops : min= 4362, max= 4462, avg=4404.00, stdev=39.80, samples=5 00:40:38.920 lat (usec) : 250=99.44%, 500=0.54%, 750=0.01% 00:40:38.920 cpu : usr=0.67%, sys=4.63%, ctx=11840, majf=0, minf=2 00:40:38.920 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:38.920 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:38.920 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:38.920 issued rwts: total=11840,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:38.920 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:38.920 00:40:38.920 Run status group 0 (all jobs): 00:40:38.920 READ: bw=27.3MiB/s (28.6MB/s), 103KiB/s-17.1MiB/s (105kB/s-18.0MB/s), io=91.0MiB (95.4MB), run=2700-3332msec 00:40:38.920 00:40:38.920 Disk stats (read/write): 00:40:38.920 nvme0n1: ios=81/0, merge=0/0, ticks=3092/0, in_queue=3092, util=95.35% 00:40:38.920 nvme0n2: ios=11293/0, merge=0/0, ticks=2851/0, in_queue=2851, util=95.67% 00:40:38.920 nvme0n3: ios=82/0, merge=0/0, ticks=2800/0, in_queue=2800, util=96.08% 00:40:38.920 nvme0n4: ios=11457/0, merge=0/0, ticks=2424/0, in_queue=2424, util=96.45% 00:40:38.920 05:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:40:38.920 05:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:40:39.177 05:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:40:39.177 05:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:40:39.433 05:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:40:39.433 05:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:40:39.690 05:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:40:39.690 05:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:40:39.690 05:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:40:39.690 05:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 610593 00:40:39.690 05:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:40:39.690 05:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:40:39.947 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:40:39.947 05:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:40:39.947 05:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:40:39.947 05:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:40:39.947 05:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:40:39.947 05:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:40:39.947 05:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:40:39.947 05:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:40:39.947 05:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:40:39.947 05:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:40:39.947 nvmf hotplug test: fio failed as expected 00:40:39.947 05:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:40:40.205 05:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:40:40.205 05:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:40:40.205 05:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:40:40.205 05:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:40:40.205 05:56:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:40:40.205 05:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:40.205 05:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:40:40.205 05:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:40.205 05:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:40:40.205 05:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:40.205 05:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:40.205 rmmod nvme_tcp 00:40:40.205 rmmod nvme_fabrics 00:40:40.205 rmmod nvme_keyring 00:40:40.205 05:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:40.205 05:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:40:40.205 05:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:40:40.205 05:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 607957 ']' 00:40:40.205 05:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 607957 00:40:40.205 05:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 607957 ']' 00:40:40.205 05:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 607957 00:40:40.205 05:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:40:40.205 05:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:40.205 05:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 607957 00:40:40.205 05:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:40.205 05:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:40.205 05:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 607957' 00:40:40.205 killing process with pid 607957 00:40:40.205 05:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 607957 00:40:40.205 05:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 607957 00:40:40.463 05:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:40.463 05:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:40.463 05:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:40.463 05:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:40:40.463 05:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:40:40.463 
05:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:40.463 05:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:40:40.463 05:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:40.463 05:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:40.463 05:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:40.463 05:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:40.463 05:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:42.368 05:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:42.368 00:40:42.368 real 0m25.719s 00:40:42.368 user 1m31.890s 00:40:42.368 sys 0m10.860s 00:40:42.368 05:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:42.368 05:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:40:42.368 ************************************ 00:40:42.368 END TEST nvmf_fio_target 00:40:42.368 ************************************ 00:40:42.628 05:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:40:42.628 05:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:40:42.628 05:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:42.628 05:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:42.628 ************************************ 00:40:42.628 START TEST nvmf_bdevio 00:40:42.628 ************************************ 00:40:42.628 05:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:40:42.628 * Looking for test storage... 
00:40:42.628 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:42.628 05:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:40:42.628 05:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:40:42.628 05:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:40:42.628 05:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:40:42.628 05:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:42.628 05:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:42.628 05:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:42.628 05:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:40:42.628 05:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:40:42.628 05:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:40:42.628 05:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:40:42.628 05:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:40:42.628 05:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:40:42.628 05:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:40:42.628 05:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:42.628 05:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:40:42.628 05:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:40:42.628 05:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:42.628 05:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:42.628 05:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:40:42.628 05:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:40:42.628 05:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:42.628 05:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:40:42.628 05:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:40:42.628 05:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:40:42.628 05:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:40:42.628 05:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:42.628 05:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:40:42.628 05:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:40:42.628 05:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:42.628 05:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:42.628 05:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:40:42.628 05:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:42.628 05:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:40:42.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:42.628 --rc genhtml_branch_coverage=1 00:40:42.628 --rc genhtml_function_coverage=1 00:40:42.628 --rc genhtml_legend=1 00:40:42.628 --rc geninfo_all_blocks=1 00:40:42.628 --rc geninfo_unexecuted_blocks=1 00:40:42.628 00:40:42.628 ' 00:40:42.628 05:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:40:42.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:42.628 --rc genhtml_branch_coverage=1 00:40:42.628 --rc genhtml_function_coverage=1 00:40:42.628 --rc genhtml_legend=1 00:40:42.628 --rc geninfo_all_blocks=1 00:40:42.628 --rc geninfo_unexecuted_blocks=1 00:40:42.628 00:40:42.628 ' 00:40:42.628 05:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:40:42.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:42.628 --rc genhtml_branch_coverage=1 00:40:42.628 --rc genhtml_function_coverage=1 00:40:42.628 --rc genhtml_legend=1 00:40:42.628 --rc geninfo_all_blocks=1 00:40:42.628 --rc geninfo_unexecuted_blocks=1 00:40:42.628 00:40:42.628 ' 00:40:42.628 05:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:40:42.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:42.628 --rc genhtml_branch_coverage=1 00:40:42.628 --rc genhtml_function_coverage=1 00:40:42.628 --rc genhtml_legend=1 00:40:42.628 --rc geninfo_all_blocks=1 00:40:42.628 --rc geninfo_unexecuted_blocks=1 00:40:42.628 00:40:42.628 ' 00:40:42.628 05:56:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:42.628 05:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:40:42.628 05:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:42.628 05:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:42.628 05:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:42.628 05:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:42.628 05:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:42.628 05:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:42.628 05:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:42.628 05:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:42.628 05:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:42.628 05:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:42.628 05:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:40:42.628 05:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:40:42.628 05:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:42.628 05:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:42.628 05:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:42.628 05:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:42.628 05:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:42.628 05:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:40:42.628 05:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:42.628 05:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:42.628 05:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:42.629 05:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:42.629 05:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:42.629 05:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:42.629 05:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:40:42.629 05:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:42.629 05:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:40:42.629 05:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:42.629 05:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:42.629 05:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:42.629 05:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:42.629 05:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:42.629 05:56:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:42.629 05:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:42.629 05:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:42.629 05:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:42.629 05:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:42.629 05:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:40:42.629 05:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:40:42.629 05:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:40:42.629 05:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:42.629 05:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:42.629 05:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:42.629 05:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:42.629 05:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:42.629 05:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:42.629 05:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:42.629 05:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:42.629 05:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:40:42.629 05:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:40:42.629 05:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:40:42.629 05:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:49.197 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:49.197 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:40:49.197 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:49.197 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:49.197 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:49.197 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:49.197 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:49.197 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:40:49.197 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:40:49.197 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:40:49.197 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:40:49.197 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:40:49.197 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:40:49.197 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:40:49.197 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:40:49.197 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:49.197 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:49.197 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:49.197 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:49.197 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:49.197 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:49.197 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:49.197 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:49.197 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:49.197 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:49.197 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:49.197 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:49.197 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:49.197 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:49.197 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:49.197 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:49.197 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:49.197 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:49.197 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:49.197 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:40:49.197 Found 0000:af:00.0 (0x8086 - 0x159b) 00:40:49.197 05:56:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:49.197 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:49.197 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:49.197 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:49.197 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:49.197 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:49.197 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:40:49.197 Found 0000:af:00.1 (0x8086 - 0x159b) 00:40:49.197 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:49.197 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:49.197 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:49.197 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:49.197 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:49.197 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:49.197 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:49.197 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:49.197 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:49.197 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:49.197 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:49.197 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:49.197 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:49.197 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:49.197 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:49.197 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:40:49.197 Found net devices under 0000:af:00.0: cvl_0_0 00:40:49.197 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:49.197 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:49.197 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:49.197 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:40:49.197 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:49.198 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:49.198 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:49.198 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:49.198 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:40:49.198 Found net devices under 0000:af:00.1: cvl_0_1 00:40:49.198 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:49.198 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:40:49.198 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:40:49.198 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:40:49.198 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:40:49.198 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:40:49.198 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:49.198 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:49.198 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:49.198 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:49.198 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:49.198 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:49.198 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:49.198 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:49.198 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:49.198 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:49.198 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:49.198 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:49.198 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:49.198 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:49.198 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:49.198 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:49.198 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:49.198 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:49.198 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:49.198 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:49.198 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:49.198 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:49.198 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:49.198 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:49.198 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.373 ms 00:40:49.198 00:40:49.198 --- 10.0.0.2 ping statistics --- 00:40:49.198 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:49.198 rtt min/avg/max/mdev = 0.373/0.373/0.373/0.000 ms 00:40:49.198 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:49.198 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:40:49.198 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.187 ms 00:40:49.198 00:40:49.198 --- 10.0.0.1 ping statistics --- 00:40:49.198 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:49.198 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:40:49.198 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:49.198 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:40:49.198 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:40:49.198 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:49.198 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:49.198 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:49.198 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:49.198 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:49.198 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:49.198 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:40:49.198 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:40:49.198 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:49.198 05:56:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:49.198 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=614894 00:40:49.198 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 614894 00:40:49.198 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:40:49.198 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 614894 ']' 00:40:49.198 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:49.198 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:49.198 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:49.198 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:49.198 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:49.198 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:49.198 [2024-12-13 05:56:48.572241] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:40:49.198 [2024-12-13 05:56:48.573210] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:40:49.198 [2024-12-13 05:56:48.573248] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:49.198 [2024-12-13 05:56:48.652975] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:40:49.198 [2024-12-13 05:56:48.675712] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:49.198 [2024-12-13 05:56:48.675744] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:49.198 [2024-12-13 05:56:48.675751] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:49.198 [2024-12-13 05:56:48.675757] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:49.198 [2024-12-13 05:56:48.675762] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:49.198 [2024-12-13 05:56:48.677051] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:40:49.198 [2024-12-13 05:56:48.677158] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:40:49.198 [2024-12-13 05:56:48.677264] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:40:49.198 [2024-12-13 05:56:48.677266] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:40:49.198 [2024-12-13 05:56:48.740078] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:40:49.198 [2024-12-13 05:56:48.741076] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:40:49.198 [2024-12-13 05:56:48.741318] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:40:49.198 [2024-12-13 05:56:48.741669] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:40:49.198 [2024-12-13 05:56:48.741713] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:40:49.198 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:49.198 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:40:49.198 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:40:49.198 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:49.198 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:49.198 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:49.198 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:40:49.198 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:49.198 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:49.198 [2024-12-13 05:56:48.817955] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:49.198 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:49.198 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:40:49.199 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:49.199 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:49.199 Malloc0 00:40:49.199 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:49.199 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:40:49.199 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:49.199 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:49.199 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:49.199 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:40:49.199 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:49.199 05:56:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:49.199 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:49.199 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:49.199 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:49.199 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:49.199 [2024-12-13 05:56:48.898200] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:49.199 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:49.199 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:40:49.199 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:40:49.199 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:40:49.199 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:40:49.199 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:40:49.199 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:40:49.199 { 00:40:49.199 "params": { 00:40:49.199 "name": "Nvme$subsystem", 00:40:49.199 "trtype": "$TEST_TRANSPORT", 00:40:49.199 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:49.199 "adrfam": "ipv4", 00:40:49.199 "trsvcid": "$NVMF_PORT", 00:40:49.199 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:49.199 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:49.199 "hdgst": ${hdgst:-false}, 00:40:49.199 "ddgst": ${ddgst:-false} 00:40:49.199 }, 00:40:49.199 "method": "bdev_nvme_attach_controller" 00:40:49.199 } 00:40:49.199 EOF 00:40:49.199 )") 00:40:49.199 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:40:49.199 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:40:49.199 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:40:49.199 05:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:40:49.199 "params": { 00:40:49.199 "name": "Nvme1", 00:40:49.199 "trtype": "tcp", 00:40:49.199 "traddr": "10.0.0.2", 00:40:49.199 "adrfam": "ipv4", 00:40:49.199 "trsvcid": "4420", 00:40:49.199 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:49.199 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:49.199 "hdgst": false, 00:40:49.199 "ddgst": false 00:40:49.199 }, 00:40:49.199 "method": "bdev_nvme_attach_controller" 00:40:49.199 }' 00:40:49.199 [2024-12-13 05:56:48.947827] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:40:49.199 [2024-12-13 05:56:48.947874] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid614926 ] 00:40:49.199 [2024-12-13 05:56:49.021761] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:40:49.199 [2024-12-13 05:56:49.047118] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:40:49.199 [2024-12-13 05:56:49.047223] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:40:49.199 [2024-12-13 05:56:49.047224] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:40:49.199 I/O targets: 00:40:49.199 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:40:49.199 00:40:49.199 00:40:49.199 CUnit - A unit testing framework for C - Version 2.1-3 00:40:49.199 http://cunit.sourceforge.net/ 00:40:49.199 00:40:49.199 00:40:49.199 Suite: bdevio tests on: Nvme1n1 00:40:49.456 Test: blockdev write read block ...passed 00:40:49.456 Test: blockdev write zeroes read block ...passed 00:40:49.456 Test: blockdev write zeroes read no split ...passed 00:40:49.456 Test: blockdev write zeroes read split ...passed 00:40:49.456 Test: blockdev write zeroes read split partial ...passed 00:40:49.456 Test: blockdev reset ...[2024-12-13 05:56:49.302701] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:40:49.456 [2024-12-13 05:56:49.302762] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbc630 (9): Bad file descriptor 00:40:49.456 [2024-12-13 05:56:49.435296] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:40:49.456 passed 00:40:49.456 Test: blockdev write read 8 blocks ...passed 00:40:49.714 Test: blockdev write read size > 128k ...passed 00:40:49.714 Test: blockdev write read invalid size ...passed 00:40:49.714 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:40:49.714 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:40:49.714 Test: blockdev write read max offset ...passed 00:40:49.714 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:40:49.714 Test: blockdev writev readv 8 blocks ...passed 00:40:49.714 Test: blockdev writev readv 30 x 1block ...passed 00:40:49.714 Test: blockdev writev readv block ...passed 00:40:49.714 Test: blockdev writev readv size > 128k ...passed 00:40:49.714 Test: blockdev writev readv size > 128k in two iovs ...passed 00:40:49.714 Test: blockdev comparev and writev ...[2024-12-13 05:56:49.645346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:49.714 [2024-12-13 05:56:49.645376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:49.714 [2024-12-13 05:56:49.645391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:49.714 [2024-12-13 05:56:49.645399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:40:49.714 [2024-12-13 05:56:49.645688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:49.714 [2024-12-13 05:56:49.645699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:40:49.714 [2024-12-13 05:56:49.645710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:49.714 [2024-12-13 05:56:49.645717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:40:49.714 [2024-12-13 05:56:49.645994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:49.714 [2024-12-13 05:56:49.646005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:40:49.714 [2024-12-13 05:56:49.646016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:49.714 [2024-12-13 05:56:49.646023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:40:49.714 [2024-12-13 05:56:49.646302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:49.714 [2024-12-13 05:56:49.646312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:40:49.714 [2024-12-13 05:56:49.646323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:49.714 [2024-12-13 05:56:49.646330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:40:49.714 passed 00:40:49.714 Test: blockdev nvme passthru rw ...passed 00:40:49.714 Test: blockdev nvme passthru vendor specific ...[2024-12-13 05:56:49.728822] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:40:49.714 [2024-12-13 05:56:49.728837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:40:49.714 [2024-12-13 05:56:49.728948] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:40:49.714 [2024-12-13 05:56:49.728957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:40:49.714 [2024-12-13 05:56:49.729062] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:40:49.714 [2024-12-13 05:56:49.729071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:40:49.714 [2024-12-13 05:56:49.729176] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:40:49.714 [2024-12-13 05:56:49.729186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:40:49.714 passed 00:40:49.972 Test: blockdev nvme admin passthru ...passed 00:40:49.972 Test: blockdev copy ...passed 00:40:49.972 00:40:49.972 Run Summary: Type Total Ran Passed Failed Inactive 00:40:49.972 suites 1 1 n/a 0 0 00:40:49.972 tests 23 23 23 0 0 00:40:49.972 asserts 152 152 152 0 n/a 00:40:49.972 00:40:49.972 Elapsed time = 1.187 seconds 00:40:49.972 05:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:40:49.972 05:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:49.972 05:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:49.972 05:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:49.972 05:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:40:49.972 05:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:40:49.972 05:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:49.972 05:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:40:49.972 05:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:49.972 05:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:40:49.972 05:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:49.972 05:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:49.972 rmmod nvme_tcp 00:40:49.972 rmmod nvme_fabrics 00:40:49.972 rmmod nvme_keyring 00:40:49.972 05:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
00:40:50.231 05:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:40:50.231 05:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:40:50.231 05:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 614894 ']' 00:40:50.231 05:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 614894 00:40:50.231 05:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 614894 ']' 00:40:50.231 05:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 614894 00:40:50.231 05:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:40:50.231 05:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:50.231 05:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 614894 00:40:50.231 05:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:40:50.231 05:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:40:50.231 05:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 614894' 00:40:50.231 killing process with pid 614894 00:40:50.231 05:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 614894 00:40:50.231 05:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 614894 00:40:50.231 05:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:50.231 05:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:50.231 05:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:50.231 05:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:40:50.231 05:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:40:50.231 05:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:50.231 05:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:40:50.231 05:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:50.231 05:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:50.231 05:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:50.231 05:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:50.231 05:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:52.766 05:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:52.766 00:40:52.766 real 0m9.899s 00:40:52.766 user 0m8.400s 
00:40:52.766 sys 0m5.115s 00:40:52.766 05:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:52.766 05:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:52.766 ************************************ 00:40:52.766 END TEST nvmf_bdevio 00:40:52.766 ************************************ 00:40:52.766 05:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:40:52.766 00:40:52.766 real 4m30.313s 00:40:52.766 user 9m3.585s 00:40:52.766 sys 1m49.411s 00:40:52.766 05:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:52.766 05:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:52.766 ************************************ 00:40:52.766 END TEST nvmf_target_core_interrupt_mode 00:40:52.766 ************************************ 00:40:52.766 05:56:52 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:40:52.766 05:56:52 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:40:52.766 05:56:52 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:52.766 05:56:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:52.766 ************************************ 00:40:52.766 START TEST nvmf_interrupt 00:40:52.766 ************************************ 00:40:52.766 05:56:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:40:52.766 * Looking for test storage... 
00:40:52.766 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:52.766 05:56:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:40:52.766 05:56:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lcov --version 00:40:52.766 05:56:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:40:52.766 05:56:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:40:52.766 05:56:52 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:52.766 05:56:52 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:52.766 05:56:52 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:52.766 05:56:52 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:40:52.766 05:56:52 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:40:52.766 05:56:52 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:40:52.766 05:56:52 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:40:52.766 05:56:52 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:40:52.766 05:56:52 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:40:52.766 05:56:52 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:40:52.766 05:56:52 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:52.766 05:56:52 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:40:52.766 05:56:52 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:40:52.766 05:56:52 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:52.766 05:56:52 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:52.766 05:56:52 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:40:52.766 05:56:52 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:40:52.766 05:56:52 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:52.766 05:56:52 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:40:52.766 05:56:52 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:40:52.766 05:56:52 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:40:52.766 05:56:52 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:40:52.766 05:56:52 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:52.766 05:56:52 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:40:52.766 05:56:52 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:40:52.766 05:56:52 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:52.766 05:56:52 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:52.766 05:56:52 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:40:52.766 05:56:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:52.766 05:56:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:40:52.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:52.766 --rc genhtml_branch_coverage=1 00:40:52.766 --rc genhtml_function_coverage=1 00:40:52.766 --rc genhtml_legend=1 00:40:52.766 --rc geninfo_all_blocks=1 00:40:52.766 --rc geninfo_unexecuted_blocks=1 00:40:52.766 00:40:52.766 ' 00:40:52.766 05:56:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:40:52.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:52.766 --rc genhtml_branch_coverage=1 00:40:52.766 --rc genhtml_function_coverage=1 00:40:52.766 --rc genhtml_legend=1 00:40:52.766 --rc geninfo_all_blocks=1 00:40:52.766 --rc geninfo_unexecuted_blocks=1 00:40:52.766 00:40:52.766 ' 00:40:52.766 05:56:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:40:52.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:52.766 --rc genhtml_branch_coverage=1 00:40:52.766 --rc genhtml_function_coverage=1 00:40:52.766 --rc genhtml_legend=1 00:40:52.766 --rc geninfo_all_blocks=1 00:40:52.766 --rc geninfo_unexecuted_blocks=1 00:40:52.766 00:40:52.766 ' 00:40:52.766 05:56:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:40:52.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:52.766 --rc genhtml_branch_coverage=1 00:40:52.766 --rc genhtml_function_coverage=1 00:40:52.766 --rc genhtml_legend=1 00:40:52.766 --rc geninfo_all_blocks=1 00:40:52.766 --rc geninfo_unexecuted_blocks=1 00:40:52.766 00:40:52.766 ' 00:40:52.766 05:56:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:52.766 05:56:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:40:52.766 05:56:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:52.767 05:56:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:52.767 05:56:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:52.767 05:56:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:40:52.767 05:56:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:52.767 05:56:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:52.767 05:56:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:52.767 05:56:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:52.767 05:56:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:52.767 05:56:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:52.767 05:56:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:40:52.767 05:56:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:40:52.767 05:56:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:52.767 05:56:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:52.767 05:56:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:52.767 05:56:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:52.767 05:56:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:52.767 05:56:52 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:40:52.767 05:56:52 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:52.767 05:56:52 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:52.767 05:56:52 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:52.767 05:56:52 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:52.767 05:56:52 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:52.767 05:56:52 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:52.767 05:56:52 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # 
export PATH 00:40:52.767 05:56:52 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:52.767 05:56:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:40:52.767 05:56:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:52.767 05:56:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:52.767 05:56:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:52.767 05:56:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:52.767 05:56:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:52.767 05:56:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:52.767 05:56:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:52.767 05:56:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:52.767 05:56:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:52.767 05:56:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:52.767 05:56:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:40:52.767 05:56:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:40:52.767 05:56:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:40:52.767 05:56:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:52.767 05:56:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:52.767 05:56:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:52.767 05:56:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:52.767 05:56:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:52.767 05:56:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:52.767 05:56:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:40:52.767 05:56:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:52.767 05:56:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:40:52.767 05:56:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:40:52.767 05:56:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:40:52.767 05:56:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:59.334 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:59.334 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:40:59.334 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:59.334 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:59.334 05:56:58 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:59.334 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:59.334 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:59.334 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:40:59.334 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:59.334 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:40:59.334 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:40:59.334 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:40:59.334 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:40:59.334 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:40:59.334 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:40:59.334 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:59.334 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:59.334 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:59.334 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:59.334 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:59.334 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:59.334 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:59.334 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:59.334 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:59.334 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:59.334 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:59.334 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:59.334 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:59.334 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:59.334 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:59.334 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:59.334 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:59.334 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:59.334 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:59.334 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:40:59.334 Found 0000:af:00.0 (0x8086 - 0x159b) 00:40:59.334 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:59.334 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:59.334 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:59.334 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:59.334 05:56:58 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:59.334 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:59.334 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:40:59.334 Found 0000:af:00.1 (0x8086 - 0x159b) 00:40:59.334 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:59.334 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:59.334 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:59.334 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:59.334 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:59.334 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:59.334 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:59.334 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:59.334 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:59.334 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:59.334 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:59.334 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:59.334 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:59.334 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:59.334 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:59.334 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:40:59.334 Found net devices under 0000:af:00.0: cvl_0_0 00:40:59.334 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:59.334 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:59.334 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:59.334 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:59.334 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:59.334 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:59.334 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:59.334 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:59.334 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:40:59.334 Found net devices under 0000:af:00.1: cvl_0_1 00:40:59.334 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:59.334 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:40:59.334 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:40:59.334 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:40:59.334 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:40:59.334 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:40:59.334 05:56:58 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:59.334 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:59.334 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:59.334 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:59.335 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:59.335 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:59.335 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:59.335 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:59.335 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:59.335 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:59.335 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:59.335 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:59.335 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:59.335 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:59.335 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:59.335 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:59.335 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:59.335 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:59.335 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:59.335 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:59.335 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:59.335 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:59.335 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:59.335 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:59.335 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.394 ms 00:40:59.335 00:40:59.335 --- 10.0.0.2 ping statistics --- 00:40:59.335 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:59.335 rtt min/avg/max/mdev = 0.394/0.394/0.394/0.000 ms 00:40:59.335 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:59.335 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:59.335 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.177 ms 00:40:59.335 00:40:59.335 --- 10.0.0.1 ping statistics --- 00:40:59.335 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:59.335 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:40:59.335 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:59.335 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:40:59.335 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:40:59.335 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:59.335 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:59.335 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:59.335 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:59.335 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:59.335 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:59.335 05:56:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:40:59.335 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:40:59.335 05:56:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:59.335 05:56:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:59.335 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=618617 00:40:59.335 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 618617 00:40:59.335 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:40:59.335 05:56:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 618617 ']' 00:40:59.335 05:56:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:59.335 05:56:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:59.335 05:56:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:59.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:59.335 05:56:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:59.335 05:56:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:59.335 [2024-12-13 05:56:58.555173] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:40:59.335 [2024-12-13 05:56:58.556123] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:40:59.335 [2024-12-13 05:56:58.556162] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:59.335 [2024-12-13 05:56:58.636750] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:40:59.335 [2024-12-13 05:56:58.659187] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:40:59.335 [2024-12-13 05:56:58.659221] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:59.335 [2024-12-13 05:56:58.659228] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:59.335 [2024-12-13 05:56:58.659233] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:59.335 [2024-12-13 05:56:58.659238] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:59.335 [2024-12-13 05:56:58.660361] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:40:59.335 [2024-12-13 05:56:58.660363] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:40:59.335 [2024-12-13 05:56:58.722726] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:59.335 [2024-12-13 05:56:58.723354] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:40:59.335 [2024-12-13 05:56:58.723562] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:40:59.335 05:56:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:59.335 05:56:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:40:59.335 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:40:59.335 05:56:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:59.335 05:56:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:59.335 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:59.335 05:56:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:40:59.335 05:56:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:40:59.335 05:56:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:40:59.335 05:56:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:40:59.335 5000+0 records in 00:40:59.335 5000+0 records out 00:40:59.335 10240000 bytes (10 MB, 9.8 MiB) copied, 0.017369 s, 590 MB/s 00:40:59.335 05:56:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:40:59.335 05:56:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:59.335 05:56:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:59.335 AIO0 00:40:59.335 05:56:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:59.335 05:56:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:40:59.335 05:56:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:59.335 05:56:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:59.335 [2024-12-13 05:56:58.857173] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:59.335 05:56:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:59.335 05:56:58 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:40:59.335 05:56:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:59.335 05:56:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:59.335 05:56:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:59.335 05:56:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:40:59.335 05:56:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:59.335 05:56:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:59.335 05:56:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:59.335 05:56:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:59.335 05:56:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:59.335 05:56:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:59.335 [2024-12-13 05:56:58.897496] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:59.335 05:56:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:59.335 05:56:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:40:59.335 05:56:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 618617 0 00:40:59.335 05:56:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 618617 0 idle 00:40:59.335 05:56:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=618617 00:40:59.335 05:56:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:40:59.335 05:56:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:40:59.335 05:56:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:40:59.335 05:56:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:59.335 05:56:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:40:59.335 05:56:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:40:59.335 05:56:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:40:59.335 05:56:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:40:59.335 05:56:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:59.335 05:56:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 618617 -w 256 00:40:59.335 05:56:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:40:59.335 05:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 618617 root 20 0 128.2g 46080 33792 S 0.0 0.0 0:00.22 reactor_0' 00:40:59.335 05:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 618617 root 20 0 128.2g 46080 33792 S 0.0 0.0 0:00.22 reactor_0 00:40:59.335 05:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:59.336 05:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:59.336 05:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:40:59.336 05:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=0 00:40:59.336 05:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:40:59.336 05:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:40:59.336 05:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:40:59.336 05:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:59.336 05:56:59 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:40:59.336 05:56:59 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 618617 1 00:40:59.336 05:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 618617 1 idle 00:40:59.336 05:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=618617 00:40:59.336 05:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:40:59.336 05:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:40:59.336 05:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:40:59.336 05:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:59.336 05:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:40:59.336 05:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:40:59.336 05:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:40:59.336 05:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:40:59.336 05:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:59.336 05:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 618617 -w 256 00:40:59.336 05:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:40:59.336 05:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 618624 root 20 0 128.2g 46080 33792 S 0.0 0.0 0:00.00 reactor_1' 00:40:59.336 05:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 618624 root 20 0 128.2g 46080 33792 S 0.0 0.0 0:00.00 reactor_1 00:40:59.336 05:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:59.336 05:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:59.336 05:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:40:59.336 05:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:40:59.336 05:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:40:59.336 05:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:40:59.336 05:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:40:59.336 05:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:59.336 05:56:59 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:40:59.336 05:56:59 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=618671 00:40:59.336 05:56:59 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:40:59.336 05:56:59 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 
00:40:59.336 05:56:59 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:40:59.336 05:56:59 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 618617 0 00:40:59.336 05:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 618617 0 busy 00:40:59.336 05:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=618617 00:40:59.336 05:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:40:59.336 05:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:40:59.336 05:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:40:59.336 05:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:59.336 05:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:40:59.336 05:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:40:59.336 05:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:40:59.336 05:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:59.336 05:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 618617 -w 256 00:40:59.336 05:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:40:59.594 05:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 618617 root 20 0 128.2g 46848 33792 R 99.9 0.1 0:00.42 reactor_0' 00:40:59.594 05:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 618617 root 20 0 128.2g 46848 33792 R 99.9 0.1 0:00.42 reactor_0 00:40:59.594 05:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:59.594 05:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:59.594 05:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:40:59.594 05:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:40:59.594 05:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:40:59.594 05:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:40:59.594 05:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:40:59.594 05:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:59.594 05:56:59 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:40:59.594 05:56:59 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:40:59.594 05:56:59 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 618617 1 00:40:59.594 05:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 618617 1 busy 00:40:59.594 05:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=618617 00:40:59.594 05:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:40:59.594 05:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:40:59.594 05:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:40:59.594 05:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:59.594 05:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:40:59.594 05:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:40:59.594 05:56:59 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@25 -- # (( j = 10 )) 00:40:59.594 05:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:59.594 05:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 618617 -w 256 00:40:59.594 05:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:40:59.851 05:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 618624 root 20 0 128.2g 46848 33792 R 93.8 0.1 0:00.28 reactor_1' 00:40:59.851 05:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 618624 root 20 0 128.2g 46848 33792 R 93.8 0.1 0:00.28 reactor_1 00:40:59.851 05:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:59.851 05:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:59.851 05:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=93.8 00:40:59.851 05:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=93 00:40:59.851 05:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:40:59.851 05:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:40:59.851 05:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:40:59.851 05:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:59.851 05:56:59 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 618671 00:41:09.812 Initializing NVMe Controllers 00:41:09.812 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:41:09.812 Controller IO queue size 256, less than required. 00:41:09.812 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:41:09.812 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:41:09.812 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:41:09.812 Initialization complete. Launching workers. 
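A quick sanity check on the perf summary just below: the run used queue depth 256 (-q 256) and 4096-byte I/O (-o 4096), so Little's law predicts average latency ≈ queue depth / IOPS ≈ 256 / 16983.87 ≈ 15.07 ms, matching the ~15079.84 us reported for the lcore 2 connection; likewise total throughput works out as 33750.74 IOPS × 4096 B ≈ 131.84 MiB/s, agreeing with the MiB/s column.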
00:41:09.812 ======================================================== 00:41:09.812 Latency(us) 00:41:09.812 Device Information : IOPS MiB/s Average min max 00:41:09.812 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 16983.87 66.34 15079.84 3807.03 28552.67 00:41:09.812 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 16766.87 65.50 15272.27 7547.43 26650.74 00:41:09.812 ======================================================== 00:41:09.812 Total : 33750.74 131.84 15175.44 3807.03 28552.67 00:41:09.812 00:41:09.812 05:57:09 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:41:09.812 05:57:09 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 618617 0 00:41:09.812 05:57:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 618617 0 idle 00:41:09.812 05:57:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=618617 00:41:09.812 05:57:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:41:09.813 05:57:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:41:09.813 05:57:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:41:09.813 05:57:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:41:09.813 05:57:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:41:09.813 05:57:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:41:09.813 05:57:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:41:09.813 05:57:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:41:09.813 05:57:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:41:09.813 05:57:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 618617 -w 256 00:41:09.813 05:57:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:41:09.813 05:57:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 618617 root 20 0 128.2g 46848 33792 S 6.7 0.1 0:20.21 reactor_0' 00:41:09.813 05:57:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 618617 root 20 0 128.2g 46848 33792 S 6.7 0.1 0:20.21 reactor_0 00:41:09.813 05:57:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:41:09.813 05:57:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:41:09.813 05:57:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.7 00:41:09.813 05:57:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=6 00:41:09.813 05:57:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:41:09.813 05:57:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:41:09.813 05:57:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:41:09.813 05:57:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:41:09.813 05:57:09 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:41:09.813 05:57:09 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 618617 1 00:41:09.813 05:57:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 618617 1 idle 00:41:09.813 05:57:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=618617 00:41:09.813 05:57:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # 
local idx=1 00:41:09.813 05:57:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:41:09.813 05:57:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:41:09.813 05:57:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:41:09.813 05:57:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:41:09.813 05:57:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:41:09.813 05:57:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:41:09.813 05:57:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:41:09.813 05:57:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:41:09.813 05:57:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 618617 -w 256 00:41:09.813 05:57:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:41:09.813 05:57:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 618624 root 20 0 128.2g 46848 33792 S 0.0 0.1 0:10.00 reactor_1' 00:41:09.813 05:57:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 618624 root 20 0 128.2g 46848 33792 S 0.0 0.1 0:10.00 reactor_1 00:41:09.813 05:57:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:41:09.813 05:57:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:41:09.813 05:57:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:41:09.813 05:57:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:41:09.813 05:57:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:41:09.813 05:57:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:41:09.813 05:57:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:41:09.813 05:57:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:41:09.813 05:57:09 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:41:10.380 05:57:10 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:41:10.380 05:57:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:41:10.380 05:57:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:41:10.380 05:57:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:41:10.380 05:57:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:41:12.284 05:57:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:41:12.284 05:57:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:41:12.284 05:57:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:41:12.284 05:57:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:41:12.284 05:57:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:41:12.284 05:57:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:41:12.284 05:57:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # 
for i in {0..1} 00:41:12.284 05:57:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 618617 0 00:41:12.284 05:57:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 618617 0 idle 00:41:12.284 05:57:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=618617 00:41:12.284 05:57:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:41:12.284 05:57:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:41:12.284 05:57:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:41:12.284 05:57:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:41:12.284 05:57:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:41:12.284 05:57:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:41:12.284 05:57:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:41:12.284 05:57:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:41:12.284 05:57:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:41:12.284 05:57:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 618617 -w 256 00:41:12.284 05:57:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:41:12.544 05:57:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 618617 root 20 0 128.2g 72960 33792 S 0.0 0.1 0:20.43 reactor_0' 00:41:12.544 05:57:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 618617 root 20 0 128.2g 72960 33792 S 0.0 0.1 0:20.43 reactor_0 00:41:12.544 05:57:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:41:12.544 05:57:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:41:12.544 05:57:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:41:12.544 05:57:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:41:12.544 05:57:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:41:12.544 05:57:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:41:12.544 05:57:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:41:12.544 05:57:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:41:12.544 05:57:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:41:12.544 05:57:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 618617 1 00:41:12.544 05:57:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 618617 1 idle 00:41:12.544 05:57:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=618617 00:41:12.544 05:57:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:41:12.544 05:57:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:41:12.544 05:57:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:41:12.544 05:57:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:41:12.544 05:57:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:41:12.544 05:57:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:41:12.544 05:57:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:41:12.544 05:57:12 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:41:12.544 05:57:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:41:12.544 05:57:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 618617 -w 256 00:41:12.544 05:57:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:41:12.544 05:57:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 618624 root 20 0 128.2g 72960 33792 S 0.0 0.1 0:10.08 reactor_1' 00:41:12.544 05:57:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 618624 root 20 0 128.2g 72960 33792 S 0.0 0.1 0:10.08 reactor_1 00:41:12.544 05:57:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:41:12.544 05:57:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:41:12.544 05:57:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:41:12.544 05:57:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:41:12.544 05:57:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:41:12.544 05:57:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:41:12.544 05:57:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:41:12.544 05:57:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:41:12.544 05:57:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:41:12.803 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:41:12.803 05:57:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:41:12.803 05:57:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:41:12.803 05:57:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:41:12.803 05:57:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:41:12.803 05:57:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:41:12.803 05:57:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:41:12.803 05:57:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:41:12.803 05:57:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:41:12.803 05:57:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:41:12.803 05:57:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:41:12.803 05:57:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:41:12.803 05:57:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:12.803 05:57:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:41:12.803 05:57:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:12.803 05:57:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:12.803 rmmod nvme_tcp 00:41:12.803 rmmod nvme_fabrics 00:41:12.803 rmmod nvme_keyring 00:41:12.803 05:57:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:12.803 05:57:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:41:12.803 05:57:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:41:12.803 05:57:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 618617 ']' 00:41:12.803 
05:57:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 618617 00:41:12.803 05:57:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 618617 ']' 00:41:12.803 05:57:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 618617 00:41:12.803 05:57:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:41:12.803 05:57:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:12.803 05:57:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 618617 00:41:13.062 05:57:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:41:13.062 05:57:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:41:13.062 05:57:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 618617' 00:41:13.062 killing process with pid 618617 00:41:13.062 05:57:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 618617 00:41:13.062 05:57:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 618617 00:41:13.062 05:57:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:41:13.062 05:57:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:41:13.062 05:57:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:41:13.062 05:57:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:41:13.062 05:57:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:41:13.062 05:57:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:41:13.062 05:57:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:41:13.062 05:57:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:13.062 05:57:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:13.062 05:57:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:13.062 05:57:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:41:13.062 05:57:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:15.594 05:57:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:15.594 00:41:15.594 real 0m22.657s 00:41:15.594 user 0m39.550s 00:41:15.594 sys 0m8.319s 00:41:15.594 05:57:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:15.594 05:57:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:41:15.594 ************************************ 00:41:15.594 END TEST nvmf_interrupt 00:41:15.594 ************************************ 00:41:15.594 00:41:15.594 real 35m24.447s 00:41:15.594 user 86m15.886s 00:41:15.594 sys 10m14.415s 00:41:15.594 05:57:15 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:15.594 05:57:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:15.594 ************************************ 00:41:15.594 END TEST nvmf_tcp 00:41:15.594 ************************************ 00:41:15.594 05:57:15 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:41:15.594 05:57:15 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:41:15.594 05:57:15 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 
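The teardown traced above (killprocess 618617) is a guard-railed kill rather than a bare signal. A sketch reconstructed from the xtrace — the real helper in autotest_common.sh also special-cases being wrapped in sudo, which is only hinted at here by the comparison against 'sudo':

    # Reconstructed sketch: kill an SPDK app by PID with basic safety checks.
    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1                # the '[' -z ... ']' guard in the trace
        kill -0 "$pid" 2>/dev/null || return 1   # PID must name a live process
        if [[ $(uname) == Linux ]]; then
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")
            [[ $process_name != sudo ]] || return 1   # never signal the sudo wrapper itself
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true   # reap; a killed target exits nonzero, which is expected
    }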
00:41:15.594 05:57:15 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:15.594 05:57:15 -- common/autotest_common.sh@10 -- # set +x 00:41:15.594 ************************************ 00:41:15.594 START TEST spdkcli_nvmf_tcp 00:41:15.594 ************************************ 00:41:15.594 05:57:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:41:15.594 * Looking for test storage... 00:41:15.594 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:41:15.594 05:57:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:41:15.594 05:57:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:41:15.594 05:57:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:41:15.594 05:57:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:41:15.594 05:57:15 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:15.594 05:57:15 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:15.594 05:57:15 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:15.595 05:57:15 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:41:15.595 05:57:15 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:41:15.595 05:57:15 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:41:15.595 05:57:15 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:41:15.595 05:57:15 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:41:15.595 05:57:15 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:41:15.595 05:57:15 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:41:15.595 05:57:15 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:15.595 05:57:15 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:41:15.595 05:57:15 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:41:15.595 05:57:15 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:15.595 05:57:15 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:15.595 05:57:15 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:41:15.595 05:57:15 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:41:15.595 05:57:15 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:15.595 05:57:15 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:41:15.595 05:57:15 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:41:15.595 05:57:15 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:41:15.595 05:57:15 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:41:15.595 05:57:15 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:15.595 05:57:15 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:41:15.595 05:57:15 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:41:15.595 05:57:15 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:15.595 05:57:15 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:15.595 05:57:15 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:41:15.595 05:57:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:15.595 05:57:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:41:15.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:15.595 --rc genhtml_branch_coverage=1 00:41:15.595 --rc genhtml_function_coverage=1 00:41:15.595 --rc genhtml_legend=1 00:41:15.595 --rc geninfo_all_blocks=1 00:41:15.595 --rc geninfo_unexecuted_blocks=1 00:41:15.595 00:41:15.595 ' 00:41:15.595 05:57:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:41:15.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:15.595 --rc genhtml_branch_coverage=1 00:41:15.595 --rc genhtml_function_coverage=1 00:41:15.595 --rc genhtml_legend=1 00:41:15.595 --rc geninfo_all_blocks=1 00:41:15.595 --rc geninfo_unexecuted_blocks=1 00:41:15.595 00:41:15.595 ' 00:41:15.595 05:57:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:41:15.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:15.595 --rc genhtml_branch_coverage=1 00:41:15.595 --rc genhtml_function_coverage=1 00:41:15.595 --rc genhtml_legend=1 00:41:15.595 --rc geninfo_all_blocks=1 00:41:15.595 --rc geninfo_unexecuted_blocks=1 00:41:15.595 00:41:15.595 ' 00:41:15.595 05:57:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:41:15.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:15.595 --rc genhtml_branch_coverage=1 00:41:15.595 --rc genhtml_function_coverage=1 00:41:15.595 --rc genhtml_legend=1 00:41:15.595 --rc geninfo_all_blocks=1 00:41:15.595 --rc geninfo_unexecuted_blocks=1 00:41:15.595 00:41:15.595 ' 00:41:15.595 05:57:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:41:15.595 05:57:15 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:41:15.595 05:57:15 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:41:15.595 05:57:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:15.595 05:57:15 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:41:15.595 
05:57:15 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:15.595 05:57:15 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:15.595 05:57:15 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:15.595 05:57:15 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:15.595 05:57:15 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:15.595 05:57:15 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:15.595 05:57:15 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:15.595 05:57:15 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:15.595 05:57:15 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:15.595 05:57:15 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:15.595 05:57:15 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:41:15.595 05:57:15 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:41:15.595 05:57:15 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:15.595 05:57:15 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:15.595 05:57:15 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:15.595 05:57:15 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:15.595 05:57:15 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:15.595 05:57:15 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:41:15.595 05:57:15 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:15.595 05:57:15 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:15.595 05:57:15 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:15.595 05:57:15 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:15.595 05:57:15 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:15.595 05:57:15 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:15.595 05:57:15 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:41:15.595 05:57:15 
spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:15.595 05:57:15 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:41:15.595 05:57:15 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:15.595 05:57:15 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:15.595 05:57:15 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:15.595 05:57:15 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:15.595 05:57:15 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:15.595 05:57:15 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:41:15.595 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:41:15.595 05:57:15 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:15.595 05:57:15 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:15.595 05:57:15 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:15.595 05:57:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:41:15.595 05:57:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:41:15.595 05:57:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:41:15.595 05:57:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:41:15.595 05:57:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:15.595 05:57:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:15.595 05:57:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:41:15.595 05:57:15 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=621858 00:41:15.595 05:57:15 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 621858 00:41:15.595 05:57:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 621858 ']' 00:41:15.595 05:57:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:15.595 05:57:15 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:41:15.595 05:57:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:15.595 05:57:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:15.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:15.595 05:57:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:15.595 05:57:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:15.595 [2024-12-13 05:57:15.459724] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
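The target for this spdkcli run was started as nvmf_tgt -m 0x3 -p 0: a two-core mask (cores 0 and 1, matching the two reactor notices in the startup output) with core 0 as the main core, after which waitforlisten blocks until the RPC socket answers. A rough equivalent of that handshake — the poll loop is a hypothetical stand-in for waitforlisten, SPDK_DIR is an assumed variable, and rpc_get_methods is a standard SPDK RPC:

    # Launch the target on cores 0-1 and wait for its RPC socket to come up.
    "$SPDK_DIR"/build/bin/nvmf_tgt -m 0x3 -p 0 &
    nvmf_tgt_pid=$!
    until "$SPDK_DIR"/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        sleep 0.5   # keep polling until the app listens on /var/tmp/spdk.sock
    done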
00:41:15.595 [2024-12-13 05:57:15.459770] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid621858 ] 00:41:15.595 [2024-12-13 05:57:15.531962] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:41:15.595 [2024-12-13 05:57:15.556288] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:41:15.595 [2024-12-13 05:57:15.556290] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:41:15.853 05:57:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:15.853 05:57:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:41:15.853 05:57:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:41:15.853 05:57:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:15.853 05:57:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:15.853 05:57:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:41:15.853 05:57:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:41:15.853 05:57:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:41:15.853 05:57:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:15.853 05:57:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:15.853 05:57:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:41:15.853 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:41:15.853 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:41:15.853 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:41:15.853 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:41:15.853 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:41:15.853 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:41:15.853 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:41:15.853 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:41:15.853 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:41:15.853 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:41:15.853 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:41:15.853 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:41:15.853 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:41:15.853 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:41:15.853 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:41:15.853 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' 
'\''127.0.0.1:4260'\'' True 00:41:15.853 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:41:15.853 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:41:15.853 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:41:15.853 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:41:15.853 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:41:15.853 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:41:15.853 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:41:15.853 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:41:15.853 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:41:15.854 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:41:15.854 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:41:15.854 ' 00:41:19.133 [2024-12-13 05:57:18.413347] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:20.066 [2024-12-13 05:57:19.753651] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:41:22.593 [2024-12-13 05:57:22.225364] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:41:24.491 [2024-12-13 05:57:24.380079] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:41:26.389 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:41:26.389 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:41:26.389 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:41:26.389 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:41:26.389 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:41:26.389 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:41:26.389 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:41:26.389 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:41:26.389 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:41:26.389 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:41:26.389 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:41:26.389 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:41:26.389 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:41:26.389 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:41:26.389 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:41:26.389 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:41:26.389 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:41:26.389 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:41:26.389 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:41:26.389 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:41:26.389 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:41:26.389 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:41:26.389 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:41:26.389 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:41:26.389 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:41:26.389 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:41:26.389 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:41:26.389 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:41:26.389 05:57:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:41:26.389 05:57:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:26.389 05:57:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:26.389 05:57:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:41:26.389 05:57:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:26.389 05:57:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:26.389 05:57:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:41:26.389 05:57:26 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:41:26.647 05:57:26 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:41:26.647 05:57:26 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:41:26.647 05:57:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:41:26.647 05:57:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:26.647 05:57:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:26.647 
05:57:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:41:26.647 05:57:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:26.647 05:57:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:26.647 05:57:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:41:26.647 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:41:26.647 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:41:26.647 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:41:26.647 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:41:26.647 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:41:26.647 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:41:26.647 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:41:26.647 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:41:26.647 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:41:26.647 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:41:26.647 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:41:26.647 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:41:26.647 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:41:26.647 ' 00:41:33.203 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:41:33.203 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:41:33.203 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:41:33.203 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:41:33.203 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:41:33.203 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:41:33.203 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:41:33.203 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:41:33.203 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:41:33.203 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:41:33.203 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:41:33.203 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:41:33.203 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:41:33.203 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:41:33.203 05:57:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:41:33.203 05:57:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:33.203 05:57:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:33.203 
05:57:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 621858 00:41:33.203 05:57:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 621858 ']' 00:41:33.203 05:57:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 621858 00:41:33.203 05:57:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:41:33.203 05:57:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:33.203 05:57:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 621858 00:41:33.203 05:57:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:41:33.203 05:57:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:41:33.203 05:57:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 621858' 00:41:33.203 killing process with pid 621858 00:41:33.203 05:57:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 621858 00:41:33.203 05:57:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 621858 00:41:33.203 05:57:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:41:33.203 05:57:32 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:41:33.203 05:57:32 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 621858 ']' 00:41:33.203 05:57:32 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 621858 00:41:33.203 05:57:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 621858 ']' 00:41:33.203 05:57:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 621858 00:41:33.203 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (621858) - No such process 00:41:33.203 05:57:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 621858 is not found' 00:41:33.203 Process with pid 621858 is not found 00:41:33.203 05:57:32 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:41:33.203 05:57:32 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:41:33.203 05:57:32 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:41:33.203 00:41:33.203 real 0m17.318s 00:41:33.203 user 0m38.190s 00:41:33.203 sys 0m0.837s 00:41:33.203 05:57:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:33.203 05:57:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:33.203 ************************************ 00:41:33.203 END TEST spdkcli_nvmf_tcp 00:41:33.203 ************************************ 00:41:33.203 05:57:32 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:41:33.203 05:57:32 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:41:33.203 05:57:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:33.203 05:57:32 -- common/autotest_common.sh@10 -- # set +x 00:41:33.203 ************************************ 00:41:33.203 START TEST nvmf_identify_passthru 00:41:33.203 ************************************ 00:41:33.203 05:57:32 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:41:33.203 * Looking for test storage... 
00:41:33.203 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:33.203 05:57:32 nvmf_identify_passthru -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:41:33.203 05:57:32 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lcov --version 00:41:33.203 05:57:32 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:41:33.203 05:57:32 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:41:33.203 05:57:32 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:33.203 05:57:32 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:33.203 05:57:32 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:33.203 05:57:32 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:41:33.203 05:57:32 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:41:33.203 05:57:32 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:41:33.203 05:57:32 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:41:33.203 05:57:32 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:41:33.203 05:57:32 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:41:33.203 05:57:32 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:41:33.203 05:57:32 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:33.203 05:57:32 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:41:33.203 05:57:32 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:41:33.203 05:57:32 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:33.203 05:57:32 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:33.203 05:57:32 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:41:33.203 05:57:32 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:41:33.203 05:57:32 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:33.203 05:57:32 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:41:33.203 05:57:32 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:41:33.203 05:57:32 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:41:33.203 05:57:32 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:41:33.203 05:57:32 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:33.203 05:57:32 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:41:33.203 05:57:32 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:41:33.203 05:57:32 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:33.203 05:57:32 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:33.203 05:57:32 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:41:33.203 05:57:32 nvmf_identify_passthru -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:33.203 05:57:32 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:41:33.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:33.203 --rc genhtml_branch_coverage=1 00:41:33.203 --rc genhtml_function_coverage=1 00:41:33.203 --rc genhtml_legend=1 00:41:33.203 --rc geninfo_all_blocks=1 00:41:33.203 --rc geninfo_unexecuted_blocks=1 00:41:33.203 00:41:33.203 ' 00:41:33.203 05:57:32 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:41:33.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:33.203 --rc genhtml_branch_coverage=1 00:41:33.203 --rc genhtml_function_coverage=1 00:41:33.203 --rc genhtml_legend=1 00:41:33.203 --rc geninfo_all_blocks=1 00:41:33.203 --rc geninfo_unexecuted_blocks=1 00:41:33.203 00:41:33.203 ' 00:41:33.203 05:57:32 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:41:33.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:33.203 --rc genhtml_branch_coverage=1 00:41:33.203 --rc genhtml_function_coverage=1 00:41:33.203 --rc genhtml_legend=1 00:41:33.203 --rc geninfo_all_blocks=1 00:41:33.203 --rc geninfo_unexecuted_blocks=1 00:41:33.204 00:41:33.204 ' 00:41:33.204 05:57:32 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:41:33.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:33.204 --rc genhtml_branch_coverage=1 00:41:33.204 --rc genhtml_function_coverage=1 00:41:33.204 --rc genhtml_legend=1 00:41:33.204 --rc geninfo_all_blocks=1 00:41:33.204 --rc geninfo_unexecuted_blocks=1 00:41:33.204 00:41:33.204 ' 00:41:33.204 05:57:32 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:33.204 05:57:32 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:41:33.204 05:57:32 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:33.204 05:57:32 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:33.204 05:57:32 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:33.204 05:57:32 nvmf_identify_passthru -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:41:33.204 05:57:32 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:33.204 05:57:32 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:33.204 05:57:32 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:33.204 05:57:32 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:33.204 05:57:32 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:33.204 05:57:32 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:33.204 05:57:32 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:41:33.204 05:57:32 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:41:33.204 05:57:32 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:33.204 05:57:32 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:33.204 05:57:32 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:33.204 05:57:32 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:33.204 05:57:32 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:33.204 05:57:32 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:41:33.204 05:57:32 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:33.204 05:57:32 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:33.204 05:57:32 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:33.204 05:57:32 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:33.204 05:57:32 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:33.204 05:57:32 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:33.204 05:57:32 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:41:33.204 05:57:32 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:33.204 05:57:32 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:41:33.204 05:57:32 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:33.204 05:57:32 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:33.204 05:57:32 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:33.204 05:57:32 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:33.204 05:57:32 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:33.204 05:57:32 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:41:33.204 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:41:33.204 05:57:32 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:33.204 05:57:32 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:33.204 05:57:32 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:33.204 05:57:32 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:33.204 05:57:32 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:41:33.204 05:57:32 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:33.204 05:57:32 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:33.204 05:57:32 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:33.204 05:57:32 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:33.204 05:57:32 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:33.204 05:57:32 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:33.204 05:57:32 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:41:33.204 05:57:32 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:33.204 05:57:32 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:41:33.204 05:57:32 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:41:33.204 05:57:32 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:33.204 05:57:32 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:41:33.204 05:57:32 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:41:33.204 05:57:32 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:41:33.204 05:57:32 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:33.204 05:57:32 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:41:33.204 05:57:32 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:33.204 05:57:32 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:41:33.204 05:57:32 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:41:33.204 05:57:32 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:41:33.204 05:57:32 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:38.477 05:57:38 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:38.477 05:57:38 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:41:38.477 05:57:38 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:38.477 05:57:38 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:38.477 05:57:38 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:38.477 05:57:38 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:41:38.477 05:57:38 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:41:38.477 05:57:38 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:41:38.477 05:57:38 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:41:38.477 05:57:38 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:41:38.477 05:57:38 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:41:38.477 05:57:38 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:41:38.477 05:57:38 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:41:38.477 05:57:38 
nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:41:38.477 05:57:38 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:41:38.477 05:57:38 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:38.477 05:57:38 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:38.477 05:57:38 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:38.477 05:57:38 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:38.477 05:57:38 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:38.477 05:57:38 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:38.477 05:57:38 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:38.477 05:57:38 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:41:38.477 05:57:38 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:38.477 05:57:38 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:38.477 05:57:38 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:38.477 05:57:38 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:38.477 05:57:38 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:41:38.477 05:57:38 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:41:38.477 05:57:38 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:41:38.477 05:57:38 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:41:38.477 05:57:38 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:41:38.477 05:57:38 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:41:38.477 05:57:38 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:38.477 05:57:38 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:41:38.477 Found 0000:af:00.0 (0x8086 - 0x159b) 00:41:38.477 05:57:38 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:38.477 05:57:38 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:38.477 05:57:38 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:38.477 05:57:38 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:38.477 05:57:38 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:38.477 05:57:38 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:38.477 05:57:38 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:41:38.477 Found 0000:af:00.1 (0x8086 - 0x159b) 00:41:38.477 05:57:38 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:38.477 05:57:38 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:38.477 05:57:38 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:38.477 05:57:38 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:38.477 05:57:38 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:41:38.477 05:57:38 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:41:38.477 05:57:38 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:41:38.477 05:57:38 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:41:38.477 05:57:38 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:38.477 05:57:38 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:38.477 05:57:38 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:38.477 05:57:38 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:38.477 05:57:38 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:38.477 05:57:38 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:38.477 05:57:38 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:38.477 05:57:38 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:41:38.477 Found net devices under 0000:af:00.0: cvl_0_0 00:41:38.477 05:57:38 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:38.477 05:57:38 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:38.477 05:57:38 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:38.477 05:57:38 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:38.477 05:57:38 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:38.477 05:57:38 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:38.477 05:57:38 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:38.477 05:57:38 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:38.477 05:57:38 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:41:38.477 Found net devices under 0000:af:00.1: cvl_0_1 00:41:38.477 05:57:38 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:38.477 05:57:38 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:41:38.477 05:57:38 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:41:38.477 05:57:38 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:41:38.477 05:57:38 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:41:38.477 05:57:38 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:41:38.477 05:57:38 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:38.477 05:57:38 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:38.477 05:57:38 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:38.477 05:57:38 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:38.477 05:57:38 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:38.477 05:57:38 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:38.477 05:57:38 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:38.477 05:57:38 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:38.477 05:57:38 nvmf_identify_passthru -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:38.478 05:57:38 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:38.478 05:57:38 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:38.478 05:57:38 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:38.478 05:57:38 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:38.478 05:57:38 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:41:38.478 05:57:38 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:38.478 05:57:38 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:38.478 05:57:38 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:38.478 05:57:38 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:38.478 05:57:38 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:38.737 05:57:38 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:38.737 05:57:38 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:38.737 05:57:38 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:38.737 05:57:38 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:38.737 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:38.737 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.236 ms 00:41:38.737 00:41:38.737 --- 10.0.0.2 ping statistics --- 00:41:38.737 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:38.737 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:41:38.737 05:57:38 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:38.737 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
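For reference, the loopback topology that nvmf_tcp_init builds in the trace above can be reproduced on its own. This is a minimal sketch using the names from this run; the cvl_0_* interface names and the 10.0.0.0/24 addresses are specific to this host, and the commands need root:

ip netns add cvl_0_0_ns_spdk                  # target side lives in its own namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator address stays on the host side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

The back-to-back pings in the trace then verify the path in both directions before the target application is started inside the namespace.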
00:41:38.737 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms 00:41:38.737 00:41:38.737 --- 10.0.0.1 ping statistics --- 00:41:38.737 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:38.737 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:41:38.737 05:57:38 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:38.737 05:57:38 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:41:38.737 05:57:38 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:41:38.737 05:57:38 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:38.737 05:57:38 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:41:38.737 05:57:38 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:41:38.737 05:57:38 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:38.737 05:57:38 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:41:38.737 05:57:38 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:41:38.737 05:57:38 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:41:38.737 05:57:38 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:38.737 05:57:38 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:38.737 05:57:38 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:41:38.737 05:57:38 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:41:38.737 05:57:38 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:41:38.737 05:57:38 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:41:38.737 05:57:38 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:41:38.737 05:57:38 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:41:38.737 05:57:38 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:41:38.737 05:57:38 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:41:38.737 05:57:38 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:41:38.737 05:57:38 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:41:38.737 05:57:38 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:41:38.737 05:57:38 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:41:38.737 05:57:38 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:5e:00.0 00:41:38.737 05:57:38 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:5e:00.0 00:41:38.737 05:57:38 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:5e:00.0 ']' 00:41:38.737 05:57:38 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:41:38.737 05:57:38 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:41:38.737 05:57:38 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:41:42.923 05:57:42 nvmf_identify_passthru -- target/identify_passthru.sh@23 
-- # nvme_serial_number=BTLJ7244049A1P0FGN 00:41:42.923 05:57:42 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:41:42.923 05:57:42 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:41:42.923 05:57:42 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:41:47.107 05:57:47 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:41:47.107 05:57:47 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:41:47.107 05:57:47 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:47.107 05:57:47 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:47.107 05:57:47 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:41:47.107 05:57:47 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:47.107 05:57:47 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:47.107 05:57:47 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=628914 00:41:47.107 05:57:47 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:41:47.107 05:57:47 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:41:47.107 05:57:47 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 628914 00:41:47.107 05:57:47 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 628914 ']' 00:41:47.107 05:57:47 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:47.107 05:57:47 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:47.107 05:57:47 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:47.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:47.107 05:57:47 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:47.107 05:57:47 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:47.366 [2024-12-13 05:57:47.163035] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:41:47.366 [2024-12-13 05:57:47.163078] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:47.366 [2024-12-13 05:57:47.221250] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:41:47.366 [2024-12-13 05:57:47.244382] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:47.366 [2024-12-13 05:57:47.244421] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
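The serial and model numbers captured above come from a plain text pipeline over the local PCIe controller. A minimal sketch of that step, with the BDF 0000:5e:00.0 and the binary path taken from this run:

bdf=0000:5e:00.0
identify=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify
# identify the controller locally and keep the third field of the matching line
nvme_serial_number=$("$identify" -r "trtype:PCIe traddr:$bdf" -i 0 | grep 'Serial Number:' | awk '{print $3}')
nvme_model_number=$("$identify" -r "trtype:PCIe traddr:$bdf" -i 0 | grep 'Model Number:' | awk '{print $3}')

These values are held aside for comparison against what the NVMe-oF target reports once the passthru subsystem is serving the same controller over TCP.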
00:41:47.366 [2024-12-13 05:57:47.244429] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:47.366 [2024-12-13 05:57:47.244434] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:47.366 [2024-12-13 05:57:47.244439] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:47.366 [2024-12-13 05:57:47.245749] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:41:47.366 [2024-12-13 05:57:47.245862] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:41:47.366 [2024-12-13 05:57:47.245984] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:41:47.366 [2024-12-13 05:57:47.245986] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:41:47.366 05:57:47 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:47.366 05:57:47 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:41:47.366 05:57:47 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:41:47.366 05:57:47 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:47.366 05:57:47 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:47.366 INFO: Log level set to 20 00:41:47.366 INFO: Requests: 00:41:47.366 { 00:41:47.366 "jsonrpc": "2.0", 00:41:47.366 "method": "nvmf_set_config", 00:41:47.366 "id": 1, 00:41:47.366 "params": { 00:41:47.366 "admin_cmd_passthru": { 00:41:47.366 "identify_ctrlr": true 00:41:47.366 } 00:41:47.366 } 00:41:47.366 } 00:41:47.366 00:41:47.366 INFO: response: 00:41:47.366 { 00:41:47.366 "jsonrpc": "2.0", 00:41:47.366 "id": 1, 00:41:47.366 "result": true 00:41:47.366 } 00:41:47.366 00:41:47.366 05:57:47 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:47.366 05:57:47 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:41:47.366 05:57:47 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:47.366 05:57:47 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:47.366 INFO: Setting log level to 20 00:41:47.366 INFO: Setting log level to 20 00:41:47.366 INFO: Log level set to 20 00:41:47.366 INFO: Log level set to 20 00:41:47.366 INFO: Requests: 00:41:47.366 { 00:41:47.366 "jsonrpc": "2.0", 00:41:47.366 "method": "framework_start_init", 00:41:47.366 "id": 1 00:41:47.366 } 00:41:47.366 00:41:47.366 INFO: Requests: 00:41:47.366 { 00:41:47.366 "jsonrpc": "2.0", 00:41:47.366 "method": "framework_start_init", 00:41:47.366 "id": 1 00:41:47.366 } 00:41:47.366 00:41:47.624 [2024-12-13 05:57:47.406490] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:41:47.624 INFO: response: 00:41:47.624 { 00:41:47.624 "jsonrpc": "2.0", 00:41:47.624 "id": 1, 00:41:47.624 "result": true 00:41:47.624 } 00:41:47.624 00:41:47.624 INFO: response: 00:41:47.624 { 00:41:47.624 "jsonrpc": "2.0", 00:41:47.624 "id": 1, 00:41:47.624 "result": true 00:41:47.624 } 00:41:47.624 00:41:47.624 05:57:47 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:47.624 05:57:47 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:41:47.624 05:57:47 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:47.624 05:57:47 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:41:47.625 INFO: Setting log level to 40 00:41:47.625 INFO: Setting log level to 40 00:41:47.625 INFO: Setting log level to 40 00:41:47.625 [2024-12-13 05:57:47.419792] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:47.625 05:57:47 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:47.625 05:57:47 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:41:47.625 05:57:47 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:47.625 05:57:47 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:47.625 05:57:47 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 00:41:47.625 05:57:47 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:47.625 05:57:47 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:50.906 Nvme0n1 00:41:50.906 05:57:50 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:50.906 05:57:50 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:41:50.906 05:57:50 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:50.906 05:57:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:50.906 05:57:50 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:50.906 05:57:50 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:41:50.906 05:57:50 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:50.906 05:57:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:50.906 05:57:50 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:50.906 05:57:50 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:50.906 05:57:50 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:50.906 05:57:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:50.906 [2024-12-13 05:57:50.332828] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:50.906 05:57:50 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:50.906 05:57:50 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:41:50.906 05:57:50 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:50.906 05:57:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:50.906 [ 00:41:50.906 { 00:41:50.906 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:41:50.906 "subtype": "Discovery", 00:41:50.906 "listen_addresses": [], 00:41:50.906 "allow_any_host": true, 00:41:50.906 "hosts": [] 00:41:50.906 }, 00:41:50.906 { 00:41:50.906 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:41:50.906 "subtype": "NVMe", 00:41:50.906 "listen_addresses": [ 00:41:50.906 { 00:41:50.906 "trtype": "TCP", 00:41:50.906 "adrfam": "IPv4", 00:41:50.906 "traddr": "10.0.0.2", 00:41:50.906 "trsvcid": "4420" 00:41:50.906 } 00:41:50.906 ], 00:41:50.906 "allow_any_host": true, 00:41:50.906 "hosts": [], 00:41:50.906 "serial_number": 
"SPDK00000000000001", 00:41:50.906 "model_number": "SPDK bdev Controller", 00:41:50.906 "max_namespaces": 1, 00:41:50.906 "min_cntlid": 1, 00:41:50.906 "max_cntlid": 65519, 00:41:50.906 "namespaces": [ 00:41:50.906 { 00:41:50.906 "nsid": 1, 00:41:50.906 "bdev_name": "Nvme0n1", 00:41:50.906 "name": "Nvme0n1", 00:41:50.906 "nguid": "7608F1D2EF754509A26FEF40440D69B5", 00:41:50.906 "uuid": "7608f1d2-ef75-4509-a26f-ef40440d69b5" 00:41:50.906 } 00:41:50.906 ] 00:41:50.906 } 00:41:50.906 ] 00:41:50.906 05:57:50 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:50.906 05:57:50 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:41:50.906 05:57:50 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:41:50.906 05:57:50 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:41:50.906 05:57:50 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ7244049A1P0FGN 00:41:50.906 05:57:50 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:41:50.906 05:57:50 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:41:50.906 05:57:50 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:41:50.906 05:57:50 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:41:50.906 05:57:50 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ7244049A1P0FGN '!=' BTLJ7244049A1P0FGN ']' 00:41:50.906 05:57:50 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:41:50.906 05:57:50 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:41:50.906 05:57:50 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:50.906 05:57:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:50.906 05:57:50 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:50.906 05:57:50 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:41:50.907 05:57:50 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:41:50.907 05:57:50 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:41:50.907 05:57:50 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:41:50.907 05:57:50 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:50.907 05:57:50 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:41:50.907 05:57:50 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:50.907 05:57:50 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:50.907 rmmod nvme_tcp 00:41:50.907 rmmod nvme_fabrics 00:41:50.907 rmmod nvme_keyring 00:41:50.907 05:57:50 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:50.907 05:57:50 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:41:50.907 05:57:50 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:41:50.907 05:57:50 nvmf_identify_passthru -- nvmf/common.sh@517 -- # 
'[' -n 628914 ']' 00:41:50.907 05:57:50 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 628914 00:41:50.907 05:57:50 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 628914 ']' 00:41:50.907 05:57:50 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 628914 00:41:50.907 05:57:50 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:41:50.907 05:57:50 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:50.907 05:57:50 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 628914 00:41:51.164 05:57:50 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:41:51.164 05:57:50 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:41:51.164 05:57:50 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 628914' 00:41:51.164 killing process with pid 628914 00:41:51.164 05:57:50 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 628914 00:41:51.164 05:57:50 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 628914 00:41:52.537 05:57:52 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:41:52.537 05:57:52 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:41:52.537 05:57:52 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:41:52.537 05:57:52 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:41:52.537 05:57:52 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:41:52.537 05:57:52 nvmf_identify_passthru -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:41:52.537 05:57:52 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:41:52.537 05:57:52 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:52.537 05:57:52 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:52.537 05:57:52 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:52.537 05:57:52 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:41:52.537 05:57:52 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:54.441 05:57:54 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:54.700 00:41:54.700 real 0m21.869s 00:41:54.700 user 0m28.073s 00:41:54.700 sys 0m5.247s 00:41:54.700 05:57:54 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:54.700 05:57:54 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:54.700 ************************************ 00:41:54.700 END TEST nvmf_identify_passthru 00:41:54.700 ************************************ 00:41:54.700 05:57:54 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:41:54.700 05:57:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:41:54.700 05:57:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:54.700 05:57:54 -- common/autotest_common.sh@10 -- # set +x 00:41:54.700 ************************************ 00:41:54.700 START TEST nvmf_dif 00:41:54.700 ************************************ 00:41:54.700 05:57:54 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:41:54.700 * Looking for test storage... 
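As a recap of the test that just finished: the pass/fail decision of nvmf_identify_passthru reduces to two string comparisons, local identify versus identify over NVMe/TCP. A minimal sketch of that check (the literal values are the ones from this run):

# fail if the fabrics path reports different identity data than PCIe did
[ "$nvmf_serial_number" = "$nvme_serial_number" ] || exit 1   # BTLJ7244049A1P0FGN on both sides here
[ "$nvmf_model_number" = "$nvme_model_number" ]   || exit 1   # INTEL on both sides here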
00:41:54.700 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:54.700 05:57:54 nvmf_dif -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:41:54.700 05:57:54 nvmf_dif -- common/autotest_common.sh@1711 -- # lcov --version 00:41:54.700 05:57:54 nvmf_dif -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:41:54.700 05:57:54 nvmf_dif -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:41:54.700 05:57:54 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:54.700 05:57:54 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:54.700 05:57:54 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:54.700 05:57:54 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:41:54.700 05:57:54 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:41:54.700 05:57:54 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:41:54.700 05:57:54 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:41:54.700 05:57:54 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:41:54.700 05:57:54 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:41:54.700 05:57:54 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:41:54.700 05:57:54 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:54.700 05:57:54 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:41:54.700 05:57:54 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:41:54.700 05:57:54 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:54.700 05:57:54 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:41:54.700 05:57:54 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:41:54.700 05:57:54 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:41:54.700 05:57:54 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:54.700 05:57:54 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:41:54.700 05:57:54 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:41:54.700 05:57:54 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:41:54.700 05:57:54 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:41:54.700 05:57:54 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:54.700 05:57:54 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:41:54.700 05:57:54 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:41:54.700 05:57:54 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:54.700 05:57:54 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:54.700 05:57:54 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:41:54.700 05:57:54 nvmf_dif -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:54.700 05:57:54 nvmf_dif -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:41:54.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:54.700 --rc genhtml_branch_coverage=1 00:41:54.700 --rc genhtml_function_coverage=1 00:41:54.700 --rc genhtml_legend=1 00:41:54.701 --rc geninfo_all_blocks=1 00:41:54.701 --rc geninfo_unexecuted_blocks=1 00:41:54.701 00:41:54.701 ' 00:41:54.701 05:57:54 nvmf_dif -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:41:54.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:54.701 --rc genhtml_branch_coverage=1 00:41:54.701 --rc genhtml_function_coverage=1 00:41:54.701 --rc genhtml_legend=1 00:41:54.701 --rc geninfo_all_blocks=1 00:41:54.701 --rc geninfo_unexecuted_blocks=1 00:41:54.701 00:41:54.701 ' 00:41:54.701 05:57:54 nvmf_dif -- common/autotest_common.sh@1725 -- # 
export 'LCOV=lcov 00:41:54.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:54.701 --rc genhtml_branch_coverage=1 00:41:54.701 --rc genhtml_function_coverage=1 00:41:54.701 --rc genhtml_legend=1 00:41:54.701 --rc geninfo_all_blocks=1 00:41:54.701 --rc geninfo_unexecuted_blocks=1 00:41:54.701 00:41:54.701 ' 00:41:54.701 05:57:54 nvmf_dif -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:41:54.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:54.701 --rc genhtml_branch_coverage=1 00:41:54.701 --rc genhtml_function_coverage=1 00:41:54.701 --rc genhtml_legend=1 00:41:54.701 --rc geninfo_all_blocks=1 00:41:54.701 --rc geninfo_unexecuted_blocks=1 00:41:54.701 00:41:54.701 ' 00:41:54.701 05:57:54 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:54.960 05:57:54 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:41:54.960 05:57:54 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:54.960 05:57:54 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:54.960 05:57:54 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:54.960 05:57:54 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:54.960 05:57:54 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:54.960 05:57:54 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:54.960 05:57:54 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:54.960 05:57:54 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:54.960 05:57:54 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:54.960 05:57:54 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:54.960 05:57:54 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:41:54.960 05:57:54 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:41:54.960 05:57:54 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:54.960 05:57:54 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:54.960 05:57:54 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:54.960 05:57:54 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:54.960 05:57:54 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:54.960 05:57:54 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:41:54.960 05:57:54 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:54.960 05:57:54 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:54.960 05:57:54 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:54.960 05:57:54 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:54.960 05:57:54 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:54.960 05:57:54 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:54.960 05:57:54 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:41:54.960 05:57:54 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:54.960 05:57:54 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:41:54.960 05:57:54 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:54.960 05:57:54 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:54.960 05:57:54 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:54.960 05:57:54 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:54.960 05:57:54 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:54.960 05:57:54 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:41:54.960 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:41:54.960 05:57:54 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:54.960 05:57:54 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:54.960 05:57:54 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:54.960 05:57:54 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:41:54.960 05:57:54 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:41:54.960 05:57:54 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:41:54.960 05:57:54 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:41:54.960 05:57:54 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:41:54.960 05:57:54 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:41:54.960 05:57:54 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:54.960 05:57:54 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:41:54.960 05:57:54 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:41:54.960 05:57:54 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:41:54.960 05:57:54 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:54.960 05:57:54 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:41:54.960 05:57:54 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:54.960 05:57:54 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:41:54.960 05:57:54 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:41:54.960 05:57:54 nvmf_dif -- nvmf/common.sh@309 -- # 
xtrace_disable 00:41:54.960 05:57:54 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:00.344 05:58:00 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:42:00.344 05:58:00 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:42:00.344 05:58:00 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:42:00.344 05:58:00 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:42:00.344 05:58:00 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:42:00.344 05:58:00 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:42:00.344 05:58:00 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:42:00.344 05:58:00 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:42:00.344 05:58:00 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:42:00.344 05:58:00 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:42:00.344 05:58:00 nvmf_dif -- nvmf/common.sh@320 -- # local -ga e810 00:42:00.344 05:58:00 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:42:00.344 05:58:00 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:42:00.344 05:58:00 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:42:00.344 05:58:00 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:42:00.344 05:58:00 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:42:00.344 05:58:00 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:42:00.344 05:58:00 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:42:00.344 05:58:00 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:42:00.344 05:58:00 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:42:00.344 05:58:00 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:42:00.344 05:58:00 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:42:00.344 05:58:00 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:42:00.344 05:58:00 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:42:00.344 05:58:00 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:42:00.344 05:58:00 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:42:00.344 05:58:00 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:42:00.344 05:58:00 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:42:00.344 05:58:00 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:42:00.344 05:58:00 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:42:00.344 05:58:00 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:42:00.344 05:58:00 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:42:00.344 05:58:00 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:42:00.344 05:58:00 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:00.344 05:58:00 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:42:00.344 Found 0000:af:00.0 (0x8086 - 0x159b) 00:42:00.344 05:58:00 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:00.344 05:58:00 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:00.344 05:58:00 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:00.344 05:58:00 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:00.345 05:58:00 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:00.345 
05:58:00 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:00.345 05:58:00 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:42:00.345 Found 0000:af:00.1 (0x8086 - 0x159b) 00:42:00.345 05:58:00 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:00.345 05:58:00 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:00.345 05:58:00 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:00.345 05:58:00 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:00.345 05:58:00 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:00.345 05:58:00 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:42:00.345 05:58:00 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:42:00.345 05:58:00 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:42:00.345 05:58:00 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:00.345 05:58:00 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:00.345 05:58:00 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:00.345 05:58:00 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:00.345 05:58:00 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:00.345 05:58:00 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:00.345 05:58:00 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:00.345 05:58:00 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:42:00.345 Found net devices under 0000:af:00.0: cvl_0_0 00:42:00.345 05:58:00 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:00.345 05:58:00 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:00.345 05:58:00 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:00.345 05:58:00 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:00.345 05:58:00 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:00.345 05:58:00 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:00.345 05:58:00 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:00.345 05:58:00 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:00.345 05:58:00 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:42:00.345 Found net devices under 0000:af:00.1: cvl_0_1 00:42:00.345 05:58:00 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:00.345 05:58:00 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:42:00.345 05:58:00 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:42:00.345 05:58:00 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:42:00.345 05:58:00 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:42:00.345 05:58:00 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:42:00.345 05:58:00 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:42:00.345 05:58:00 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:42:00.345 05:58:00 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:42:00.345 05:58:00 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:42:00.345 05:58:00 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:42:00.345 05:58:00 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:42:00.345 05:58:00 nvmf_dif -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:42:00.345 05:58:00 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:42:00.345 05:58:00 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:42:00.345 05:58:00 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:42:00.345 05:58:00 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:42:00.345 05:58:00 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:42:00.345 05:58:00 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:42:00.345 05:58:00 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:42:00.703 05:58:00 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:42:00.703 05:58:00 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:42:00.703 05:58:00 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:42:00.703 05:58:00 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:42:00.703 05:58:00 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:42:00.703 05:58:00 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:42:00.703 05:58:00 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:42:00.703 05:58:00 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:42:00.703 05:58:00 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:42:00.703 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:42:00.703 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.297 ms 00:42:00.703 00:42:00.703 --- 10.0.0.2 ping statistics --- 00:42:00.703 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:00.703 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms 00:42:00.703 05:58:00 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:42:00.703 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
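The ipts call at nvmf/common.sh@287 expands, per the @790 line in the trace, into an iptables rule carrying a tagging comment. A plausible reconstruction of the helper, inferred only from that expansion:

# tag every rule we insert so teardown can drop exactly these rules again
# (the teardown traced at 05:57:52 runs: iptables-save | grep -v SPDK_NVMF | iptables-restore)
ipts() {
    iptables "$@" -m comment --comment "SPDK_NVMF:$*"
}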
00:42:00.703 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.132 ms 00:42:00.703 00:42:00.703 --- 10.0.0.1 ping statistics --- 00:42:00.703 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:00.703 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:42:00.703 05:58:00 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:00.703 05:58:00 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:42:00.703 05:58:00 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:42:00.703 05:58:00 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:42:03.252 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:42:03.252 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:42:03.252 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:42:03.252 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:42:03.252 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:42:03.252 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:42:03.252 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:42:03.252 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:42:03.252 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:42:03.252 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:42:03.252 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:42:03.252 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:42:03.252 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:42:03.252 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:42:03.252 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:42:03.252 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:42:03.252 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:42:03.510 05:58:03 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:03.510 05:58:03 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:42:03.510 05:58:03 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:42:03.510 05:58:03 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:03.510 05:58:03 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:42:03.510 05:58:03 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:42:03.510 05:58:03 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:42:03.510 05:58:03 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:42:03.510 05:58:03 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:42:03.510 05:58:03 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:42:03.510 05:58:03 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:03.510 05:58:03 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=634373 00:42:03.510 05:58:03 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:42:03.510 05:58:03 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 634373 00:42:03.510 05:58:03 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 634373 ']' 00:42:03.510 05:58:03 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:03.510 05:58:03 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:03.510 05:58:03 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:42:03.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:03.510 05:58:03 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:03.510 05:58:03 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:03.510 [2024-12-13 05:58:03.470741] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:42:03.510 [2024-12-13 05:58:03.470786] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:03.769 [2024-12-13 05:58:03.549152] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:03.769 [2024-12-13 05:58:03.571137] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:03.769 [2024-12-13 05:58:03.571174] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:03.769 [2024-12-13 05:58:03.571182] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:03.769 [2024-12-13 05:58:03.571192] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:03.769 [2024-12-13 05:58:03.571197] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:42:03.769 [2024-12-13 05:58:03.571687] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:42:03.769 05:58:03 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:03.769 05:58:03 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:42:03.769 05:58:03 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:42:03.769 05:58:03 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:03.769 05:58:03 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:03.769 05:58:03 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:03.769 05:58:03 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:42:03.769 05:58:03 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:42:03.769 05:58:03 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:03.769 05:58:03 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:03.769 [2024-12-13 05:58:03.710880] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:03.769 05:58:03 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:03.769 05:58:03 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:42:03.769 05:58:03 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:42:03.769 05:58:03 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:03.769 05:58:03 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:03.769 ************************************ 00:42:03.769 START TEST fio_dif_1_default 00:42:03.769 ************************************ 00:42:03.769 05:58:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:42:03.769 05:58:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:42:03.769 05:58:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:42:03.769 05:58:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:42:03.769 05:58:03 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@31 -- # create_subsystem 0 00:42:03.769 05:58:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:42:03.769 05:58:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:42:03.769 05:58:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:03.769 05:58:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:42:03.769 bdev_null0 00:42:03.769 05:58:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:03.769 05:58:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:42:03.769 05:58:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:03.769 05:58:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:42:03.769 05:58:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:03.769 05:58:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:42:03.769 05:58:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:03.769 05:58:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:42:03.769 05:58:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:03.769 05:58:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:42:04.027 05:58:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:04.027 05:58:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:42:04.028 [2024-12-13 05:58:03.787248] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:04.028 05:58:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:04.028 05:58:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:42:04.028 05:58:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:42:04.028 05:58:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:42:04.028 05:58:03 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:42:04.028 05:58:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:04.028 05:58:03 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:42:04.028 05:58:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:04.028 05:58:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:42:04.028 05:58:03 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:04.028 05:58:03 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:04.028 { 00:42:04.028 "params": { 00:42:04.028 "name": "Nvme$subsystem", 00:42:04.028 "trtype": "$TEST_TRANSPORT", 00:42:04.028 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:04.028 "adrfam": "ipv4", 00:42:04.028 "trsvcid": "$NVMF_PORT", 00:42:04.028 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:04.028 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:42:04.028 "hdgst": ${hdgst:-false}, 00:42:04.028 "ddgst": ${ddgst:-false} 00:42:04.028 }, 00:42:04.028 "method": "bdev_nvme_attach_controller" 00:42:04.028 } 00:42:04.028 EOF 00:42:04.028 )") 00:42:04.028 05:58:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:42:04.028 05:58:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:42:04.028 05:58:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:42:04.028 05:58:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:42:04.028 05:58:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:42:04.028 05:58:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:04.028 05:58:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:42:04.028 05:58:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:42:04.028 05:58:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:42:04.028 05:58:03 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:42:04.028 05:58:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:42:04.028 05:58:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:42:04.028 05:58:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:04.028 05:58:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:42:04.028 05:58:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:42:04.028 05:58:03 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
00:42:04.028 05:58:03 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:42:04.028 05:58:03 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:42:04.028 "params": { 00:42:04.028 "name": "Nvme0", 00:42:04.028 "trtype": "tcp", 00:42:04.028 "traddr": "10.0.0.2", 00:42:04.028 "adrfam": "ipv4", 00:42:04.028 "trsvcid": "4420", 00:42:04.028 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:04.028 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:04.028 "hdgst": false, 00:42:04.028 "ddgst": false 00:42:04.028 }, 00:42:04.028 "method": "bdev_nvme_attach_controller" 00:42:04.028 }' 00:42:04.028 05:58:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:42:04.028 05:58:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:42:04.028 05:58:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:42:04.028 05:58:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:04.028 05:58:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:42:04.028 05:58:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:42:04.028 05:58:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:42:04.028 05:58:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:42:04.028 05:58:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:42:04.028 05:58:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:04.286 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:42:04.286 fio-3.35 00:42:04.286 Starting 1 thread 00:42:16.479 00:42:16.479 filename0: (groupid=0, jobs=1): err= 0: pid=634653: Fri Dec 13 05:58:14 2024 00:42:16.479 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10014msec) 00:42:16.480 slat (nsec): min=5587, max=27001, avg=6230.05, stdev=912.82 00:42:16.480 clat (usec): min=40817, max=46291, avg=41021.30, stdev=363.05 00:42:16.480 lat (usec): min=40823, max=46318, avg=41027.53, stdev=363.54 00:42:16.480 clat percentiles (usec): 00:42:16.480 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:42:16.480 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:42:16.480 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:42:16.480 | 99.00th=[42206], 99.50th=[42206], 99.90th=[46400], 99.95th=[46400], 00:42:16.480 | 99.99th=[46400] 00:42:16.480 bw ( KiB/s): min= 384, max= 416, per=99.52%, avg=388.80, stdev=11.72, samples=20 00:42:16.480 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:42:16.480 lat (msec) : 50=100.00% 00:42:16.480 cpu : usr=92.23%, sys=7.52%, ctx=12, majf=0, minf=0 00:42:16.480 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:16.480 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:16.480 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:16.480 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:16.480 latency : target=0, window=0, percentile=100.00%, depth=4 00:42:16.480 00:42:16.480 Run status group 0 (all jobs): 
00:42:16.480 READ: bw=390KiB/s (399kB/s), 390KiB/s-390KiB/s (399kB/s-399kB/s), io=3904KiB (3998kB), run=10014-10014msec 00:42:16.480 05:58:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:42:16.480 05:58:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:42:16.480 05:58:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:42:16.480 05:58:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:42:16.480 05:58:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:42:16.480 05:58:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:42:16.480 05:58:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:16.480 05:58:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:42:16.480 05:58:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:16.480 05:58:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:42:16.480 05:58:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:16.480 05:58:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:42:16.480 05:58:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:16.480 00:42:16.480 real 0m11.172s 00:42:16.480 user 0m15.673s 00:42:16.480 sys 0m1.041s 00:42:16.480 05:58:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:16.480 05:58:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:42:16.480 ************************************ 00:42:16.480 END TEST fio_dif_1_default 00:42:16.480 ************************************ 00:42:16.480 05:58:14 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:42:16.480 05:58:14 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:42:16.480 05:58:14 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:16.480 05:58:14 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:16.480 ************************************ 00:42:16.480 START TEST fio_dif_1_multi_subsystems 00:42:16.480 ************************************ 00:42:16.480 05:58:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:42:16.480 05:58:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:42:16.480 05:58:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:42:16.480 05:58:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:42:16.480 05:58:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:42:16.480 05:58:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:42:16.480 05:58:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:42:16.480 05:58:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:42:16.480 05:58:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:16.480 05:58:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:16.480 bdev_null0 00:42:16.480 05:58:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:42:16.480 05:58:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:42:16.480 05:58:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:16.480 05:58:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:16.480 05:58:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:16.480 05:58:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:42:16.480 05:58:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:16.480 05:58:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:16.480 05:58:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:16.480 05:58:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:42:16.480 05:58:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:16.480 05:58:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:16.480 [2024-12-13 05:58:15.030118] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:16.480 05:58:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:16.480 05:58:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:42:16.480 05:58:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:42:16.480 05:58:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:42:16.480 05:58:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:42:16.480 05:58:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:16.480 05:58:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:16.480 bdev_null1 00:42:16.480 05:58:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:16.480 05:58:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:42:16.480 05:58:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:16.480 05:58:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:16.480 05:58:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:16.480 05:58:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:42:16.480 05:58:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:16.480 05:58:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:16.480 05:58:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:16.480 05:58:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 
10.0.0.2 -s 4420 00:42:16.480 05:58:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:16.480 05:58:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:16.480 05:58:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:16.480 05:58:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:42:16.480 05:58:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:42:16.480 05:58:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:42:16.480 05:58:15 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:42:16.480 05:58:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:16.480 05:58:15 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:42:16.480 05:58:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:16.480 05:58:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:42:16.480 05:58:15 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:16.480 05:58:15 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:16.480 { 00:42:16.480 "params": { 00:42:16.480 "name": "Nvme$subsystem", 00:42:16.480 "trtype": "$TEST_TRANSPORT", 00:42:16.480 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:16.480 "adrfam": "ipv4", 00:42:16.480 "trsvcid": "$NVMF_PORT", 00:42:16.480 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:16.480 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:16.480 "hdgst": ${hdgst:-false}, 00:42:16.480 "ddgst": ${ddgst:-false} 00:42:16.480 }, 00:42:16.480 "method": "bdev_nvme_attach_controller" 00:42:16.480 } 00:42:16.480 EOF 00:42:16.480 )") 00:42:16.480 05:58:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:42:16.480 05:58:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:42:16.480 05:58:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:42:16.480 05:58:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:42:16.480 05:58:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:42:16.480 05:58:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:16.480 05:58:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:42:16.480 05:58:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:42:16.480 05:58:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:42:16.480 05:58:15 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:42:16.480 05:58:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:42:16.480 05:58:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:16.480 05:58:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:42:16.480 05:58:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:42:16.480 05:58:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:42:16.481 05:58:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:42:16.481 05:58:15 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:16.481 05:58:15 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:16.481 { 00:42:16.481 "params": { 00:42:16.481 "name": "Nvme$subsystem", 00:42:16.481 "trtype": "$TEST_TRANSPORT", 00:42:16.481 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:16.481 "adrfam": "ipv4", 00:42:16.481 "trsvcid": "$NVMF_PORT", 00:42:16.481 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:16.481 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:16.481 "hdgst": ${hdgst:-false}, 00:42:16.481 "ddgst": ${ddgst:-false} 00:42:16.481 }, 00:42:16.481 "method": "bdev_nvme_attach_controller" 00:42:16.481 } 00:42:16.481 EOF 00:42:16.481 )") 00:42:16.481 05:58:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:42:16.481 05:58:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:42:16.481 05:58:15 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:42:16.481 05:58:15 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 00:42:16.481 05:58:15 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:42:16.481 05:58:15 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:42:16.481 "params": { 00:42:16.481 "name": "Nvme0", 00:42:16.481 "trtype": "tcp", 00:42:16.481 "traddr": "10.0.0.2", 00:42:16.481 "adrfam": "ipv4", 00:42:16.481 "trsvcid": "4420", 00:42:16.481 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:16.481 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:16.481 "hdgst": false, 00:42:16.481 "ddgst": false 00:42:16.481 }, 00:42:16.481 "method": "bdev_nvme_attach_controller" 00:42:16.481 },{ 00:42:16.481 "params": { 00:42:16.481 "name": "Nvme1", 00:42:16.481 "trtype": "tcp", 00:42:16.481 "traddr": "10.0.0.2", 00:42:16.481 "adrfam": "ipv4", 00:42:16.481 "trsvcid": "4420", 00:42:16.481 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:42:16.481 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:42:16.481 "hdgst": false, 00:42:16.481 "ddgst": false 00:42:16.481 }, 00:42:16.481 "method": "bdev_nvme_attach_controller" 00:42:16.481 }' 00:42:16.481 05:58:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:42:16.481 05:58:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:42:16.481 05:58:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:42:16.481 05:58:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:16.481 05:58:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:42:16.481 05:58:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:42:16.481 05:58:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 
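Right after the two-controller config is printed, the harness probes whether the fio plugin was built against a sanitizer runtime: `ldd` lists the plugin's shared-library dependencies, `grep libasan` / `grep libclang_rt.asan` pick out a sanitizer entry, and `awk '{print $3}'` extracts the resolved path (empty on this builder, hence the bare `asan_lib=` and the `[[ -n '' ]]` checks). Whatever is found must be preloaded ahead of the plugin itself. A condensed sketch of that logic, using this workspace's paths:

```bash
#!/usr/bin/env bash
# Condensed form of the sanitizer-preload probe visible in the xtrace.
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
asan_lib=
for sanitizer in libasan libclang_rt.asan; do
  # Third ldd column is the resolved library path; empty if not linked.
  asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
  [[ -n $asan_lib ]] && break
done
# The sanitizer runtime (when present) must precede the plugin in the
# preload list; with no sanitizer this leaves the leading-space form
# LD_PRELOAD=' /var/.../spdk_bdev' seen in the log.
LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
  --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
```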
00:42:16.481 05:58:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:42:16.481 05:58:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:42:16.481 05:58:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:16.481 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:42:16.481 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:42:16.481 fio-3.35 00:42:16.481 Starting 2 threads 00:42:26.446 00:42:26.446 filename0: (groupid=0, jobs=1): err= 0: pid=636572: Fri Dec 13 05:58:26 2024 00:42:26.446 read: IOPS=191, BW=766KiB/s (784kB/s)(7680KiB/10025msec) 00:42:26.446 slat (nsec): min=6025, max=25477, avg=7359.89, stdev=2070.60 00:42:26.446 clat (usec): min=378, max=42569, avg=20863.43, stdev=20353.91 00:42:26.446 lat (usec): min=384, max=42576, avg=20870.79, stdev=20353.35 00:42:26.446 clat percentiles (usec): 00:42:26.446 | 1.00th=[ 396], 5.00th=[ 408], 10.00th=[ 416], 20.00th=[ 437], 00:42:26.446 | 30.00th=[ 545], 40.00th=[ 603], 50.00th=[ 1057], 60.00th=[40633], 00:42:26.446 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[41681], 00:42:26.446 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:42:26.446 | 99.99th=[42730] 00:42:26.446 bw ( KiB/s): min= 704, max= 832, per=49.68%, avg=766.40, stdev=30.22, samples=20 00:42:26.446 iops : min= 176, max= 208, avg=191.60, stdev= 7.56, samples=20 00:42:26.446 lat (usec) : 500=28.70%, 750=19.01%, 1000=1.88% 00:42:26.446 lat (msec) : 2=0.42%, 50=50.00% 00:42:26.446 cpu : usr=96.58%, sys=3.17%, ctx=8, majf=0, minf=9 00:42:26.446 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:26.446 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:26.446 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:26.446 issued rwts: total=1920,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:26.446 latency : target=0, window=0, percentile=100.00%, depth=4 00:42:26.446 filename1: (groupid=0, jobs=1): err= 0: pid=636573: Fri Dec 13 05:58:26 2024 00:42:26.446 read: IOPS=194, BW=776KiB/s (795kB/s)(7776KiB/10018msec) 00:42:26.446 slat (nsec): min=6020, max=26891, avg=7352.53, stdev=2083.28 00:42:26.446 clat (usec): min=378, max=42590, avg=20591.20, stdev=20413.68 00:42:26.446 lat (usec): min=384, max=42597, avg=20598.55, stdev=20413.15 00:42:26.446 clat percentiles (usec): 00:42:26.446 | 1.00th=[ 392], 5.00th=[ 400], 10.00th=[ 408], 20.00th=[ 416], 00:42:26.446 | 30.00th=[ 445], 40.00th=[ 611], 50.00th=[ 988], 60.00th=[40633], 00:42:26.446 | 70.00th=[41157], 80.00th=[41681], 90.00th=[41681], 95.00th=[42206], 00:42:26.446 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:42:26.446 | 99.99th=[42730] 00:42:26.446 bw ( KiB/s): min= 704, max= 896, per=50.33%, avg=776.00, stdev=45.11, samples=20 00:42:26.446 iops : min= 176, max= 224, avg=194.00, stdev=11.28, samples=20 00:42:26.446 lat (usec) : 500=31.38%, 750=14.92%, 1000=3.86% 00:42:26.446 lat (msec) : 2=0.67%, 50=49.18% 00:42:26.446 cpu : usr=96.79%, sys=2.95%, ctx=9, majf=0, minf=0 00:42:26.446 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:26.446 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:26.446 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:26.446 issued rwts: total=1944,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:26.446 latency : target=0, window=0, percentile=100.00%, depth=4 00:42:26.446 00:42:26.446 Run status group 0 (all jobs): 00:42:26.446 READ: bw=1542KiB/s (1579kB/s), 766KiB/s-776KiB/s (784kB/s-795kB/s), io=15.1MiB (15.8MB), run=10018-10025msec 00:42:26.446 05:58:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:42:26.446 05:58:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:42:26.446 05:58:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:42:26.446 05:58:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:42:26.446 05:58:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:42:26.446 05:58:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:42:26.446 05:58:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:26.446 05:58:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:26.446 05:58:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:26.446 05:58:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:42:26.446 05:58:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:26.446 05:58:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:26.446 05:58:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:26.446 05:58:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:42:26.447 05:58:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:42:26.447 05:58:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:42:26.447 05:58:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:42:26.447 05:58:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:26.447 05:58:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:26.447 05:58:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:26.447 05:58:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:42:26.447 05:58:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:26.447 05:58:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:26.447 05:58:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:26.447 00:42:26.447 real 0m11.247s 00:42:26.447 user 0m26.255s 00:42:26.447 sys 0m0.923s 00:42:26.447 05:58:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:26.447 05:58:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:26.447 ************************************ 00:42:26.447 END TEST fio_dif_1_multi_subsystems 00:42:26.447 ************************************ 00:42:26.447 05:58:26 nvmf_dif -- 
target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:42:26.447 05:58:26 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:42:26.447 05:58:26 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:26.447 05:58:26 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:26.447 ************************************ 00:42:26.447 START TEST fio_dif_rand_params 00:42:26.447 ************************************ 00:42:26.447 05:58:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:42:26.447 05:58:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:42:26.447 05:58:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:42:26.447 05:58:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:42:26.447 05:58:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:42:26.447 05:58:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:42:26.447 05:58:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:42:26.447 05:58:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:42:26.447 05:58:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:42:26.447 05:58:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:42:26.447 05:58:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:42:26.447 05:58:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:42:26.447 05:58:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:42:26.447 05:58:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:42:26.447 05:58:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:26.447 05:58:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:26.447 bdev_null0 00:42:26.447 05:58:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:26.447 05:58:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:42:26.447 05:58:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:26.447 05:58:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:26.447 05:58:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:26.447 05:58:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:42:26.447 05:58:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:26.447 05:58:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:26.447 05:58:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:26.447 05:58:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:42:26.447 05:58:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:26.447 05:58:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:26.447 [2024-12-13 05:58:26.348848] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 
port 4420 *** 00:42:26.447 05:58:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:26.447 05:58:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:42:26.447 05:58:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:42:26.447 05:58:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:42:26.447 05:58:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:42:26.447 05:58:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:26.447 05:58:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:42:26.447 05:58:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:26.447 05:58:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:42:26.447 05:58:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:26.447 05:58:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:42:26.447 05:58:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:42:26.447 05:58:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:26.447 { 00:42:26.447 "params": { 00:42:26.447 "name": "Nvme$subsystem", 00:42:26.447 "trtype": "$TEST_TRANSPORT", 00:42:26.447 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:26.447 "adrfam": "ipv4", 00:42:26.447 "trsvcid": "$NVMF_PORT", 00:42:26.447 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:26.447 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:26.447 "hdgst": ${hdgst:-false}, 00:42:26.447 "ddgst": ${ddgst:-false} 00:42:26.447 }, 00:42:26.447 "method": "bdev_nvme_attach_controller" 00:42:26.447 } 00:42:26.447 EOF 00:42:26.447 )") 00:42:26.447 05:58:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:42:26.447 05:58:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:42:26.447 05:58:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:42:26.447 05:58:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:26.447 05:58:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:42:26.447 05:58:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:42:26.447 05:58:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:42:26.447 05:58:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:42:26.447 05:58:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:42:26.447 05:58:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:26.447 05:58:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:42:26.447 05:58:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:42:26.447 05:58:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:42:26.447 05:58:26 nvmf_dif.fio_dif_rand_params -- 
nvmf/common.sh@584 -- # jq . 00:42:26.447 05:58:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:42:26.447 05:58:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:42:26.447 "params": { 00:42:26.447 "name": "Nvme0", 00:42:26.447 "trtype": "tcp", 00:42:26.447 "traddr": "10.0.0.2", 00:42:26.447 "adrfam": "ipv4", 00:42:26.447 "trsvcid": "4420", 00:42:26.447 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:26.447 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:26.447 "hdgst": false, 00:42:26.447 "ddgst": false 00:42:26.447 }, 00:42:26.447 "method": "bdev_nvme_attach_controller" 00:42:26.447 }' 00:42:26.447 05:58:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:42:26.447 05:58:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:42:26.447 05:58:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:42:26.447 05:58:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:26.447 05:58:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:42:26.447 05:58:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:42:26.447 05:58:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:42:26.447 05:58:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:42:26.447 05:58:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:42:26.447 05:58:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:26.704 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:42:26.704 ... 
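The `filename0: (g=0): rw=randread, bs=(R) 128KiB...` header and the `...` continuation line above are fio echoing back the job file that gen_fio_conf hands it on /dev/fd/61, with the values set for this NULL_DIF=3 pass a few lines earlier (`bs=128k`, `numjobs=3`, `iodepth=3`, `runtime=5`). A hypothetical equivalent job file, written as a heredoc a script could feed to fio; the bdev name `Nvme0n1` is the conventional namespace-1 name for the `Nvme0` controller attached above, not something printed verbatim in this log:

```bash
# Hypothetical reconstruction of the job file behind the header above.
cat <<'EOF' > /tmp/dif_rand_params.fio
[global]
thread=1
ioengine=spdk_bdev
rw=randread
bs=128k
iodepth=3
numjobs=3
runtime=5
time_based=1

[filename0]
filename=Nvme0n1
EOF
```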
00:42:26.704 fio-3.35 00:42:26.704 Starting 3 threads 00:42:33.260 00:42:33.260 filename0: (groupid=0, jobs=1): err= 0: pid=638480: Fri Dec 13 05:58:32 2024 00:42:33.260 read: IOPS=309, BW=38.7MiB/s (40.6MB/s)(195MiB/5042msec) 00:42:33.260 slat (nsec): min=6124, max=57569, avg=19228.71, stdev=6526.24 00:42:33.260 clat (usec): min=3399, max=88598, avg=9636.45, stdev=7053.33 00:42:33.260 lat (usec): min=3407, max=88614, avg=9655.67, stdev=7052.62 00:42:33.260 clat percentiles (usec): 00:42:33.260 | 1.00th=[ 3982], 5.00th=[ 5800], 10.00th=[ 6325], 20.00th=[ 7308], 00:42:33.260 | 30.00th=[ 7898], 40.00th=[ 8291], 50.00th=[ 8717], 60.00th=[ 8979], 00:42:33.260 | 70.00th=[ 9372], 80.00th=[ 9765], 90.00th=[10421], 95.00th=[10945], 00:42:33.260 | 99.00th=[49021], 99.50th=[49546], 99.90th=[50070], 99.95th=[88605], 00:42:33.260 | 99.99th=[88605] 00:42:33.260 bw ( KiB/s): min=25088, max=47616, per=33.97%, avg=39936.00, stdev=7611.43, samples=10 00:42:33.260 iops : min= 196, max= 372, avg=312.00, stdev=59.46, samples=10 00:42:33.260 lat (msec) : 4=1.02%, 10=82.39%, 20=13.64%, 50=2.69%, 100=0.26% 00:42:33.260 cpu : usr=94.68%, sys=4.46%, ctx=112, majf=0, minf=110 00:42:33.260 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:33.260 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:33.260 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:33.260 issued rwts: total=1562,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:33.260 latency : target=0, window=0, percentile=100.00%, depth=3 00:42:33.260 filename0: (groupid=0, jobs=1): err= 0: pid=638481: Fri Dec 13 05:58:32 2024 00:42:33.260 read: IOPS=295, BW=36.9MiB/s (38.7MB/s)(186MiB/5045msec) 00:42:33.260 slat (nsec): min=6276, max=54600, avg=16418.77, stdev=7958.55 00:42:33.260 clat (usec): min=3764, max=52318, avg=10117.54, stdev=6547.31 00:42:33.260 lat (usec): min=3770, max=52347, avg=10133.96, stdev=6547.72 00:42:33.260 clat percentiles (usec): 00:42:33.260 | 1.00th=[ 4686], 5.00th=[ 5735], 10.00th=[ 6325], 20.00th=[ 7177], 00:42:33.260 | 30.00th=[ 8291], 40.00th=[ 8979], 50.00th=[ 9503], 60.00th=[10028], 00:42:33.260 | 70.00th=[10421], 80.00th=[10945], 90.00th=[11600], 95.00th=[12125], 00:42:33.260 | 99.00th=[49546], 99.50th=[50594], 99.90th=[51643], 99.95th=[52167], 00:42:33.260 | 99.99th=[52167] 00:42:33.260 bw ( KiB/s): min=33024, max=44800, per=32.38%, avg=38067.20, stdev=3842.94, samples=10 00:42:33.260 iops : min= 258, max= 350, avg=297.40, stdev=30.02, samples=10 00:42:33.260 lat (msec) : 4=0.27%, 10=60.38%, 20=36.80%, 50=1.81%, 100=0.74% 00:42:33.260 cpu : usr=96.65%, sys=3.03%, ctx=7, majf=0, minf=60 00:42:33.260 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:33.260 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:33.260 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:33.260 issued rwts: total=1489,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:33.260 latency : target=0, window=0, percentile=100.00%, depth=3 00:42:33.260 filename0: (groupid=0, jobs=1): err= 0: pid=638482: Fri Dec 13 05:58:32 2024 00:42:33.260 read: IOPS=313, BW=39.2MiB/s (41.1MB/s)(198MiB/5045msec) 00:42:33.260 slat (nsec): min=6336, max=47818, avg=16485.04, stdev=7853.80 00:42:33.260 clat (usec): min=2941, max=53444, avg=9521.96, stdev=5914.79 00:42:33.260 lat (usec): min=2948, max=53470, avg=9538.44, stdev=5915.19 00:42:33.260 clat percentiles (usec): 00:42:33.260 | 1.00th=[ 3490], 5.00th=[ 5669], 10.00th=[ 
6259], 20.00th=[ 7046], 00:42:33.260 | 30.00th=[ 7963], 40.00th=[ 8455], 50.00th=[ 8979], 60.00th=[ 9372], 00:42:33.260 | 70.00th=[ 9765], 80.00th=[10290], 90.00th=[11207], 95.00th=[11994], 00:42:33.260 | 99.00th=[49021], 99.50th=[50594], 99.90th=[51643], 99.95th=[53216], 00:42:33.260 | 99.99th=[53216] 00:42:33.260 bw ( KiB/s): min=35840, max=47360, per=34.41%, avg=40448.00, stdev=3282.84, samples=10 00:42:33.260 iops : min= 280, max= 370, avg=316.00, stdev=25.65, samples=10 00:42:33.260 lat (msec) : 4=1.90%, 10=72.25%, 20=23.83%, 50=1.45%, 100=0.57% 00:42:33.260 cpu : usr=96.61%, sys=3.07%, ctx=6, majf=0, minf=133 00:42:33.260 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:33.260 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:33.261 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:33.261 issued rwts: total=1582,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:33.261 latency : target=0, window=0, percentile=100.00%, depth=3 00:42:33.261 00:42:33.261 Run status group 0 (all jobs): 00:42:33.261 READ: bw=115MiB/s (120MB/s), 36.9MiB/s-39.2MiB/s (38.7MB/s-41.1MB/s), io=579MiB (607MB), run=5042-5045msec 00:42:33.261 05:58:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:42:33.261 05:58:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:42:33.261 05:58:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:42:33.261 05:58:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:42:33.261 05:58:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:42:33.261 05:58:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:42:33.261 05:58:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:33.261 05:58:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:33.261 05:58:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:33.261 05:58:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:42:33.261 05:58:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:33.261 05:58:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:33.261 05:58:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:33.261 05:58:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:42:33.261 05:58:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:42:33.261 05:58:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:42:33.261 05:58:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:42:33.261 05:58:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:42:33.261 05:58:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:42:33.261 05:58:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:42:33.261 05:58:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:42:33.261 05:58:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:42:33.261 05:58:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:42:33.261 05:58:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:42:33.261 05:58:32 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:42:33.261 05:58:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:33.261 05:58:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:33.261 bdev_null0 00:42:33.261 05:58:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:33.261 05:58:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:42:33.261 05:58:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:33.261 05:58:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:33.261 05:58:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:33.261 05:58:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:42:33.261 05:58:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:33.261 05:58:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:33.261 05:58:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:33.261 05:58:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:42:33.261 05:58:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:33.261 05:58:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:33.261 [2024-12-13 05:58:32.678461] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:33.261 05:58:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:33.261 05:58:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:42:33.261 05:58:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:42:33.261 05:58:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:42:33.261 05:58:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:42:33.261 05:58:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:33.261 05:58:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:33.261 bdev_null1 00:42:33.261 05:58:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:33.261 05:58:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:42:33.261 05:58:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:33.261 05:58:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:33.261 05:58:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:33.261 05:58:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:42:33.261 05:58:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:33.261 05:58:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:33.261 05:58:32 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:33.261 05:58:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:42:33.261 05:58:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:33.261 05:58:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:33.261 05:58:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:33.261 05:58:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:42:33.261 05:58:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:42:33.261 05:58:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:42:33.261 05:58:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:42:33.261 05:58:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:33.261 05:58:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:33.261 bdev_null2 00:42:33.261 05:58:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:33.261 05:58:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:42:33.261 05:58:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:33.261 05:58:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:33.261 05:58:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:33.261 05:58:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:42:33.261 05:58:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:33.261 05:58:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:33.261 05:58:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:33.261 05:58:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:42:33.261 05:58:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:33.261 05:58:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:33.261 05:58:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:33.261 05:58:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:42:33.261 05:58:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:42:33.261 05:58:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:42:33.261 05:58:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:42:33.261 05:58:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:33.261 05:58:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:42:33.261 05:58:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:42:33.261 05:58:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:33.261 05:58:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:33.261 05:58:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:33.261 { 00:42:33.261 "params": { 00:42:33.261 "name": "Nvme$subsystem", 00:42:33.261 "trtype": "$TEST_TRANSPORT", 00:42:33.261 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:33.261 "adrfam": "ipv4", 00:42:33.261 "trsvcid": "$NVMF_PORT", 00:42:33.261 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:33.261 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:33.261 "hdgst": ${hdgst:-false}, 00:42:33.261 "ddgst": ${ddgst:-false} 00:42:33.261 }, 00:42:33.261 "method": "bdev_nvme_attach_controller" 00:42:33.261 } 00:42:33.261 EOF 00:42:33.261 )") 00:42:33.261 05:58:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:42:33.261 05:58:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:42:33.261 05:58:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:42:33.261 05:58:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:42:33.261 05:58:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:42:33.261 05:58:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:33.261 05:58:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:42:33.261 05:58:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:42:33.261 05:58:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:42:33.261 05:58:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:42:33.261 05:58:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:42:33.261 05:58:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:33.261 05:58:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:42:33.261 05:58:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:42:33.261 05:58:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:42:33.261 05:58:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:42:33.261 05:58:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:33.261 05:58:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:33.261 { 00:42:33.261 "params": { 00:42:33.261 "name": "Nvme$subsystem", 00:42:33.261 "trtype": "$TEST_TRANSPORT", 00:42:33.262 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:33.262 "adrfam": "ipv4", 00:42:33.262 "trsvcid": "$NVMF_PORT", 00:42:33.262 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:33.262 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:33.262 "hdgst": ${hdgst:-false}, 00:42:33.262 "ddgst": ${ddgst:-false} 00:42:33.262 }, 00:42:33.262 "method": "bdev_nvme_attach_controller" 00:42:33.262 } 00:42:33.262 EOF 00:42:33.262 )") 00:42:33.262 05:58:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:42:33.262 05:58:32 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@72 -- # (( file <= files )) 00:42:33.262 05:58:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:42:33.262 05:58:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:42:33.262 05:58:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:42:33.262 05:58:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:42:33.262 05:58:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:33.262 05:58:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:33.262 { 00:42:33.262 "params": { 00:42:33.262 "name": "Nvme$subsystem", 00:42:33.262 "trtype": "$TEST_TRANSPORT", 00:42:33.262 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:33.262 "adrfam": "ipv4", 00:42:33.262 "trsvcid": "$NVMF_PORT", 00:42:33.262 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:33.262 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:33.262 "hdgst": ${hdgst:-false}, 00:42:33.262 "ddgst": ${ddgst:-false} 00:42:33.262 }, 00:42:33.262 "method": "bdev_nvme_attach_controller" 00:42:33.262 } 00:42:33.262 EOF 00:42:33.262 )") 00:42:33.262 05:58:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:42:33.262 05:58:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:42:33.262 05:58:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:42:33.262 05:58:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:42:33.262 "params": { 00:42:33.262 "name": "Nvme0", 00:42:33.262 "trtype": "tcp", 00:42:33.262 "traddr": "10.0.0.2", 00:42:33.262 "adrfam": "ipv4", 00:42:33.262 "trsvcid": "4420", 00:42:33.262 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:33.262 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:33.262 "hdgst": false, 00:42:33.262 "ddgst": false 00:42:33.262 }, 00:42:33.262 "method": "bdev_nvme_attach_controller" 00:42:33.262 },{ 00:42:33.262 "params": { 00:42:33.262 "name": "Nvme1", 00:42:33.262 "trtype": "tcp", 00:42:33.262 "traddr": "10.0.0.2", 00:42:33.262 "adrfam": "ipv4", 00:42:33.262 "trsvcid": "4420", 00:42:33.262 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:42:33.262 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:42:33.262 "hdgst": false, 00:42:33.262 "ddgst": false 00:42:33.262 }, 00:42:33.262 "method": "bdev_nvme_attach_controller" 00:42:33.262 },{ 00:42:33.262 "params": { 00:42:33.262 "name": "Nvme2", 00:42:33.262 "trtype": "tcp", 00:42:33.262 "traddr": "10.0.0.2", 00:42:33.262 "adrfam": "ipv4", 00:42:33.262 "trsvcid": "4420", 00:42:33.262 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:42:33.262 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:42:33.262 "hdgst": false, 00:42:33.262 "ddgst": false 00:42:33.262 }, 00:42:33.262 "method": "bdev_nvme_attach_controller" 00:42:33.262 }' 00:42:33.262 05:58:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:42:33.262 05:58:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:42:33.262 05:58:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:42:33.262 05:58:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:33.262 05:58:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:42:33.262 05:58:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:42:33.262 05:58:32 nvmf_dif.fio_dif_rand_params 
-- common/autotest_common.sh@1349 -- # asan_lib= 00:42:33.262 05:58:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:42:33.262 05:58:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:42:33.262 05:58:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:33.262 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:42:33.262 ... 00:42:33.262 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:42:33.262 ... 00:42:33.262 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:42:33.262 ... 00:42:33.262 fio-3.35 00:42:33.262 Starting 24 threads 00:42:45.451 00:42:45.451 filename0: (groupid=0, jobs=1): err= 0: pid=639515: Fri Dec 13 05:58:44 2024 00:42:45.451 read: IOPS=66, BW=265KiB/s (271kB/s)(2688KiB/10142msec) 00:42:45.451 slat (nsec): min=7567, max=80491, avg=11600.12, stdev=7198.95 00:42:45.451 clat (msec): min=80, max=423, avg=240.64, stdev=53.04 00:42:45.451 lat (msec): min=80, max=423, avg=240.65, stdev=53.04 00:42:45.451 clat percentiles (msec): 00:42:45.451 | 1.00th=[ 81], 5.00th=[ 148], 10.00th=[ 178], 20.00th=[ 226], 00:42:45.451 | 30.00th=[ 230], 40.00th=[ 232], 50.00th=[ 234], 60.00th=[ 236], 00:42:45.451 | 70.00th=[ 262], 80.00th=[ 271], 90.00th=[ 288], 95.00th=[ 347], 00:42:45.451 | 99.00th=[ 397], 99.50th=[ 426], 99.90th=[ 426], 99.95th=[ 426], 00:42:45.451 | 99.99th=[ 426] 00:42:45.451 bw ( KiB/s): min= 176, max= 384, per=4.46%, avg=262.40, stdev=50.44, samples=20 00:42:45.451 iops : min= 44, max= 96, avg=65.60, stdev=12.61, samples=20 00:42:45.451 lat (msec) : 100=2.38%, 250=61.90%, 500=35.71% 00:42:45.451 cpu : usr=98.77%, sys=0.83%, ctx=13, majf=0, minf=42 00:42:45.451 IO depths : 1=0.4%, 2=1.6%, 4=9.4%, 8=76.2%, 16=12.4%, 32=0.0%, >=64=0.0% 00:42:45.451 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:45.451 complete : 0=0.0%, 4=89.6%, 8=5.3%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:45.451 issued rwts: total=672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:45.451 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:45.451 filename0: (groupid=0, jobs=1): err= 0: pid=639516: Fri Dec 13 05:58:44 2024 00:42:45.451 read: IOPS=44, BW=177KiB/s (182kB/s)(1792KiB/10106msec) 00:42:45.451 slat (nsec): min=7523, max=26095, avg=9704.15, stdev=2560.01 00:42:45.451 clat (msec): min=227, max=654, avg=360.83, stdev=70.68 00:42:45.451 lat (msec): min=227, max=654, avg=360.84, stdev=70.68 00:42:45.451 clat percentiles (msec): 00:42:45.451 | 1.00th=[ 228], 5.00th=[ 234], 10.00th=[ 264], 20.00th=[ 317], 00:42:45.451 | 30.00th=[ 338], 40.00th=[ 347], 50.00th=[ 359], 60.00th=[ 359], 00:42:45.451 | 70.00th=[ 380], 80.00th=[ 405], 90.00th=[ 443], 95.00th=[ 527], 00:42:45.451 | 99.00th=[ 558], 99.50th=[ 575], 99.90th=[ 659], 99.95th=[ 659], 00:42:45.451 | 99.99th=[ 659] 00:42:45.451 bw ( KiB/s): min= 112, max= 256, per=3.08%, avg=181.89, stdev=63.60, samples=19 00:42:45.451 iops : min= 28, max= 64, avg=45.47, stdev=15.90, samples=19 00:42:45.451 lat (msec) : 250=8.93%, 500=85.71%, 750=5.36% 00:42:45.451 cpu : usr=98.66%, sys=0.92%, ctx=7, majf=0, minf=26 00:42:45.451 IO depths : 1=3.8%, 2=10.0%, 4=25.0%, 8=52.5%, 16=8.7%, 
32=0.0%, >=64=0.0% 00:42:45.451 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:45.451 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:45.451 issued rwts: total=448,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:45.451 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:45.451 filename0: (groupid=0, jobs=1): err= 0: pid=639518: Fri Dec 13 05:58:44 2024 00:42:45.451 read: IOPS=64, BW=260KiB/s (266kB/s)(2632KiB/10142msec) 00:42:45.451 slat (nsec): min=7560, max=27091, avg=10486.19, stdev=3365.62 00:42:45.451 clat (msec): min=113, max=415, avg=245.43, stdev=47.62 00:42:45.451 lat (msec): min=113, max=415, avg=245.44, stdev=47.62 00:42:45.451 clat percentiles (msec): 00:42:45.451 | 1.00th=[ 114], 5.00th=[ 186], 10.00th=[ 194], 20.00th=[ 213], 00:42:45.451 | 30.00th=[ 230], 40.00th=[ 232], 50.00th=[ 236], 60.00th=[ 255], 00:42:45.451 | 70.00th=[ 264], 80.00th=[ 275], 90.00th=[ 305], 95.00th=[ 342], 00:42:45.451 | 99.00th=[ 401], 99.50th=[ 418], 99.90th=[ 418], 99.95th=[ 418], 00:42:45.451 | 99.99th=[ 418] 00:42:45.451 bw ( KiB/s): min= 176, max= 384, per=4.36%, avg=256.80, stdev=53.31, samples=20 00:42:45.451 iops : min= 44, max= 96, avg=64.20, stdev=13.33, samples=20 00:42:45.451 lat (msec) : 250=59.57%, 500=40.43% 00:42:45.451 cpu : usr=98.92%, sys=0.68%, ctx=12, majf=0, minf=23 00:42:45.451 IO depths : 1=0.6%, 2=3.2%, 4=13.4%, 8=70.5%, 16=12.3%, 32=0.0%, >=64=0.0% 00:42:45.451 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:45.451 complete : 0=0.0%, 4=90.7%, 8=4.3%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:45.451 issued rwts: total=658,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:45.451 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:45.451 filename0: (groupid=0, jobs=1): err= 0: pid=639519: Fri Dec 13 05:58:44 2024 00:42:45.451 read: IOPS=67, BW=271KiB/s (277kB/s)(2752KiB/10161msec) 00:42:45.451 slat (nsec): min=6374, max=51158, avg=11673.56, stdev=7588.80 00:42:45.451 clat (msec): min=36, max=414, avg=236.01, stdev=60.20 00:42:45.451 lat (msec): min=36, max=414, avg=236.02, stdev=60.19 00:42:45.451 clat percentiles (msec): 00:42:45.451 | 1.00th=[ 37], 5.00th=[ 95], 10.00th=[ 176], 20.00th=[ 224], 00:42:45.451 | 30.00th=[ 228], 40.00th=[ 232], 50.00th=[ 234], 60.00th=[ 236], 00:42:45.451 | 70.00th=[ 264], 80.00th=[ 271], 90.00th=[ 288], 95.00th=[ 338], 00:42:45.451 | 99.00th=[ 388], 99.50th=[ 414], 99.90th=[ 414], 99.95th=[ 414], 00:42:45.451 | 99.99th=[ 414] 00:42:45.451 bw ( KiB/s): min= 176, max= 512, per=4.57%, avg=268.80, stdev=68.98, samples=20 00:42:45.451 iops : min= 44, max= 128, avg=67.20, stdev=17.25, samples=20 00:42:45.451 lat (msec) : 50=2.33%, 100=4.36%, 250=57.85%, 500=35.47% 00:42:45.451 cpu : usr=98.84%, sys=0.74%, ctx=24, majf=0, minf=37 00:42:45.451 IO depths : 1=0.4%, 2=1.6%, 4=9.3%, 8=76.3%, 16=12.4%, 32=0.0%, >=64=0.0% 00:42:45.451 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:45.451 complete : 0=0.0%, 4=89.5%, 8=5.3%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:45.451 issued rwts: total=688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:45.451 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:45.451 filename0: (groupid=0, jobs=1): err= 0: pid=639520: Fri Dec 13 05:58:44 2024 00:42:45.451 read: IOPS=61, BW=248KiB/s (254kB/s)(2504KiB/10108msec) 00:42:45.451 slat (nsec): min=7128, max=35619, avg=9977.19, stdev=3316.58 00:42:45.451 clat (msec): min=175, max=527, avg=257.62, stdev=46.93 00:42:45.451 lat (msec): 
min=175, max=527, avg=257.63, stdev=46.93 00:42:45.451 clat percentiles (msec): 00:42:45.451 | 1.00th=[ 211], 5.00th=[ 222], 10.00th=[ 228], 20.00th=[ 230], 00:42:45.451 | 30.00th=[ 232], 40.00th=[ 236], 50.00th=[ 239], 60.00th=[ 259], 00:42:45.451 | 70.00th=[ 264], 80.00th=[ 279], 90.00th=[ 317], 95.00th=[ 347], 00:42:45.451 | 99.00th=[ 456], 99.50th=[ 456], 99.90th=[ 527], 99.95th=[ 527], 00:42:45.451 | 99.99th=[ 527] 00:42:45.451 bw ( KiB/s): min= 112, max= 304, per=4.16%, avg=244.00, stdev=49.21, samples=20 00:42:45.452 iops : min= 28, max= 76, avg=61.00, stdev=12.30, samples=20 00:42:45.452 lat (msec) : 250=56.23%, 500=43.45%, 750=0.32% 00:42:45.452 cpu : usr=98.75%, sys=0.83%, ctx=12, majf=0, minf=32 00:42:45.452 IO depths : 1=0.6%, 2=1.6%, 4=8.9%, 8=76.8%, 16=12.0%, 32=0.0%, >=64=0.0% 00:42:45.452 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:45.452 complete : 0=0.0%, 4=89.5%, 8=5.1%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:45.452 issued rwts: total=626,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:45.452 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:45.452 filename0: (groupid=0, jobs=1): err= 0: pid=639521: Fri Dec 13 05:58:44 2024 00:42:45.452 read: IOPS=61, BW=244KiB/s (250kB/s)(2472KiB/10117msec) 00:42:45.452 slat (nsec): min=7516, max=31651, avg=9777.71, stdev=2738.78 00:42:45.452 clat (msec): min=181, max=432, avg=261.38, stdev=49.17 00:42:45.452 lat (msec): min=181, max=432, avg=261.39, stdev=49.17 00:42:45.452 clat percentiles (msec): 00:42:45.452 | 1.00th=[ 182], 5.00th=[ 209], 10.00th=[ 215], 20.00th=[ 230], 00:42:45.452 | 30.00th=[ 232], 40.00th=[ 236], 50.00th=[ 243], 60.00th=[ 255], 00:42:45.452 | 70.00th=[ 275], 80.00th=[ 288], 90.00th=[ 338], 95.00th=[ 363], 00:42:45.452 | 99.00th=[ 401], 99.50th=[ 435], 99.90th=[ 435], 99.95th=[ 435], 00:42:45.452 | 99.99th=[ 435] 00:42:45.452 bw ( KiB/s): min= 128, max= 336, per=4.09%, avg=240.80, stdev=47.14, samples=20 00:42:45.452 iops : min= 32, max= 84, avg=60.20, stdev=11.79, samples=20 00:42:45.452 lat (msec) : 250=53.40%, 500=46.60% 00:42:45.452 cpu : usr=98.73%, sys=0.86%, ctx=13, majf=0, minf=22 00:42:45.452 IO depths : 1=0.5%, 2=1.1%, 4=7.3%, 8=78.5%, 16=12.6%, 32=0.0%, >=64=0.0% 00:42:45.452 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:45.452 complete : 0=0.0%, 4=88.8%, 8=6.5%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:45.452 issued rwts: total=618,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:45.452 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:45.452 filename0: (groupid=0, jobs=1): err= 0: pid=639522: Fri Dec 13 05:58:44 2024 00:42:45.452 read: IOPS=61, BW=247KiB/s (252kB/s)(2496KiB/10125msec) 00:42:45.452 slat (nsec): min=6366, max=30780, avg=10297.01, stdev=3421.91 00:42:45.452 clat (msec): min=188, max=436, avg=258.89, stdev=49.66 00:42:45.452 lat (msec): min=188, max=436, avg=258.90, stdev=49.66 00:42:45.452 clat percentiles (msec): 00:42:45.452 | 1.00th=[ 190], 5.00th=[ 201], 10.00th=[ 205], 20.00th=[ 222], 00:42:45.452 | 30.00th=[ 230], 40.00th=[ 232], 50.00th=[ 236], 60.00th=[ 264], 00:42:45.452 | 70.00th=[ 279], 80.00th=[ 288], 90.00th=[ 347], 95.00th=[ 355], 00:42:45.452 | 99.00th=[ 401], 99.50th=[ 409], 99.90th=[ 439], 99.95th=[ 439], 00:42:45.452 | 99.99th=[ 439] 00:42:45.452 bw ( KiB/s): min= 128, max= 304, per=4.14%, avg=243.20, stdev=44.23, samples=20 00:42:45.452 iops : min= 32, max= 76, avg=60.80, stdev=11.06, samples=20 00:42:45.452 lat (msec) : 250=53.85%, 500=46.15% 00:42:45.452 cpu : usr=98.67%, 
sys=0.92%, ctx=14, majf=0, minf=35 00:42:45.452 IO depths : 1=0.2%, 2=0.5%, 4=6.2%, 8=80.1%, 16=13.0%, 32=0.0%, >=64=0.0% 00:42:45.452 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:45.452 complete : 0=0.0%, 4=88.5%, 8=6.8%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:45.452 issued rwts: total=624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:45.452 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:45.452 filename0: (groupid=0, jobs=1): err= 0: pid=639523: Fri Dec 13 05:58:44 2024 00:42:45.452 read: IOPS=62, BW=251KiB/s (257kB/s)(2536KiB/10120msec) 00:42:45.452 slat (nsec): min=4909, max=32142, avg=9932.91, stdev=3130.32 00:42:45.452 clat (msec): min=187, max=406, avg=255.08, stdev=39.61 00:42:45.452 lat (msec): min=187, max=406, avg=255.09, stdev=39.61 00:42:45.452 clat percentiles (msec): 00:42:45.452 | 1.00th=[ 199], 5.00th=[ 207], 10.00th=[ 213], 20.00th=[ 230], 00:42:45.452 | 30.00th=[ 234], 40.00th=[ 236], 50.00th=[ 239], 60.00th=[ 257], 00:42:45.452 | 70.00th=[ 264], 80.00th=[ 284], 90.00th=[ 338], 95.00th=[ 342], 00:42:45.452 | 99.00th=[ 359], 99.50th=[ 405], 99.90th=[ 405], 99.95th=[ 405], 00:42:45.452 | 99.99th=[ 405] 00:42:45.452 bw ( KiB/s): min= 128, max= 336, per=4.21%, avg=247.20, stdev=46.28, samples=20 00:42:45.452 iops : min= 32, max= 84, avg=61.80, stdev=11.57, samples=20 00:42:45.452 lat (msec) : 250=53.63%, 500=46.37% 00:42:45.452 cpu : usr=98.70%, sys=0.90%, ctx=12, majf=0, minf=35 00:42:45.452 IO depths : 1=0.6%, 2=2.1%, 4=10.1%, 8=75.1%, 16=12.1%, 32=0.0%, >=64=0.0% 00:42:45.452 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:45.452 complete : 0=0.0%, 4=89.7%, 8=5.1%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:45.452 issued rwts: total=634,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:45.452 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:45.452 filename1: (groupid=0, jobs=1): err= 0: pid=639524: Fri Dec 13 05:58:44 2024 00:42:45.452 read: IOPS=42, BW=171KiB/s (175kB/s)(1728KiB/10106msec) 00:42:45.452 slat (nsec): min=5406, max=30973, avg=9719.90, stdev=3656.12 00:42:45.452 clat (msec): min=209, max=555, avg=373.47, stdev=56.20 00:42:45.452 lat (msec): min=209, max=555, avg=373.48, stdev=56.20 00:42:45.452 clat percentiles (msec): 00:42:45.452 | 1.00th=[ 234], 5.00th=[ 296], 10.00th=[ 330], 20.00th=[ 338], 00:42:45.452 | 30.00th=[ 347], 40.00th=[ 355], 50.00th=[ 359], 60.00th=[ 380], 00:42:45.452 | 70.00th=[ 388], 80.00th=[ 414], 90.00th=[ 435], 95.00th=[ 468], 00:42:45.452 | 99.00th=[ 558], 99.50th=[ 558], 99.90th=[ 558], 99.95th=[ 558], 00:42:45.452 | 99.99th=[ 558] 00:42:45.452 bw ( KiB/s): min= 128, max= 256, per=2.98%, avg=175.16, stdev=58.54, samples=19 00:42:45.452 iops : min= 32, max= 64, avg=43.79, stdev=14.63, samples=19 00:42:45.452 lat (msec) : 250=1.85%, 500=93.52%, 750=4.63% 00:42:45.452 cpu : usr=98.92%, sys=0.68%, ctx=13, majf=0, minf=26 00:42:45.452 IO depths : 1=4.9%, 2=11.1%, 4=25.0%, 8=51.4%, 16=7.6%, 32=0.0%, >=64=0.0% 00:42:45.452 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:45.452 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:45.452 issued rwts: total=432,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:45.452 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:45.452 filename1: (groupid=0, jobs=1): err= 0: pid=639525: Fri Dec 13 05:58:44 2024 00:42:45.452 read: IOPS=63, BW=254KiB/s (260kB/s)(2568KiB/10119msec) 00:42:45.452 slat (nsec): min=6561, max=30281, avg=9724.81, stdev=2928.90 
00:42:45.452 clat (msec): min=190, max=361, avg=251.23, stdev=33.07 00:42:45.452 lat (msec): min=190, max=361, avg=251.24, stdev=33.07 00:42:45.452 clat percentiles (msec): 00:42:45.452 | 1.00th=[ 197], 5.00th=[ 209], 10.00th=[ 226], 20.00th=[ 230], 00:42:45.452 | 30.00th=[ 232], 40.00th=[ 234], 50.00th=[ 239], 60.00th=[ 255], 00:42:45.452 | 70.00th=[ 264], 80.00th=[ 275], 90.00th=[ 288], 95.00th=[ 338], 00:42:45.452 | 99.00th=[ 342], 99.50th=[ 363], 99.90th=[ 363], 99.95th=[ 363], 00:42:45.452 | 99.99th=[ 363] 00:42:45.452 bw ( KiB/s): min= 128, max= 336, per=4.26%, avg=250.40, stdev=43.83, samples=20 00:42:45.452 iops : min= 32, max= 84, avg=62.60, stdev=10.96, samples=20 00:42:45.452 lat (msec) : 250=57.94%, 500=42.06% 00:42:45.452 cpu : usr=98.76%, sys=0.83%, ctx=12, majf=0, minf=32 00:42:45.452 IO depths : 1=0.6%, 2=1.4%, 4=8.4%, 8=77.6%, 16=12.0%, 32=0.0%, >=64=0.0% 00:42:45.452 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:45.452 complete : 0=0.0%, 4=89.3%, 8=5.3%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:45.452 issued rwts: total=642,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:45.452 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:45.452 filename1: (groupid=0, jobs=1): err= 0: pid=639526: Fri Dec 13 05:58:44 2024 00:42:45.452 read: IOPS=64, BW=258KiB/s (264kB/s)(2608KiB/10127msec) 00:42:45.452 slat (nsec): min=7518, max=32398, avg=11311.52, stdev=3985.23 00:42:45.452 clat (msec): min=164, max=378, avg=247.95, stdev=33.05 00:42:45.452 lat (msec): min=164, max=378, avg=247.96, stdev=33.05 00:42:45.452 clat percentiles (msec): 00:42:45.452 | 1.00th=[ 186], 5.00th=[ 203], 10.00th=[ 218], 20.00th=[ 230], 00:42:45.452 | 30.00th=[ 232], 40.00th=[ 234], 50.00th=[ 236], 60.00th=[ 243], 00:42:45.452 | 70.00th=[ 264], 80.00th=[ 271], 90.00th=[ 288], 95.00th=[ 288], 00:42:45.452 | 99.00th=[ 347], 99.50th=[ 380], 99.90th=[ 380], 99.95th=[ 380], 00:42:45.452 | 99.99th=[ 380] 00:42:45.452 bw ( KiB/s): min= 176, max= 368, per=4.33%, avg=254.40, stdev=34.78, samples=20 00:42:45.452 iops : min= 44, max= 92, avg=63.60, stdev= 8.70, samples=20 00:42:45.452 lat (msec) : 250=61.96%, 500=38.04% 00:42:45.452 cpu : usr=98.58%, sys=1.02%, ctx=14, majf=0, minf=32 00:42:45.452 IO depths : 1=0.3%, 2=4.3%, 4=18.1%, 8=65.0%, 16=12.3%, 32=0.0%, >=64=0.0% 00:42:45.452 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:45.452 complete : 0=0.0%, 4=92.2%, 8=2.3%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:45.452 issued rwts: total=652,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:45.452 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:45.452 filename1: (groupid=0, jobs=1): err= 0: pid=639528: Fri Dec 13 05:58:44 2024 00:42:45.452 read: IOPS=61, BW=248KiB/s (254kB/s)(2504KiB/10106msec) 00:42:45.452 slat (nsec): min=7517, max=35764, avg=9719.98, stdev=3112.21 00:42:45.452 clat (msec): min=194, max=660, avg=257.29, stdev=58.48 00:42:45.452 lat (msec): min=194, max=660, avg=257.30, stdev=58.49 00:42:45.452 clat percentiles (msec): 00:42:45.452 | 1.00th=[ 201], 5.00th=[ 211], 10.00th=[ 228], 20.00th=[ 228], 00:42:45.452 | 30.00th=[ 230], 40.00th=[ 234], 50.00th=[ 239], 60.00th=[ 259], 00:42:45.452 | 70.00th=[ 264], 80.00th=[ 275], 90.00th=[ 288], 95.00th=[ 334], 00:42:45.452 | 99.00th=[ 558], 99.50th=[ 558], 99.90th=[ 659], 99.95th=[ 659], 00:42:45.452 | 99.99th=[ 659] 00:42:45.452 bw ( KiB/s): min= 176, max= 336, per=4.36%, avg=256.84, stdev=33.93, samples=19 00:42:45.452 iops : min= 44, max= 84, avg=64.21, stdev= 8.48, samples=19 
00:42:45.452 lat (msec) : 250=57.19%, 500=40.26%, 750=2.56% 00:42:45.452 cpu : usr=98.68%, sys=0.91%, ctx=13, majf=0, minf=30 00:42:45.452 IO depths : 1=0.3%, 2=1.3%, 4=8.9%, 8=77.2%, 16=12.3%, 32=0.0%, >=64=0.0% 00:42:45.452 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:45.452 complete : 0=0.0%, 4=89.5%, 8=5.1%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:45.452 issued rwts: total=626,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:45.452 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:45.452 filename1: (groupid=0, jobs=1): err= 0: pid=639529: Fri Dec 13 05:58:44 2024 00:42:45.452 read: IOPS=62, BW=249KiB/s (255kB/s)(2520KiB/10106msec) 00:42:45.452 slat (nsec): min=7557, max=60098, avg=9898.82, stdev=3851.99 00:42:45.452 clat (msec): min=197, max=554, avg=255.99, stdev=55.27 00:42:45.452 lat (msec): min=197, max=554, avg=256.00, stdev=55.27 00:42:45.452 clat percentiles (msec): 00:42:45.452 | 1.00th=[ 199], 5.00th=[ 211], 10.00th=[ 228], 20.00th=[ 230], 00:42:45.452 | 30.00th=[ 232], 40.00th=[ 234], 50.00th=[ 236], 60.00th=[ 253], 00:42:45.452 | 70.00th=[ 264], 80.00th=[ 275], 90.00th=[ 288], 95.00th=[ 326], 00:42:45.452 | 99.00th=[ 558], 99.50th=[ 558], 99.90th=[ 558], 99.95th=[ 558], 00:42:45.452 | 99.99th=[ 558] 00:42:45.453 bw ( KiB/s): min= 176, max= 368, per=4.40%, avg=258.53, stdev=46.58, samples=19 00:42:45.453 iops : min= 44, max= 92, avg=64.63, stdev=11.64, samples=19 00:42:45.453 lat (msec) : 250=58.73%, 500=38.73%, 750=2.54% 00:42:45.453 cpu : usr=98.49%, sys=1.12%, ctx=13, majf=0, minf=41 00:42:45.453 IO depths : 1=0.6%, 2=2.2%, 4=11.0%, 8=74.3%, 16=11.9%, 32=0.0%, >=64=0.0% 00:42:45.453 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:45.453 complete : 0=0.0%, 4=90.1%, 8=4.4%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:45.453 issued rwts: total=630,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:45.453 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:45.453 filename1: (groupid=0, jobs=1): err= 0: pid=639530: Fri Dec 13 05:58:44 2024 00:42:45.453 read: IOPS=66, BW=267KiB/s (274kB/s)(2712KiB/10142msec) 00:42:45.453 slat (nsec): min=7509, max=80877, avg=12644.48, stdev=8593.28 00:42:45.453 clat (msec): min=63, max=366, avg=238.91, stdev=45.25 00:42:45.453 lat (msec): min=63, max=366, avg=238.93, stdev=45.25 00:42:45.453 clat percentiles (msec): 00:42:45.453 | 1.00th=[ 64], 5.00th=[ 171], 10.00th=[ 224], 20.00th=[ 228], 00:42:45.453 | 30.00th=[ 230], 40.00th=[ 232], 50.00th=[ 236], 60.00th=[ 239], 00:42:45.453 | 70.00th=[ 259], 80.00th=[ 266], 90.00th=[ 284], 95.00th=[ 288], 00:42:45.453 | 99.00th=[ 355], 99.50th=[ 355], 99.90th=[ 368], 99.95th=[ 368], 00:42:45.453 | 99.99th=[ 368] 00:42:45.453 bw ( KiB/s): min= 176, max= 384, per=4.50%, avg=264.80, stdev=42.64, samples=20 00:42:45.453 iops : min= 44, max= 96, avg=66.20, stdev=10.66, samples=20 00:42:45.453 lat (msec) : 100=4.42%, 250=60.47%, 500=35.10% 00:42:45.453 cpu : usr=98.74%, sys=0.86%, ctx=15, majf=0, minf=31 00:42:45.453 IO depths : 1=1.3%, 2=3.2%, 4=11.9%, 8=72.3%, 16=11.2%, 32=0.0%, >=64=0.0% 00:42:45.453 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:45.453 complete : 0=0.0%, 4=90.3%, 8=4.1%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:45.453 issued rwts: total=678,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:45.453 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:45.453 filename1: (groupid=0, jobs=1): err= 0: pid=639531: Fri Dec 13 05:58:44 2024 00:42:45.453 read: IOPS=66, BW=267KiB/s 
(274kB/s)(2712KiB/10142msec) 00:42:45.453 slat (nsec): min=7570, max=31255, avg=9575.37, stdev=2509.84 00:42:45.453 clat (msec): min=113, max=287, avg=238.08, stdev=29.55 00:42:45.453 lat (msec): min=113, max=287, avg=238.09, stdev=29.55 00:42:45.453 clat percentiles (msec): 00:42:45.453 | 1.00th=[ 113], 5.00th=[ 188], 10.00th=[ 218], 20.00th=[ 228], 00:42:45.453 | 30.00th=[ 230], 40.00th=[ 232], 50.00th=[ 234], 60.00th=[ 236], 00:42:45.453 | 70.00th=[ 253], 80.00th=[ 264], 90.00th=[ 275], 95.00th=[ 284], 00:42:45.453 | 99.00th=[ 288], 99.50th=[ 288], 99.90th=[ 288], 99.95th=[ 288], 00:42:45.453 | 99.99th=[ 288] 00:42:45.453 bw ( KiB/s): min= 176, max= 336, per=4.50%, avg=264.80, stdev=37.60, samples=20 00:42:45.453 iops : min= 44, max= 84, avg=66.20, stdev= 9.40, samples=20 00:42:45.453 lat (msec) : 250=69.32%, 500=30.68% 00:42:45.453 cpu : usr=98.55%, sys=1.05%, ctx=12, majf=0, minf=39 00:42:45.453 IO depths : 1=0.1%, 2=0.6%, 4=7.5%, 8=79.4%, 16=12.4%, 32=0.0%, >=64=0.0% 00:42:45.453 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:45.453 complete : 0=0.0%, 4=89.1%, 8=5.4%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:45.453 issued rwts: total=678,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:45.453 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:45.453 filename1: (groupid=0, jobs=1): err= 0: pid=639532: Fri Dec 13 05:58:44 2024 00:42:45.453 read: IOPS=44, BW=176KiB/s (181kB/s)(1784KiB/10108msec) 00:42:45.453 slat (nsec): min=5835, max=32260, avg=9773.90, stdev=3134.87 00:42:45.453 clat (msec): min=155, max=568, avg=362.42, stdev=76.89 00:42:45.453 lat (msec): min=155, max=568, avg=362.43, stdev=76.89 00:42:45.453 clat percentiles (msec): 00:42:45.453 | 1.00th=[ 157], 5.00th=[ 232], 10.00th=[ 262], 20.00th=[ 334], 00:42:45.453 | 30.00th=[ 342], 40.00th=[ 351], 50.00th=[ 359], 60.00th=[ 380], 00:42:45.453 | 70.00th=[ 388], 80.00th=[ 401], 90.00th=[ 439], 95.00th=[ 518], 00:42:45.453 | 99.00th=[ 558], 99.50th=[ 558], 99.90th=[ 567], 99.95th=[ 567], 00:42:45.453 | 99.99th=[ 567] 00:42:45.453 bw ( KiB/s): min= 128, max= 256, per=3.08%, avg=181.05, stdev=59.16, samples=19 00:42:45.453 iops : min= 32, max= 64, avg=45.26, stdev=14.79, samples=19 00:42:45.453 lat (msec) : 250=9.87%, 500=84.30%, 750=5.83% 00:42:45.453 cpu : usr=98.64%, sys=0.96%, ctx=17, majf=0, minf=23 00:42:45.453 IO depths : 1=3.4%, 2=9.6%, 4=25.1%, 8=52.9%, 16=9.0%, 32=0.0%, >=64=0.0% 00:42:45.453 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:45.453 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:45.453 issued rwts: total=446,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:45.453 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:45.453 filename2: (groupid=0, jobs=1): err= 0: pid=639533: Fri Dec 13 05:58:44 2024 00:42:45.453 read: IOPS=66, BW=265KiB/s (271kB/s)(2688KiB/10143msec) 00:42:45.453 slat (nsec): min=7551, max=37253, avg=11105.49, stdev=5997.75 00:42:45.453 clat (msec): min=79, max=414, avg=241.20, stdev=49.94 00:42:45.453 lat (msec): min=79, max=414, avg=241.21, stdev=49.94 00:42:45.453 clat percentiles (msec): 00:42:45.453 | 1.00th=[ 81], 5.00th=[ 169], 10.00th=[ 192], 20.00th=[ 224], 00:42:45.453 | 30.00th=[ 230], 40.00th=[ 232], 50.00th=[ 234], 60.00th=[ 236], 00:42:45.453 | 70.00th=[ 262], 80.00th=[ 266], 90.00th=[ 288], 95.00th=[ 338], 00:42:45.453 | 99.00th=[ 388], 99.50th=[ 414], 99.90th=[ 414], 99.95th=[ 414], 00:42:45.453 | 99.99th=[ 414] 00:42:45.453 bw ( KiB/s): min= 176, max= 384, per=4.46%, 
avg=262.40, stdev=50.70, samples=20 00:42:45.453 iops : min= 44, max= 96, avg=65.60, stdev=12.68, samples=20 00:42:45.453 lat (msec) : 100=2.08%, 250=62.65%, 500=35.27% 00:42:45.453 cpu : usr=98.52%, sys=1.06%, ctx=14, majf=0, minf=30 00:42:45.453 IO depths : 1=0.1%, 2=0.6%, 4=7.1%, 8=79.5%, 16=12.6%, 32=0.0%, >=64=0.0% 00:42:45.453 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:45.453 complete : 0=0.0%, 4=88.9%, 8=5.9%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:45.453 issued rwts: total=672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:45.453 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:45.453 filename2: (groupid=0, jobs=1): err= 0: pid=639534: Fri Dec 13 05:58:44 2024 00:42:45.453 read: IOPS=62, BW=249KiB/s (255kB/s)(2520KiB/10122msec) 00:42:45.453 slat (nsec): min=4653, max=56261, avg=11488.10, stdev=5455.45 00:42:45.453 clat (msec): min=190, max=423, avg=256.30, stdev=41.67 00:42:45.453 lat (msec): min=190, max=423, avg=256.31, stdev=41.67 00:42:45.453 clat percentiles (msec): 00:42:45.453 | 1.00th=[ 192], 5.00th=[ 197], 10.00th=[ 224], 20.00th=[ 228], 00:42:45.453 | 30.00th=[ 232], 40.00th=[ 234], 50.00th=[ 239], 60.00th=[ 262], 00:42:45.453 | 70.00th=[ 271], 80.00th=[ 284], 90.00th=[ 317], 95.00th=[ 342], 00:42:45.453 | 99.00th=[ 380], 99.50th=[ 422], 99.90th=[ 422], 99.95th=[ 422], 00:42:45.453 | 99.99th=[ 422] 00:42:45.453 bw ( KiB/s): min= 128, max= 368, per=4.17%, avg=245.60, stdev=62.57, samples=20 00:42:45.453 iops : min= 32, max= 92, avg=61.40, stdev=15.64, samples=20 00:42:45.453 lat (msec) : 250=51.75%, 500=48.25% 00:42:45.453 cpu : usr=98.65%, sys=0.94%, ctx=12, majf=0, minf=30 00:42:45.453 IO depths : 1=0.5%, 2=2.5%, 4=11.9%, 8=72.7%, 16=12.4%, 32=0.0%, >=64=0.0% 00:42:45.453 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:45.453 complete : 0=0.0%, 4=90.3%, 8=4.7%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:45.453 issued rwts: total=630,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:45.453 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:45.453 filename2: (groupid=0, jobs=1): err= 0: pid=639535: Fri Dec 13 05:58:44 2024 00:42:45.453 read: IOPS=62, BW=249KiB/s (255kB/s)(2520KiB/10107msec) 00:42:45.453 slat (nsec): min=5149, max=35450, avg=9708.62, stdev=3097.87 00:42:45.453 clat (msec): min=178, max=451, avg=256.18, stdev=46.08 00:42:45.453 lat (msec): min=178, max=451, avg=256.19, stdev=46.08 00:42:45.453 clat percentiles (msec): 00:42:45.453 | 1.00th=[ 180], 5.00th=[ 213], 10.00th=[ 228], 20.00th=[ 230], 00:42:45.453 | 30.00th=[ 234], 40.00th=[ 236], 50.00th=[ 236], 60.00th=[ 255], 00:42:45.453 | 70.00th=[ 264], 80.00th=[ 279], 90.00th=[ 288], 95.00th=[ 347], 00:42:45.453 | 99.00th=[ 451], 99.50th=[ 451], 99.90th=[ 451], 99.95th=[ 451], 00:42:45.453 | 99.99th=[ 451] 00:42:45.453 bw ( KiB/s): min= 128, max= 368, per=4.17%, avg=245.60, stdev=58.33, samples=20 00:42:45.453 iops : min= 32, max= 92, avg=61.40, stdev=14.58, samples=20 00:42:45.453 lat (msec) : 250=57.46%, 500=42.54% 00:42:45.453 cpu : usr=98.64%, sys=0.95%, ctx=13, majf=0, minf=27 00:42:45.453 IO depths : 1=0.6%, 2=2.2%, 4=11.0%, 8=74.3%, 16=11.9%, 32=0.0%, >=64=0.0% 00:42:45.453 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:45.453 complete : 0=0.0%, 4=90.1%, 8=4.4%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:45.453 issued rwts: total=630,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:45.453 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:45.453 filename2: (groupid=0, jobs=1): 
err= 0: pid=639536: Fri Dec 13 05:58:44 2024 00:42:45.453 read: IOPS=66, BW=265KiB/s (271kB/s)(2688KiB/10142msec) 00:42:45.453 slat (nsec): min=7572, max=57892, avg=11529.22, stdev=6703.36 00:42:45.453 clat (msec): min=79, max=429, avg=240.09, stdev=52.05 00:42:45.453 lat (msec): min=79, max=429, avg=240.10, stdev=52.05 00:42:45.453 clat percentiles (msec): 00:42:45.453 | 1.00th=[ 81], 5.00th=[ 157], 10.00th=[ 171], 20.00th=[ 226], 00:42:45.453 | 30.00th=[ 230], 40.00th=[ 232], 50.00th=[ 234], 60.00th=[ 236], 00:42:45.453 | 70.00th=[ 264], 80.00th=[ 275], 90.00th=[ 288], 95.00th=[ 347], 00:42:45.453 | 99.00th=[ 388], 99.50th=[ 430], 99.90th=[ 430], 99.95th=[ 430], 00:42:45.453 | 99.99th=[ 430] 00:42:45.453 bw ( KiB/s): min= 176, max= 384, per=4.46%, avg=262.40, stdev=45.08, samples=20 00:42:45.453 iops : min= 44, max= 96, avg=65.60, stdev=11.27, samples=20 00:42:45.453 lat (msec) : 100=2.08%, 250=60.71%, 500=37.20% 00:42:45.453 cpu : usr=98.66%, sys=0.94%, ctx=12, majf=0, minf=55 00:42:45.453 IO depths : 1=0.1%, 2=1.2%, 4=8.9%, 8=77.1%, 16=12.6%, 32=0.0%, >=64=0.0% 00:42:45.453 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:45.453 complete : 0=0.0%, 4=89.4%, 8=5.4%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:45.453 issued rwts: total=672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:45.453 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:45.453 filename2: (groupid=0, jobs=1): err= 0: pid=639537: Fri Dec 13 05:58:44 2024 00:42:45.453 read: IOPS=64, BW=259KiB/s (265kB/s)(2624KiB/10127msec) 00:42:45.453 slat (nsec): min=7566, max=32031, avg=12529.87, stdev=4349.22 00:42:45.453 clat (msec): min=185, max=347, avg=246.38, stdev=28.58 00:42:45.453 lat (msec): min=185, max=347, avg=246.39, stdev=28.58 00:42:45.453 clat percentiles (msec): 00:42:45.453 | 1.00th=[ 186], 5.00th=[ 209], 10.00th=[ 218], 20.00th=[ 230], 00:42:45.453 | 30.00th=[ 232], 40.00th=[ 234], 50.00th=[ 236], 60.00th=[ 241], 00:42:45.453 | 70.00th=[ 264], 80.00th=[ 266], 90.00th=[ 284], 95.00th=[ 288], 00:42:45.453 | 99.00th=[ 347], 99.50th=[ 347], 99.90th=[ 347], 99.95th=[ 347], 00:42:45.453 | 99.99th=[ 347] 00:42:45.454 bw ( KiB/s): min= 144, max= 368, per=4.34%, avg=256.00, stdev=36.71, samples=20 00:42:45.454 iops : min= 36, max= 92, avg=64.00, stdev= 9.18, samples=20 00:42:45.454 lat (msec) : 250=63.11%, 500=36.89% 00:42:45.454 cpu : usr=98.71%, sys=0.88%, ctx=12, majf=0, minf=37 00:42:45.454 IO depths : 1=1.1%, 2=7.3%, 4=25.0%, 8=55.2%, 16=11.4%, 32=0.0%, >=64=0.0% 00:42:45.454 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:45.454 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:45.454 issued rwts: total=656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:45.454 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:45.454 filename2: (groupid=0, jobs=1): err= 0: pid=639539: Fri Dec 13 05:58:44 2024 00:42:45.454 read: IOPS=63, BW=254KiB/s (260kB/s)(2568KiB/10116msec) 00:42:45.454 slat (nsec): min=7574, max=36439, avg=9834.14, stdev=2900.40 00:42:45.454 clat (msec): min=183, max=357, avg=251.57, stdev=33.93 00:42:45.454 lat (msec): min=183, max=357, avg=251.58, stdev=33.93 00:42:45.454 clat percentiles (msec): 00:42:45.454 | 1.00th=[ 184], 5.00th=[ 209], 10.00th=[ 226], 20.00th=[ 230], 00:42:45.454 | 30.00th=[ 234], 40.00th=[ 236], 50.00th=[ 239], 60.00th=[ 253], 00:42:45.454 | 70.00th=[ 264], 80.00th=[ 275], 90.00th=[ 288], 95.00th=[ 338], 00:42:45.454 | 99.00th=[ 342], 99.50th=[ 359], 99.90th=[ 359], 99.95th=[ 359], 
00:42:45.454 | 99.99th=[ 359] 00:42:45.454 bw ( KiB/s): min= 128, max= 336, per=4.26%, avg=250.40, stdev=47.09, samples=20 00:42:45.454 iops : min= 32, max= 84, avg=62.60, stdev=11.77, samples=20 00:42:45.454 lat (msec) : 250=59.19%, 500=40.81% 00:42:45.454 cpu : usr=98.75%, sys=0.83%, ctx=16, majf=0, minf=29 00:42:45.454 IO depths : 1=0.5%, 2=1.2%, 4=8.4%, 8=77.7%, 16=12.1%, 32=0.0%, >=64=0.0% 00:42:45.454 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:45.454 complete : 0=0.0%, 4=89.3%, 8=5.3%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:45.454 issued rwts: total=642,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:45.454 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:45.454 filename2: (groupid=0, jobs=1): err= 0: pid=639540: Fri Dec 13 05:58:44 2024 00:42:45.454 read: IOPS=58, BW=233KiB/s (238kB/s)(2352KiB/10106msec) 00:42:45.454 slat (nsec): min=7538, max=30208, avg=9770.90, stdev=3038.75 00:42:45.454 clat (msec): min=168, max=554, avg=274.33, stdev=65.38 00:42:45.454 lat (msec): min=168, max=554, avg=274.34, stdev=65.38 00:42:45.454 clat percentiles (msec): 00:42:45.454 | 1.00th=[ 169], 5.00th=[ 228], 10.00th=[ 228], 20.00th=[ 230], 00:42:45.454 | 30.00th=[ 234], 40.00th=[ 239], 50.00th=[ 262], 60.00th=[ 264], 00:42:45.454 | 70.00th=[ 284], 80.00th=[ 313], 90.00th=[ 359], 95.00th=[ 380], 00:42:45.454 | 99.00th=[ 558], 99.50th=[ 558], 99.90th=[ 558], 99.95th=[ 558], 00:42:45.454 | 99.99th=[ 558] 00:42:45.454 bw ( KiB/s): min= 128, max= 304, per=4.09%, avg=240.84, stdev=48.14, samples=19 00:42:45.454 iops : min= 32, max= 76, avg=60.21, stdev=12.04, samples=19 00:42:45.454 lat (msec) : 250=44.22%, 500=53.06%, 750=2.72% 00:42:45.454 cpu : usr=98.75%, sys=0.82%, ctx=34, majf=0, minf=35 00:42:45.454 IO depths : 1=1.2%, 2=3.1%, 4=11.7%, 8=72.6%, 16=11.4%, 32=0.0%, >=64=0.0% 00:42:45.454 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:45.454 complete : 0=0.0%, 4=90.2%, 8=4.3%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:45.454 issued rwts: total=588,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:45.454 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:45.454 filename2: (groupid=0, jobs=1): err= 0: pid=639541: Fri Dec 13 05:58:44 2024 00:42:45.454 read: IOPS=66, BW=264KiB/s (271kB/s)(2688KiB/10164msec) 00:42:45.454 slat (nsec): min=7036, max=80275, avg=19965.63, stdev=8837.27 00:42:45.454 clat (msec): min=53, max=355, avg=241.33, stdev=44.54 00:42:45.454 lat (msec): min=53, max=355, avg=241.35, stdev=44.54 00:42:45.454 clat percentiles (msec): 00:42:45.454 | 1.00th=[ 54], 5.00th=[ 171], 10.00th=[ 226], 20.00th=[ 228], 00:42:45.454 | 30.00th=[ 232], 40.00th=[ 232], 50.00th=[ 236], 60.00th=[ 236], 00:42:45.454 | 70.00th=[ 264], 80.00th=[ 266], 90.00th=[ 284], 95.00th=[ 305], 00:42:45.454 | 99.00th=[ 355], 99.50th=[ 355], 99.90th=[ 355], 99.95th=[ 355], 00:42:45.454 | 99.99th=[ 355] 00:42:45.454 bw ( KiB/s): min= 144, max= 384, per=4.46%, avg=262.40, stdev=46.55, samples=20 00:42:45.454 iops : min= 36, max= 96, avg=65.60, stdev=11.64, samples=20 00:42:45.454 lat (msec) : 100=2.38%, 250=61.90%, 500=35.71% 00:42:45.454 cpu : usr=98.54%, sys=1.04%, ctx=9, majf=0, minf=40 00:42:45.454 IO depths : 1=0.7%, 2=7.0%, 4=25.0%, 8=55.5%, 16=11.8%, 32=0.0%, >=64=0.0% 00:42:45.454 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:45.454 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:45.454 issued rwts: total=672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:45.454 latency : 
target=0, window=0, percentile=100.00%, depth=16 00:42:45.454 00:42:45.454 Run status group 0 (all jobs): 00:42:45.454 READ: bw=5869KiB/s (6010kB/s), 171KiB/s-271KiB/s (175kB/s-277kB/s), io=58.3MiB (61.1MB), run=10106-10164msec 00:42:45.454 05:58:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:42:45.454 05:58:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:42:45.454 05:58:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:42:45.454 05:58:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:42:45.454 05:58:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:42:45.454 05:58:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:42:45.454 05:58:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:45.454 05:58:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:45.454 05:58:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:45.454 05:58:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:42:45.454 05:58:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:45.454 05:58:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:45.454 05:58:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:45.454 05:58:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:42:45.454 05:58:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:42:45.454 05:58:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:42:45.454 05:58:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:42:45.454 05:58:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:45.454 05:58:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:45.454 05:58:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:45.454 05:58:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:42:45.454 05:58:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:45.454 05:58:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:45.454 05:58:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:45.454 05:58:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:42:45.454 05:58:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:42:45.454 05:58:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:42:45.454 05:58:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:42:45.454 05:58:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:45.454 05:58:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:45.454 05:58:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:45.454 05:58:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:42:45.454 05:58:44 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:42:45.454 05:58:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:45.454 05:58:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:45.454 05:58:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:42:45.454 05:58:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:42:45.454 05:58:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:42:45.454 05:58:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:42:45.454 05:58:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:42:45.454 05:58:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:42:45.454 05:58:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:42:45.454 05:58:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:42:45.454 05:58:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:42:45.454 05:58:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:42:45.454 05:58:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:42:45.454 05:58:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:42:45.454 05:58:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:45.454 05:58:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:45.454 bdev_null0 00:42:45.454 05:58:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:45.454 05:58:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:42:45.454 05:58:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:45.454 05:58:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:45.454 05:58:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:45.454 05:58:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:42:45.454 05:58:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:45.454 05:58:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:45.454 05:58:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:45.454 05:58:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:42:45.454 05:58:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:45.454 05:58:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:45.454 [2024-12-13 05:58:44.320038] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:45.454 05:58:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:45.454 05:58:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:42:45.454 05:58:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:42:45.454 05:58:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:42:45.454 05:58:44 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:42:45.454 05:58:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:45.455 05:58:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:45.455 bdev_null1 00:42:45.455 05:58:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:45.455 05:58:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:42:45.455 05:58:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:45.455 05:58:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:45.455 05:58:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:45.455 05:58:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:42:45.455 05:58:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:45.455 05:58:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:45.455 05:58:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:45.455 05:58:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:42:45.455 05:58:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:45.455 05:58:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:45.455 05:58:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:45.455 05:58:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:42:45.455 05:58:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:42:45.455 05:58:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:42:45.455 05:58:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:42:45.455 05:58:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:42:45.455 05:58:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:45.455 05:58:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:45.455 { 00:42:45.455 "params": { 00:42:45.455 "name": "Nvme$subsystem", 00:42:45.455 "trtype": "$TEST_TRANSPORT", 00:42:45.455 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:45.455 "adrfam": "ipv4", 00:42:45.455 "trsvcid": "$NVMF_PORT", 00:42:45.455 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:45.455 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:45.455 "hdgst": ${hdgst:-false}, 00:42:45.455 "ddgst": ${ddgst:-false} 00:42:45.455 }, 00:42:45.455 "method": "bdev_nvme_attach_controller" 00:42:45.455 } 00:42:45.455 EOF 00:42:45.455 )") 00:42:45.455 05:58:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:45.455 05:58:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:42:45.455 05:58:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 
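(The two /dev/fd descriptors in the fio_plugin invocation traced above carry, respectively, the generated SPDK bdev JSON configuration and the generated fio job file. As a loose sketch of that pattern — not the harness code itself — the same thing can be reproduced with bash process substitution; the LD_PRELOAD path is the spdk_bdev plugin built in this workspace, while SUBSYS_JSON and JOB_CONF are hypothetical placeholders for the two documents the harness renders on the fly:

  # Sketch of the invocation pattern recorded in the xtrace above.
  # SUBSYS_JSON / JOB_CONF are placeholders, not harness variables.
  LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev \
    --spdk_json_conf <(printf '%s\n' "$SUBSYS_JSON") \
    <(printf '%s\n' "$JOB_CONF")

Passing both inputs as anonymous descriptors lets the harness run fio without writing temporary files to disk.)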
00:42:45.455 05:58:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:42:45.455 05:58:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:42:45.455 05:58:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:42:45.455 05:58:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:42:45.455 05:58:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:42:45.455 05:58:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:45.455 05:58:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:42:45.455 05:58:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:42:45.455 05:58:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:42:45.455 05:58:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:42:45.455 05:58:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:45.455 05:58:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:42:45.455 05:58:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:42:45.455 05:58:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:42:45.455 05:58:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:42:45.455 05:58:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:42:45.455 05:58:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:45.455 05:58:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:45.455 { 00:42:45.455 "params": { 00:42:45.455 "name": "Nvme$subsystem", 00:42:45.455 "trtype": "$TEST_TRANSPORT", 00:42:45.455 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:45.455 "adrfam": "ipv4", 00:42:45.455 "trsvcid": "$NVMF_PORT", 00:42:45.455 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:45.455 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:45.455 "hdgst": ${hdgst:-false}, 00:42:45.455 "ddgst": ${ddgst:-false} 00:42:45.455 }, 00:42:45.455 "method": "bdev_nvme_attach_controller" 00:42:45.455 } 00:42:45.455 EOF 00:42:45.455 )") 00:42:45.455 05:58:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:42:45.455 05:58:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:42:45.455 05:58:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:42:45.455 05:58:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
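(Each pass through the for-subsystem loop traced above appends one heredoc-rendered JSON snippet to the config array; the jq ./IFS=,/printf sequence that follows joins those snippets with commas and validates the result. A minimal sketch of that assembly step, assuming a bash array named config with one '{ "params": ... }' snippet per subsystem — note the real gen_nvmf_target_json embeds the joined snippets inside a larger bdev-subsystem document rather than a bare array:

  assemble_config() {
    # Join the per-subsystem snippets with commas via IFS, wrap them in a
    # JSON array so jq sees one valid document, and let jq validate and
    # pretty-print the result.
    local IFS=,
    printf '[%s]\n' "${config[*]}" | jq .
  }

The printf output that follows in the log is exactly such a joined document: one bdev_nvme_attach_controller parameter block per cnode, with hdgst/ddgst defaulted to false.)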
00:42:45.455 05:58:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:42:45.455 05:58:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:42:45.455 "params": { 00:42:45.455 "name": "Nvme0", 00:42:45.455 "trtype": "tcp", 00:42:45.455 "traddr": "10.0.0.2", 00:42:45.455 "adrfam": "ipv4", 00:42:45.455 "trsvcid": "4420", 00:42:45.455 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:45.455 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:45.455 "hdgst": false, 00:42:45.455 "ddgst": false 00:42:45.455 }, 00:42:45.455 "method": "bdev_nvme_attach_controller" 00:42:45.455 },{ 00:42:45.455 "params": { 00:42:45.455 "name": "Nvme1", 00:42:45.455 "trtype": "tcp", 00:42:45.455 "traddr": "10.0.0.2", 00:42:45.455 "adrfam": "ipv4", 00:42:45.455 "trsvcid": "4420", 00:42:45.455 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:42:45.455 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:42:45.455 "hdgst": false, 00:42:45.455 "ddgst": false 00:42:45.455 }, 00:42:45.455 "method": "bdev_nvme_attach_controller" 00:42:45.455 }' 00:42:45.455 05:58:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:42:45.455 05:58:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:42:45.455 05:58:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:42:45.455 05:58:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:45.455 05:58:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:42:45.455 05:58:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:42:45.455 05:58:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:42:45.455 05:58:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:42:45.455 05:58:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:42:45.455 05:58:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:45.455 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:42:45.455 ... 00:42:45.455 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:42:45.455 ... 
00:42:45.455 fio-3.35 00:42:45.455 Starting 4 threads 00:42:50.721 00:42:50.721 filename0: (groupid=0, jobs=1): err= 0: pid=641422: Fri Dec 13 05:58:50 2024 00:42:50.721 read: IOPS=2770, BW=21.6MiB/s (22.7MB/s)(108MiB/5003msec) 00:42:50.721 slat (nsec): min=6167, max=50520, avg=8925.89, stdev=3011.65 00:42:50.721 clat (usec): min=747, max=5509, avg=2861.40, stdev=373.47 00:42:50.721 lat (usec): min=763, max=5521, avg=2870.33, stdev=373.38 00:42:50.721 clat percentiles (usec): 00:42:50.721 | 1.00th=[ 1713], 5.00th=[ 2212], 10.00th=[ 2409], 20.00th=[ 2606], 00:42:50.721 | 30.00th=[ 2769], 40.00th=[ 2933], 50.00th=[ 2966], 60.00th=[ 2966], 00:42:50.721 | 70.00th=[ 2999], 80.00th=[ 3032], 90.00th=[ 3163], 95.00th=[ 3326], 00:42:50.721 | 99.00th=[ 3851], 99.50th=[ 4113], 99.90th=[ 4817], 99.95th=[ 4948], 00:42:50.721 | 99.99th=[ 5473] 00:42:50.721 bw ( KiB/s): min=21136, max=23424, per=26.15%, avg=22172.80, stdev=795.89, samples=10 00:42:50.721 iops : min= 2642, max= 2928, avg=2771.60, stdev=99.49, samples=10 00:42:50.721 lat (usec) : 750=0.01%, 1000=0.22% 00:42:50.721 lat (msec) : 2=1.77%, 4=97.26%, 10=0.74% 00:42:50.721 cpu : usr=95.26%, sys=4.40%, ctx=9, majf=0, minf=10 00:42:50.721 IO depths : 1=0.3%, 2=4.5%, 4=65.8%, 8=29.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:50.721 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:50.721 complete : 0=0.0%, 4=94.0%, 8=6.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:50.721 issued rwts: total=13863,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:50.721 latency : target=0, window=0, percentile=100.00%, depth=8 00:42:50.721 filename0: (groupid=0, jobs=1): err= 0: pid=641423: Fri Dec 13 05:58:50 2024 00:42:50.721 read: IOPS=2592, BW=20.2MiB/s (21.2MB/s)(101MiB/5001msec) 00:42:50.721 slat (nsec): min=6171, max=48156, avg=8844.60, stdev=3195.07 00:42:50.721 clat (usec): min=834, max=5569, avg=3059.76, stdev=382.39 00:42:50.721 lat (usec): min=846, max=5581, avg=3068.61, stdev=382.29 00:42:50.721 clat percentiles (usec): 00:42:50.721 | 1.00th=[ 2147], 5.00th=[ 2573], 10.00th=[ 2769], 20.00th=[ 2933], 00:42:50.721 | 30.00th=[ 2966], 40.00th=[ 2966], 50.00th=[ 2999], 60.00th=[ 2999], 00:42:50.721 | 70.00th=[ 3064], 80.00th=[ 3228], 90.00th=[ 3458], 95.00th=[ 3720], 00:42:50.721 | 99.00th=[ 4555], 99.50th=[ 4948], 99.90th=[ 5342], 99.95th=[ 5473], 00:42:50.721 | 99.99th=[ 5538] 00:42:50.721 bw ( KiB/s): min=20104, max=21328, per=24.49%, avg=20765.33, stdev=499.15, samples=9 00:42:50.721 iops : min= 2513, max= 2666, avg=2595.67, stdev=62.39, samples=9 00:42:50.721 lat (usec) : 1000=0.02% 00:42:50.721 lat (msec) : 2=0.62%, 4=96.28%, 10=3.09% 00:42:50.721 cpu : usr=95.90%, sys=3.78%, ctx=9, majf=0, minf=9 00:42:50.722 IO depths : 1=0.2%, 2=2.7%, 4=70.2%, 8=26.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:50.722 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:50.722 complete : 0=0.0%, 4=91.6%, 8=8.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:50.722 issued rwts: total=12963,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:50.722 latency : target=0, window=0, percentile=100.00%, depth=8 00:42:50.722 filename1: (groupid=0, jobs=1): err= 0: pid=641424: Fri Dec 13 05:58:50 2024 00:42:50.722 read: IOPS=2588, BW=20.2MiB/s (21.2MB/s)(101MiB/5002msec) 00:42:50.722 slat (nsec): min=6196, max=48829, avg=9086.82, stdev=3330.64 00:42:50.722 clat (usec): min=627, max=5659, avg=3062.12, stdev=421.94 00:42:50.722 lat (usec): min=634, max=5674, avg=3071.20, stdev=421.77 00:42:50.722 clat percentiles (usec): 00:42:50.722 | 1.00th=[ 2212], 5.00th=[ 
2507], 10.00th=[ 2704], 20.00th=[ 2900], 00:42:50.722 | 30.00th=[ 2966], 40.00th=[ 2966], 50.00th=[ 2999], 60.00th=[ 2999], 00:42:50.722 | 70.00th=[ 3064], 80.00th=[ 3195], 90.00th=[ 3458], 95.00th=[ 3818], 00:42:50.722 | 99.00th=[ 4883], 99.50th=[ 5080], 99.90th=[ 5342], 99.95th=[ 5407], 00:42:50.722 | 99.99th=[ 5538] 00:42:50.722 bw ( KiB/s): min=20192, max=21312, per=24.52%, avg=20785.78, stdev=418.26, samples=9 00:42:50.722 iops : min= 2524, max= 2664, avg=2598.22, stdev=52.28, samples=9 00:42:50.722 lat (usec) : 750=0.01% 00:42:50.722 lat (msec) : 2=0.32%, 4=95.54%, 10=4.14% 00:42:50.722 cpu : usr=95.52%, sys=4.14%, ctx=11, majf=0, minf=10 00:42:50.722 IO depths : 1=0.1%, 2=5.3%, 4=67.4%, 8=27.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:50.722 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:50.722 complete : 0=0.0%, 4=91.9%, 8=8.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:50.722 issued rwts: total=12949,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:50.722 latency : target=0, window=0, percentile=100.00%, depth=8 00:42:50.722 filename1: (groupid=0, jobs=1): err= 0: pid=641425: Fri Dec 13 05:58:50 2024 00:42:50.722 read: IOPS=2648, BW=20.7MiB/s (21.7MB/s)(103MiB/5002msec) 00:42:50.722 slat (nsec): min=6173, max=46550, avg=9117.39, stdev=3271.35 00:42:50.722 clat (usec): min=678, max=5609, avg=2994.39, stdev=367.44 00:42:50.722 lat (usec): min=685, max=5626, avg=3003.51, stdev=367.41 00:42:50.722 clat percentiles (usec): 00:42:50.722 | 1.00th=[ 2089], 5.00th=[ 2442], 10.00th=[ 2606], 20.00th=[ 2835], 00:42:50.722 | 30.00th=[ 2933], 40.00th=[ 2966], 50.00th=[ 2966], 60.00th=[ 2999], 00:42:50.722 | 70.00th=[ 3032], 80.00th=[ 3130], 90.00th=[ 3326], 95.00th=[ 3589], 00:42:50.722 | 99.00th=[ 4424], 99.50th=[ 4555], 99.90th=[ 5145], 99.95th=[ 5276], 00:42:50.722 | 99.99th=[ 5538] 00:42:50.722 bw ( KiB/s): min=20585, max=21488, per=25.12%, avg=21297.00, stdev=292.17, samples=9 00:42:50.722 iops : min= 2573, max= 2686, avg=2662.11, stdev=36.56, samples=9 00:42:50.722 lat (usec) : 750=0.02%, 1000=0.02% 00:42:50.722 lat (msec) : 2=0.69%, 4=97.16%, 10=2.11% 00:42:50.722 cpu : usr=95.72%, sys=3.94%, ctx=8, majf=0, minf=9 00:42:50.722 IO depths : 1=0.1%, 2=3.3%, 4=67.7%, 8=28.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:50.722 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:50.722 complete : 0=0.0%, 4=93.4%, 8=6.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:50.722 issued rwts: total=13246,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:50.722 latency : target=0, window=0, percentile=100.00%, depth=8 00:42:50.722 00:42:50.722 Run status group 0 (all jobs): 00:42:50.722 READ: bw=82.8MiB/s (86.8MB/s), 20.2MiB/s-21.6MiB/s (21.2MB/s-22.7MB/s), io=414MiB (434MB), run=5001-5003msec 00:42:50.722 05:58:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:42:50.722 05:58:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:42:50.722 05:58:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:42:50.722 05:58:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:42:50.722 05:58:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:42:50.722 05:58:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:42:50.722 05:58:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:50.722 05:58:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
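destroy_subsystems 0 1 now unwinds the fixture: each NVMe-oF subsystem is deleted over RPC, then its backing null bdev. Assuming rpc_cmd forwards to SPDK's scripts/rpc.py (the NQNs and bdev names below are copied from the trace), the whole teardown reduces to:

```bash
# Hedged sketch of the destroy_subsystems teardown traced here; rpc_cmd is
# assumed to wrap scripts/rpc.py against the running nvmf_tgt.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
for sub in 0 1; do
    # Remove the NVMe-oF subsystem first, then its backing null bdev.
    "$rpc" nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$sub"
    "$rpc" bdev_null_delete "bdev_null$sub"
done
```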
00:42:50.722 05:58:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:50.722 05:58:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:42:50.722 05:58:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:50.722 05:58:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:50.722 05:58:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:50.722 05:58:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:42:50.722 05:58:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:42:50.722 05:58:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:42:50.722 05:58:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:42:50.722 05:58:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:50.722 05:58:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:50.722 05:58:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:50.722 05:58:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:42:50.722 05:58:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:50.722 05:58:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:50.722 05:58:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:50.722 00:42:50.722 real 0m24.167s 00:42:50.722 user 4m54.858s 00:42:50.722 sys 0m4.605s 00:42:50.722 05:58:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:50.722 05:58:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:50.722 ************************************ 00:42:50.722 END TEST fio_dif_rand_params 00:42:50.722 ************************************ 00:42:50.722 05:58:50 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:42:50.722 05:58:50 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:42:50.722 05:58:50 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:50.722 05:58:50 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:50.722 ************************************ 00:42:50.722 START TEST fio_dif_digest 00:42:50.722 ************************************ 00:42:50.722 05:58:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:42:50.722 05:58:50 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:42:50.722 05:58:50 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:42:50.722 05:58:50 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:42:50.722 05:58:50 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:42:50.722 05:58:50 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:42:50.722 05:58:50 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:42:50.722 05:58:50 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:42:50.722 05:58:50 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:42:50.722 05:58:50 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:42:50.722 05:58:50 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:42:50.722 05:58:50 
nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:42:50.722 05:58:50 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:42:50.722 05:58:50 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:42:50.722 05:58:50 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:42:50.722 05:58:50 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:42:50.722 05:58:50 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:42:50.722 05:58:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:50.722 05:58:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:42:50.722 bdev_null0 00:42:50.722 05:58:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:50.722 05:58:50 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:42:50.722 05:58:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:50.722 05:58:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:42:50.722 05:58:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:50.722 05:58:50 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:42:50.722 05:58:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:50.722 05:58:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:42:50.722 05:58:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:50.722 05:58:50 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:42:50.722 05:58:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:50.722 05:58:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:42:50.722 [2024-12-13 05:58:50.585221] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:50.722 05:58:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:50.722 05:58:50 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:42:50.722 05:58:50 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:42:50.722 05:58:50 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:42:50.722 05:58:50 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:42:50.722 05:58:50 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:50.722 05:58:50 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:42:50.722 05:58:50 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:50.722 05:58:50 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:42:50.722 05:58:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:50.722 05:58:50 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:50.722 { 00:42:50.722 "params": { 00:42:50.722 "name": "Nvme$subsystem", 00:42:50.722 "trtype": "$TEST_TRANSPORT", 
00:42:50.723 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:50.723 "adrfam": "ipv4", 00:42:50.723 "trsvcid": "$NVMF_PORT", 00:42:50.723 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:50.723 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:50.723 "hdgst": ${hdgst:-false}, 00:42:50.723 "ddgst": ${ddgst:-false} 00:42:50.723 }, 00:42:50.723 "method": "bdev_nvme_attach_controller" 00:42:50.723 } 00:42:50.723 EOF 00:42:50.723 )") 00:42:50.723 05:58:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:42:50.723 05:58:50 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:42:50.723 05:58:50 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:42:50.723 05:58:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:42:50.723 05:58:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:42:50.723 05:58:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:50.723 05:58:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:42:50.723 05:58:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:42:50.723 05:58:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:42:50.723 05:58:50 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:42:50.723 05:58:50 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:42:50.723 05:58:50 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:42:50.723 05:58:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:50.723 05:58:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:42:50.723 05:58:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:42:50.723 05:58:50 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:42:50.723 05:58:50 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:42:50.723 05:58:50 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:42:50.723 "params": { 00:42:50.723 "name": "Nvme0", 00:42:50.723 "trtype": "tcp", 00:42:50.723 "traddr": "10.0.0.2", 00:42:50.723 "adrfam": "ipv4", 00:42:50.723 "trsvcid": "4420", 00:42:50.723 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:50.723 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:50.723 "hdgst": true, 00:42:50.723 "ddgst": true 00:42:50.723 }, 00:42:50.723 "method": "bdev_nvme_attach_controller" 00:42:50.723 }' 00:42:50.723 05:58:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:42:50.723 05:58:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:42:50.723 05:58:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:42:50.723 05:58:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:50.723 05:58:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:42:50.723 05:58:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:42:50.723 05:58:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:42:50.723 05:58:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:42:50.723 05:58:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:42:50.723 05:58:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:50.981 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:42:50.981 ... 
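The loop around the fio launch probes the fio plugin with ldd for ASan runtimes so that any sanitizer library can be LD_PRELOADed ahead of the plugin; in this trace both greps come back empty, so only the plugin itself is preloaded. The pattern in isolation (the /dev/fd config plumbing is elided and "$jobfile" is a stand-in):

```bash
# Sketch of the sanitizer probe traced above. ldd prints "lib => /path (addr)",
# so awk's $3 is the resolved library path; empty output means no preload needed.
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
preload=
for sanitizer in libasan libclang_rt.asan; do
    asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
    [[ -n $asan_lib ]] && preload+=" $asan_lib"
done
jobfile=example.fio  # placeholder for the generated fio job file
LD_PRELOAD="$preload $plugin" /usr/src/fio/fio --ioengine=spdk_bdev "$jobfile"
```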
00:42:50.981 fio-3.35 00:42:50.981 Starting 3 threads 00:43:03.181 00:43:03.181 filename0: (groupid=0, jobs=1): err= 0: pid=642540: Fri Dec 13 05:59:01 2024 00:43:03.181 read: IOPS=287, BW=35.9MiB/s (37.7MB/s)(361MiB/10046msec) 00:43:03.181 slat (nsec): min=6416, max=30312, avg=11619.43, stdev=1808.30 00:43:03.181 clat (usec): min=5737, max=49851, avg=10414.98, stdev=1253.15 00:43:03.181 lat (usec): min=5747, max=49863, avg=10426.60, stdev=1253.09 00:43:03.181 clat percentiles (usec): 00:43:03.181 | 1.00th=[ 8717], 5.00th=[ 9110], 10.00th=[ 9503], 20.00th=[ 9765], 00:43:03.181 | 30.00th=[10028], 40.00th=[10159], 50.00th=[10421], 60.00th=[10552], 00:43:03.181 | 70.00th=[10683], 80.00th=[10945], 90.00th=[11338], 95.00th=[11600], 00:43:03.181 | 99.00th=[12387], 99.50th=[12780], 99.90th=[13566], 99.95th=[46924], 00:43:03.181 | 99.99th=[50070] 00:43:03.181 bw ( KiB/s): min=35584, max=37888, per=35.11%, avg=36902.40, stdev=734.83, samples=20 00:43:03.181 iops : min= 278, max= 296, avg=288.30, stdev= 5.74, samples=20 00:43:03.181 lat (msec) : 10=29.80%, 20=70.13%, 50=0.07% 00:43:03.181 cpu : usr=94.44%, sys=5.25%, ctx=20, majf=0, minf=42 00:43:03.181 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:03.181 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:03.181 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:03.181 issued rwts: total=2886,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:03.181 latency : target=0, window=0, percentile=100.00%, depth=3 00:43:03.181 filename0: (groupid=0, jobs=1): err= 0: pid=642541: Fri Dec 13 05:59:01 2024 00:43:03.181 read: IOPS=273, BW=34.2MiB/s (35.8MB/s)(343MiB/10044msec) 00:43:03.181 slat (nsec): min=6522, max=23302, avg=11731.98, stdev=1720.91 00:43:03.181 clat (usec): min=8399, max=48980, avg=10944.34, stdev=1241.09 00:43:03.181 lat (usec): min=8412, max=48991, avg=10956.07, stdev=1241.09 00:43:03.181 clat percentiles (usec): 00:43:03.181 | 1.00th=[ 8979], 5.00th=[ 9634], 10.00th=[10028], 20.00th=[10290], 00:43:03.181 | 30.00th=[10552], 40.00th=[10814], 50.00th=[10945], 60.00th=[11076], 00:43:03.181 | 70.00th=[11338], 80.00th=[11600], 90.00th=[11863], 95.00th=[12125], 00:43:03.181 | 99.00th=[12649], 99.50th=[13042], 99.90th=[13304], 99.95th=[45876], 00:43:03.181 | 99.99th=[49021] 00:43:03.181 bw ( KiB/s): min=34304, max=36096, per=33.42%, avg=35123.20, stdev=436.35, samples=20 00:43:03.181 iops : min= 268, max= 282, avg=274.40, stdev= 3.41, samples=20 00:43:03.181 lat (msec) : 10=10.74%, 20=89.18%, 50=0.07% 00:43:03.181 cpu : usr=94.89%, sys=4.81%, ctx=18, majf=0, minf=35 00:43:03.181 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:03.181 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:03.181 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:03.181 issued rwts: total=2746,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:03.181 latency : target=0, window=0, percentile=100.00%, depth=3 00:43:03.181 filename0: (groupid=0, jobs=1): err= 0: pid=642542: Fri Dec 13 05:59:01 2024 00:43:03.181 read: IOPS=260, BW=32.6MiB/s (34.2MB/s)(327MiB/10044msec) 00:43:03.181 slat (nsec): min=6480, max=25677, avg=11816.19, stdev=1669.07 00:43:03.181 clat (usec): min=7967, max=47378, avg=11484.93, stdev=1219.15 00:43:03.181 lat (usec): min=7979, max=47389, avg=11496.74, stdev=1219.18 00:43:03.181 clat percentiles (usec): 00:43:03.181 | 1.00th=[ 9765], 5.00th=[10290], 10.00th=[10552], 20.00th=[10814], 00:43:03.181 
| 30.00th=[11076], 40.00th=[11207], 50.00th=[11469], 60.00th=[11600], 00:43:03.181 | 70.00th=[11863], 80.00th=[12125], 90.00th=[12387], 95.00th=[12780], 00:43:03.181 | 99.00th=[13435], 99.50th=[13698], 99.90th=[16188], 99.95th=[43779], 00:43:03.181 | 99.99th=[47449] 00:43:03.181 bw ( KiB/s): min=32512, max=33792, per=31.85%, avg=33472.00, stdev=379.48, samples=20 00:43:03.181 iops : min= 254, max= 264, avg=261.50, stdev= 2.96, samples=20 00:43:03.181 lat (msec) : 10=2.52%, 20=97.40%, 50=0.08% 00:43:03.181 cpu : usr=94.21%, sys=5.49%, ctx=16, majf=0, minf=65 00:43:03.181 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:03.181 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:03.181 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:03.181 issued rwts: total=2617,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:03.181 latency : target=0, window=0, percentile=100.00%, depth=3 00:43:03.181 00:43:03.181 Run status group 0 (all jobs): 00:43:03.181 READ: bw=103MiB/s (108MB/s), 32.6MiB/s-35.9MiB/s (34.2MB/s-37.7MB/s), io=1031MiB (1081MB), run=10044-10046msec 00:43:03.181 05:59:01 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:43:03.181 05:59:01 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:43:03.181 05:59:01 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:43:03.181 05:59:01 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:43:03.181 05:59:01 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:43:03.181 05:59:01 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:43:03.181 05:59:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:03.181 05:59:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:43:03.181 05:59:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:03.181 05:59:01 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:43:03.181 05:59:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:03.181 05:59:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:43:03.181 05:59:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:03.181 00:43:03.181 real 0m11.083s 00:43:03.181 user 0m35.416s 00:43:03.181 sys 0m1.836s 00:43:03.181 05:59:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:03.181 05:59:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:43:03.181 ************************************ 00:43:03.181 END TEST fio_dif_digest 00:43:03.181 ************************************ 00:43:03.181 05:59:01 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:43:03.181 05:59:01 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:43:03.181 05:59:01 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:43:03.181 05:59:01 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:43:03.181 05:59:01 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:43:03.181 05:59:01 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:43:03.181 05:59:01 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:43:03.181 05:59:01 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:43:03.181 rmmod nvme_tcp 00:43:03.181 rmmod nvme_fabrics 00:43:03.181 rmmod nvme_keyring 00:43:03.181 05:59:01 nvmf_dif -- nvmf/common.sh@127 -- # 
modprobe -v -r nvme-fabrics 00:43:03.181 05:59:01 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:43:03.181 05:59:01 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:43:03.181 05:59:01 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 634373 ']' 00:43:03.181 05:59:01 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 634373 00:43:03.181 05:59:01 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 634373 ']' 00:43:03.181 05:59:01 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 634373 00:43:03.181 05:59:01 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:43:03.181 05:59:01 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:43:03.181 05:59:01 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 634373 00:43:03.181 05:59:01 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:43:03.181 05:59:01 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:43:03.181 05:59:01 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 634373' 00:43:03.181 killing process with pid 634373 00:43:03.181 05:59:01 nvmf_dif -- common/autotest_common.sh@973 -- # kill 634373 00:43:03.181 05:59:01 nvmf_dif -- common/autotest_common.sh@978 -- # wait 634373 00:43:03.181 05:59:01 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:43:03.181 05:59:01 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:43:04.559 Waiting for block devices as requested 00:43:04.819 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:43:04.819 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:43:04.819 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:43:05.078 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:43:05.078 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:43:05.078 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:43:05.337 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:43:05.337 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:43:05.337 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:43:05.596 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:43:05.596 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:43:05.596 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:43:05.596 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:43:05.854 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:43:05.854 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:43:05.854 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:43:06.113 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:43:06.113 05:59:05 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:43:06.113 05:59:06 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:43:06.113 05:59:06 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:43:06.113 05:59:06 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:43:06.113 05:59:06 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:43:06.113 05:59:06 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:43:06.113 05:59:06 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:43:06.113 05:59:06 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:43:06.113 05:59:06 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:06.113 05:59:06 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:43:06.113 05:59:06 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:08.649 05:59:08 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:43:08.649 00:43:08.649 real 1m13.539s 
00:43:08.649 user 7m11.757s 00:43:08.649 sys 0m19.666s 00:43:08.649 05:59:08 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:08.649 05:59:08 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:43:08.649 ************************************ 00:43:08.649 END TEST nvmf_dif 00:43:08.649 ************************************ 00:43:08.649 05:59:08 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:43:08.649 05:59:08 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:43:08.649 05:59:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:08.649 05:59:08 -- common/autotest_common.sh@10 -- # set +x 00:43:08.649 ************************************ 00:43:08.649 START TEST nvmf_abort_qd_sizes 00:43:08.649 ************************************ 00:43:08.649 05:59:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:43:08.649 * Looking for test storage... 00:43:08.649 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:43:08.649 05:59:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:43:08.649 05:59:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lcov --version 00:43:08.649 05:59:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:43:08.649 05:59:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:43:08.649 05:59:08 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:43:08.649 05:59:08 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:43:08.649 05:59:08 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:43:08.649 05:59:08 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:43:08.649 05:59:08 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:43:08.649 05:59:08 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:43:08.649 05:59:08 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:43:08.649 05:59:08 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:43:08.649 05:59:08 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:43:08.649 05:59:08 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:43:08.649 05:59:08 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:43:08.649 05:59:08 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:43:08.649 05:59:08 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:43:08.649 05:59:08 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:43:08.649 05:59:08 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:43:08.649 05:59:08 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:43:08.649 05:59:08 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:43:08.649 05:59:08 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:43:08.649 05:59:08 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:43:08.649 05:59:08 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:43:08.649 05:59:08 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:43:08.649 05:59:08 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:43:08.649 05:59:08 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:43:08.649 05:59:08 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:43:08.649 05:59:08 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:43:08.649 05:59:08 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:43:08.649 05:59:08 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:43:08.649 05:59:08 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:43:08.649 05:59:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:08.649 05:59:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:43:08.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:08.649 --rc genhtml_branch_coverage=1 00:43:08.649 --rc genhtml_function_coverage=1 00:43:08.649 --rc genhtml_legend=1 00:43:08.649 --rc geninfo_all_blocks=1 00:43:08.649 --rc geninfo_unexecuted_blocks=1 00:43:08.649 00:43:08.649 ' 00:43:08.649 05:59:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:43:08.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:08.649 --rc genhtml_branch_coverage=1 00:43:08.649 --rc genhtml_function_coverage=1 00:43:08.649 --rc genhtml_legend=1 00:43:08.649 --rc geninfo_all_blocks=1 00:43:08.649 --rc geninfo_unexecuted_blocks=1 00:43:08.649 00:43:08.649 ' 00:43:08.649 05:59:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:43:08.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:08.649 --rc genhtml_branch_coverage=1 00:43:08.649 --rc genhtml_function_coverage=1 00:43:08.649 --rc genhtml_legend=1 00:43:08.649 --rc geninfo_all_blocks=1 00:43:08.649 --rc geninfo_unexecuted_blocks=1 00:43:08.649 00:43:08.649 ' 00:43:08.649 05:59:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:43:08.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:08.649 --rc genhtml_branch_coverage=1 00:43:08.649 --rc genhtml_function_coverage=1 00:43:08.649 --rc genhtml_legend=1 00:43:08.649 --rc geninfo_all_blocks=1 00:43:08.649 --rc geninfo_unexecuted_blocks=1 00:43:08.649 00:43:08.649 ' 00:43:08.649 05:59:08 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:43:08.649 05:59:08 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:43:08.649 05:59:08 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:08.649 05:59:08 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:08.649 05:59:08 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:08.649 05:59:08 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:08.649 05:59:08 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:43:08.649 05:59:08 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:08.649 05:59:08 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:08.649 05:59:08 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:08.649 05:59:08 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:08.649 05:59:08 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:08.649 05:59:08 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:43:08.649 05:59:08 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:43:08.649 05:59:08 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:08.649 05:59:08 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:08.649 05:59:08 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:43:08.649 05:59:08 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:08.649 05:59:08 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:08.649 05:59:08 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:43:08.649 05:59:08 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:08.649 05:59:08 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:08.649 05:59:08 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:08.649 05:59:08 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:08.649 05:59:08 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:08.650 05:59:08 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:08.650 05:59:08 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:43:08.650 05:59:08 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:08.650 05:59:08 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:43:08.650 05:59:08 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:43:08.650 05:59:08 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:43:08.650 05:59:08 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:08.650 05:59:08 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:08.650 05:59:08 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:08.650 05:59:08 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:43:08.650 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:43:08.650 05:59:08 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:43:08.650 05:59:08 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:43:08.650 05:59:08 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:43:08.650 05:59:08 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:43:08.650 05:59:08 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:43:08.650 05:59:08 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:43:08.650 05:59:08 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:43:08.650 05:59:08 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:43:08.650 05:59:08 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:43:08.650 05:59:08 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:08.650 05:59:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:43:08.650 05:59:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:08.650 05:59:08 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:43:08.650 05:59:08 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:43:08.650 05:59:08 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:43:08.650 05:59:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:43:13.923 05:59:13 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:43:13.923 05:59:13 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:43:13.923 05:59:13 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:43:13.923 05:59:13 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:43:13.923 05:59:13 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:43:13.923 05:59:13 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:43:13.923 05:59:13 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:43:13.923 05:59:13 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:43:13.923 05:59:13 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:43:13.923 05:59:13 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:43:13.923 05:59:13 nvmf_abort_qd_sizes -- 
nvmf/common.sh@320 -- # local -ga e810 00:43:13.923 05:59:13 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:43:13.923 05:59:13 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:43:13.923 05:59:13 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:43:13.923 05:59:13 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:43:13.923 05:59:13 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:43:13.923 05:59:13 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:43:13.923 05:59:13 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:43:13.923 05:59:13 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:43:13.923 05:59:13 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:43:13.923 05:59:13 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:43:13.923 05:59:13 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:43:13.923 05:59:13 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:43:13.923 05:59:13 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:43:13.923 05:59:13 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:43:13.923 05:59:13 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:43:13.923 05:59:13 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:43:13.923 05:59:13 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:43:13.923 05:59:13 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:43:13.923 05:59:13 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:43:13.923 05:59:13 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:43:13.923 05:59:13 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:43:13.923 05:59:13 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:43:13.923 05:59:13 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:13.923 05:59:13 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:43:13.923 Found 0000:af:00.0 (0x8086 - 0x159b) 00:43:13.923 05:59:13 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:13.923 05:59:13 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:13.923 05:59:13 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:13.923 05:59:13 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:13.923 05:59:13 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:13.923 05:59:13 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:13.923 05:59:13 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:43:13.923 Found 0000:af:00.1 (0x8086 - 0x159b) 00:43:13.923 05:59:13 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:13.923 05:59:13 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:13.923 05:59:13 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:13.923 05:59:13 nvmf_abort_qd_sizes -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:13.923 05:59:13 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:13.923 05:59:13 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:43:13.923 05:59:13 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:43:13.923 05:59:13 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:43:13.923 05:59:13 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:43:13.923 05:59:13 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:13.923 05:59:13 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:43:13.923 05:59:13 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:13.923 05:59:13 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:43:13.923 05:59:13 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:43:13.923 05:59:13 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:13.923 05:59:13 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:43:13.923 Found net devices under 0000:af:00.0: cvl_0_0 00:43:13.923 05:59:13 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:43:13.923 05:59:13 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:43:13.923 05:59:13 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:13.923 05:59:13 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:43:13.923 05:59:13 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:13.923 05:59:13 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:43:13.923 05:59:13 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:43:13.923 05:59:13 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:13.923 05:59:13 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:43:13.923 Found net devices under 0000:af:00.1: cvl_0_1 00:43:13.923 05:59:13 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:43:13.923 05:59:13 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:43:13.923 05:59:13 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:43:13.923 05:59:13 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:43:13.923 05:59:13 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:43:13.923 05:59:13 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:43:13.923 05:59:13 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:43:13.923 05:59:13 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:43:13.923 05:59:13 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:43:13.923 05:59:13 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:43:13.923 05:59:13 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:43:13.923 05:59:13 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:43:13.923 05:59:13 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:43:13.923 05:59:13 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:43:13.923 05:59:13 
nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:43:13.923 05:59:13 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:43:13.923 05:59:13 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:43:13.923 05:59:13 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:43:13.923 05:59:13 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:43:14.182 05:59:13 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:43:14.182 05:59:13 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:43:14.182 05:59:14 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:43:14.182 05:59:14 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:43:14.182 05:59:14 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:43:14.182 05:59:14 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:43:14.182 05:59:14 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:43:14.182 05:59:14 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:43:14.182 05:59:14 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:43:14.182 05:59:14 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:43:14.182 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:43:14.182 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.442 ms 00:43:14.182 00:43:14.182 --- 10.0.0.2 ping statistics --- 00:43:14.182 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:14.182 rtt min/avg/max/mdev = 0.442/0.442/0.442/0.000 ms 00:43:14.182 05:59:14 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:43:14.182 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:43:14.182 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:43:14.182 00:43:14.182 --- 10.0.0.1 ping statistics --- 00:43:14.182 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:14.182 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:43:14.182 05:59:14 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:43:14.182 05:59:14 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:43:14.182 05:59:14 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:43:14.182 05:59:14 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:43:17.471 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:43:17.471 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:43:17.471 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:43:17.471 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:43:17.471 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:43:17.471 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:43:17.471 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:43:17.471 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:43:17.471 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:43:17.471 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:43:17.471 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:43:17.471 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:43:17.471 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:43:17.471 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:43:17.471 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:43:17.471 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:43:18.039 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:43:18.039 05:59:18 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:43:18.039 05:59:18 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:43:18.039 05:59:18 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:43:18.039 05:59:18 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:43:18.039 05:59:18 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:43:18.039 05:59:18 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:43:18.039 05:59:18 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:43:18.039 05:59:18 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:43:18.039 05:59:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:43:18.039 05:59:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:43:18.039 05:59:18 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=650302 00:43:18.039 05:59:18 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 650302 00:43:18.039 05:59:18 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:43:18.039 05:59:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 650302 ']' 00:43:18.039 05:59:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:18.039 05:59:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:43:18.039 05:59:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:43:18.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:18.039 05:59:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:43:18.039 05:59:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:43:18.297 [2024-12-13 05:59:18.099503] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:43:18.297 [2024-12-13 05:59:18.099553] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:43:18.297 [2024-12-13 05:59:18.179794] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:43:18.297 [2024-12-13 05:59:18.204257] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:43:18.297 [2024-12-13 05:59:18.204294] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:43:18.297 [2024-12-13 05:59:18.204301] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:43:18.297 [2024-12-13 05:59:18.204307] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:43:18.297 [2024-12-13 05:59:18.204312] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:43:18.297 [2024-12-13 05:59:18.205787] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:43:18.297 [2024-12-13 05:59:18.205898] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:43:18.297 [2024-12-13 05:59:18.206002] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:43:18.297 [2024-12-13 05:59:18.206004] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:43:18.297 05:59:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:43:18.297 05:59:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:43:18.297 05:59:18 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:43:18.297 05:59:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:43:18.297 05:59:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:43:18.555 05:59:18 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:43:18.555 05:59:18 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:43:18.555 05:59:18 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:43:18.555 05:59:18 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:43:18.555 05:59:18 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:43:18.555 05:59:18 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:43:18.555 05:59:18 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:5e:00.0 ]] 00:43:18.555 05:59:18 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:43:18.555 05:59:18 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:43:18.555 05:59:18 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:5e:00.0 ]] 00:43:18.555 05:59:18 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:43:18.555 
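The nvme_in_userspace walk being traced here selects NVMe controllers (PCI class 0x010802) that are no longer bound to the kernel nvme driver and are therefore usable by SPDK. A hedged sketch of the same selection done directly against sysfs (the real helper reads a prebuilt pci_bus_cache rather than globbing):

```bash
# Enumerate NVMe-class PCI functions and keep the ones not claimed by the
# kernel nvme driver, i.e. those rebound to vfio-pci/uio for userspace use.
bdfs=()
for dev in /sys/bus/pci/devices/*; do
    [[ $(<"$dev/class") == 0x010802 ]] || continue       # NVMe class code
    bdf=${dev##*/}
    [[ -e /sys/bus/pci/drivers/nvme/$bdf ]] && continue  # still on kernel nvme
    bdfs+=("$bdf")
done
printf '%s\n' "${bdfs[@]}"
```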
05:59:18 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:43:18.555 05:59:18 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:43:18.555 05:59:18 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:43:18.555 05:59:18 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:5e:00.0 00:43:18.555 05:59:18 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:43:18.555 05:59:18 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:5e:00.0 00:43:18.555 05:59:18 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:43:18.555 05:59:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:43:18.555 05:59:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:18.555 05:59:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:43:18.555 ************************************ 00:43:18.555 START TEST spdk_target_abort 00:43:18.555 ************************************ 00:43:18.555 05:59:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:43:18.555 05:59:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:43:18.555 05:59:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target 00:43:18.555 05:59:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:18.555 05:59:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:43:21.837 spdk_targetn1 00:43:21.837 05:59:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:21.837 05:59:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:43:21.837 05:59:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:21.837 05:59:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:43:21.837 [2024-12-13 05:59:21.208508] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:21.837 05:59:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:21.837 05:59:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:43:21.837 05:59:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:21.837 05:59:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:43:21.837 05:59:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:21.837 05:59:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:43:21.837 05:59:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:21.837 05:59:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:43:21.837 05:59:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:21.837 05:59:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:43:21.837 05:59:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:21.837 05:59:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:43:21.837 [2024-12-13 05:59:21.256801] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:21.837 05:59:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:21.837 05:59:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:43:21.837 05:59:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:43:21.837 05:59:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:43:21.837 05:59:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:43:21.837 05:59:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:43:21.837 05:59:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:43:21.837 05:59:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:43:21.837 05:59:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:43:21.837 05:59:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:43:21.837 05:59:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:43:21.837 05:59:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:43:21.837 05:59:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:43:21.837 05:59:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:43:21.837 05:59:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:43:21.837 05:59:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:43:21.837 05:59:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:43:21.837 05:59:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:43:21.837 05:59:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:43:21.837 05:59:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:43:21.837 05:59:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:43:21.837 05:59:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:43:25.118 Initializing NVMe Controllers 00:43:25.118 Attached to NVMe over Fabrics controller at 
10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:43:25.118 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:43:25.118 Initialization complete. Launching workers. 00:43:25.118 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 16629, failed: 0 00:43:25.118 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1295, failed to submit 15334 00:43:25.118 success 703, unsuccessful 592, failed 0 00:43:25.118 05:59:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:43:25.118 05:59:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:43:28.398 Initializing NVMe Controllers 00:43:28.398 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:43:28.398 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:43:28.398 Initialization complete. Launching workers. 00:43:28.398 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8510, failed: 0 00:43:28.398 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1227, failed to submit 7283 00:43:28.398 success 333, unsuccessful 894, failed 0 00:43:28.398 05:59:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:43:28.398 05:59:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:43:31.678 Initializing NVMe Controllers 00:43:31.678 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:43:31.678 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:43:31.678 Initialization complete. Launching workers. 
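The run above is the same pattern three times over: rpc_cmd provisions the target once, then rabort() assembles the transport ID field by field and re-runs the abort example at queue depths 4, 24 and 64 (the qd-64 summary follows below). Distilled from the traced commands:

  rpc=./spdk/scripts/rpc.py   # rpc_cmd in the trace; talks to /var/tmp/spdk.sock
  $rpc bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420

  target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
  for qd in 4 24 64; do
      # -q queue depth under test, -w rw -M 50: mixed 50/50 read/write, -o 4096: 4 KiB I/Os
      ./spdk/build/examples/abort -q "$qd" -w rw -M 50 -o 4096 -r "$target"
  done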
00:43:31.678 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 38461, failed: 0 00:43:31.678 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2851, failed to submit 35610 00:43:31.678 success 575, unsuccessful 2276, failed 0 00:43:31.678 05:59:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:43:31.678 05:59:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:31.678 05:59:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:43:31.678 05:59:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:31.678 05:59:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:43:31.678 05:59:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:31.678 05:59:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:43:32.610 05:59:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:32.610 05:59:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 650302 00:43:32.610 05:59:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 650302 ']' 00:43:32.610 05:59:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 650302 00:43:32.610 05:59:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:43:32.610 05:59:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:43:32.610 05:59:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 650302 00:43:32.610 05:59:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:43:32.610 05:59:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:43:32.610 05:59:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 650302' 00:43:32.610 killing process with pid 650302 00:43:32.610 05:59:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 650302 00:43:32.610 05:59:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 650302 00:43:32.610 00:43:32.610 real 0m14.154s 00:43:32.610 user 0m54.146s 00:43:32.610 sys 0m2.366s 00:43:32.610 05:59:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:32.610 05:59:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:43:32.610 ************************************ 00:43:32.610 END TEST spdk_target_abort 00:43:32.610 ************************************ 00:43:32.610 05:59:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:43:32.610 05:59:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:43:32.610 05:59:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:32.610 05:59:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:43:32.610 ************************************ 00:43:32.610 START TEST kernel_target_abort 00:43:32.610 
************************************ 00:43:32.610 05:59:32 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:43:32.610 05:59:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:43:32.610 05:59:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:43:32.610 05:59:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:43:32.610 05:59:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:43:32.610 05:59:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:43:32.610 05:59:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:43:32.610 05:59:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:43:32.610 05:59:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:43:32.610 05:59:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:43:32.610 05:59:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:43:32.610 05:59:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:43:32.610 05:59:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:43:32.610 05:59:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:43:32.610 05:59:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:43:32.610 05:59:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:43:32.610 05:59:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:43:32.610 05:59:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:43:32.610 05:59:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:43:32.610 05:59:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:43:32.610 05:59:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:43:32.869 05:59:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:43:32.869 05:59:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:43:35.403 Waiting for block devices as requested 00:43:35.403 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:43:35.661 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:43:35.661 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:43:35.661 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:43:35.661 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:43:35.920 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:43:35.920 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:43:35.921 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:43:36.180 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:43:36.180 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:43:36.180 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:43:36.439 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:43:36.439 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:43:36.439 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:43:36.439 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:43:36.698 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:43:36.698 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:43:36.698 05:59:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:43:36.698 05:59:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:43:36.698 05:59:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:43:36.698 05:59:36 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:43:36.698 05:59:36 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:43:36.698 05:59:36 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:43:36.698 05:59:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:43:36.698 05:59:36 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:43:36.698 05:59:36 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:43:36.957 No valid GPT data, bailing 00:43:36.957 05:59:36 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:43:36.957 05:59:36 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:43:36.957 05:59:36 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:43:36.957 05:59:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:43:36.957 05:59:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:43:36.957 05:59:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:43:36.957 05:59:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:43:36.957 05:59:36 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:43:36.957 05:59:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:43:36.957 05:59:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:43:36.957 05:59:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:43:36.957 05:59:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:43:36.957 05:59:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:43:36.957 05:59:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:43:36.957 05:59:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:43:36.957 05:59:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:43:36.957 05:59:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:43:36.957 05:59:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:43:36.957 00:43:36.957 Discovery Log Number of Records 2, Generation counter 2 00:43:36.957 =====Discovery Log Entry 0====== 00:43:36.957 trtype: tcp 00:43:36.957 adrfam: ipv4 00:43:36.957 subtype: current discovery subsystem 00:43:36.957 treq: not specified, sq flow control disable supported 00:43:36.957 portid: 1 00:43:36.957 trsvcid: 4420 00:43:36.957 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:43:36.957 traddr: 10.0.0.1 00:43:36.957 eflags: none 00:43:36.957 sectype: none 00:43:36.958 =====Discovery Log Entry 1====== 00:43:36.958 trtype: tcp 00:43:36.958 adrfam: ipv4 00:43:36.958 subtype: nvme subsystem 00:43:36.958 treq: not specified, sq flow control disable supported 00:43:36.958 portid: 1 00:43:36.958 trsvcid: 4420 00:43:36.958 subnqn: nqn.2016-06.io.spdk:testnqn 00:43:36.958 traddr: 10.0.0.1 00:43:36.958 eflags: none 00:43:36.958 sectype: none 00:43:36.958 05:59:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:43:36.958 05:59:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:43:36.958 05:59:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:43:36.958 05:59:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:43:36.958 05:59:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:43:36.958 05:59:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:43:36.958 05:59:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:43:36.958 05:59:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:43:36.958 05:59:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:43:36.958 05:59:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:43:36.958 05:59:36 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:43:36.958 05:59:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:43:36.958 05:59:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:43:36.958 05:59:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:43:36.958 05:59:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:43:36.958 05:59:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:43:36.958 05:59:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:43:36.958 05:59:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:43:36.958 05:59:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:43:36.958 05:59:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:43:36.958 05:59:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:43:40.238 Initializing NVMe Controllers 00:43:40.238 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:43:40.238 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:43:40.238 Initialization complete. Launching workers. 00:43:40.238 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 95848, failed: 0 00:43:40.238 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 95848, failed to submit 0 00:43:40.238 success 0, unsuccessful 95848, failed 0 00:43:40.238 05:59:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:43:40.238 05:59:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:43:43.518 Initializing NVMe Controllers 00:43:43.518 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:43:43.518 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:43:43.518 Initialization complete. Launching workers. 
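The kernel-target counterpart traced just above is pure configfs: configure_kernel_target creates the subsystem, namespace and TCP port, links them together, and nvme discover then reports both the discovery subsystem and testnqn. xtrace does not show redirection targets, so the attribute file names below are the standard nvmet ones and should be read as assumptions:

  modprobe nvmet
  subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  port=/sys/kernel/config/nvmet/ports/1
  mkdir "$subsys" "$subsys/namespaces/1" "$port"
  echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_serial"    # serial attr assumed
  echo 1             > "$subsys/attr_allow_any_host"
  echo /dev/nvme0n1  > "$subsys/namespaces/1/device_path"
  echo 1             > "$subsys/namespaces/1/enable"
  echo 10.0.0.1      > "$port/addr_traddr"
  echo tcp           > "$port/addr_trtype"
  echo 4420          > "$port/addr_trsvcid"
  echo ipv4          > "$port/addr_adrfam"
  ln -s "$subsys" "$port/subsystems/"
  nvme discover -t tcp -a 10.0.0.1 -s 4420   # should list 2 records, as in the discovery log above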
00:43:43.518 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 147958, failed: 0 00:43:43.518 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 37126, failed to submit 110832 00:43:43.518 success 0, unsuccessful 37126, failed 0 00:43:43.518 05:59:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:43:43.518 05:59:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:43:46.799 Initializing NVMe Controllers 00:43:46.799 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:43:46.799 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:43:46.799 Initialization complete. Launching workers. 00:43:46.799 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 141822, failed: 0 00:43:46.799 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 35518, failed to submit 106304 00:43:46.799 success 0, unsuccessful 35518, failed 0 00:43:46.799 05:59:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:43:46.799 05:59:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:43:46.799 05:59:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:43:46.799 05:59:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:43:46.799 05:59:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:43:46.799 05:59:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:43:46.799 05:59:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:43:46.799 05:59:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:43:46.799 05:59:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:43:46.799 05:59:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:43:49.334 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:43:49.334 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:43:49.334 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:43:49.334 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:43:49.334 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:43:49.334 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:43:49.334 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:43:49.334 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:43:49.334 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:43:49.334 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:43:49.334 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:43:49.334 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:43:49.334 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:43:49.334 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:43:49.334 0000:80:04.1 (8086 2021): 
ioatdma -> vfio-pci 00:43:49.334 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:43:50.270 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:43:50.270 00:43:50.270 real 0m17.459s 00:43:50.270 user 0m9.166s 00:43:50.270 sys 0m4.991s 00:43:50.270 05:59:50 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:50.270 05:59:50 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:43:50.270 ************************************ 00:43:50.270 END TEST kernel_target_abort 00:43:50.270 ************************************ 00:43:50.270 05:59:50 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:43:50.270 05:59:50 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:43:50.270 05:59:50 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:43:50.270 05:59:50 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:43:50.270 05:59:50 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:43:50.270 05:59:50 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:43:50.270 05:59:50 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:43:50.270 05:59:50 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:43:50.270 rmmod nvme_tcp 00:43:50.270 rmmod nvme_fabrics 00:43:50.270 rmmod nvme_keyring 00:43:50.270 05:59:50 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:43:50.270 05:59:50 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:43:50.270 05:59:50 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:43:50.270 05:59:50 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 650302 ']' 00:43:50.270 05:59:50 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 650302 00:43:50.270 05:59:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 650302 ']' 00:43:50.270 05:59:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 650302 00:43:50.270 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (650302) - No such process 00:43:50.270 05:59:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 650302 is not found' 00:43:50.270 Process with pid 650302 is not found 00:43:50.270 05:59:50 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:43:50.270 05:59:50 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:43:52.805 Waiting for block devices as requested 00:43:53.064 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:43:53.064 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:43:53.064 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:43:53.323 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:43:53.323 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:43:53.323 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:43:53.582 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:43:53.582 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:43:53.582 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:43:53.841 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:43:53.841 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:43:53.841 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:43:53.841 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:43:54.100 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:43:54.100 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:43:54.100 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:43:54.359 
0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:43:54.359 05:59:54 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:43:54.359 05:59:54 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:43:54.359 05:59:54 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:43:54.359 05:59:54 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:43:54.359 05:59:54 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:43:54.359 05:59:54 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:43:54.359 05:59:54 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:43:54.359 05:59:54 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:43:54.359 05:59:54 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:54.359 05:59:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:43:54.359 05:59:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:56.894 05:59:56 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:43:56.894 00:43:56.894 real 0m48.178s 00:43:56.894 user 1m7.646s 00:43:56.894 sys 0m15.996s 00:43:56.894 05:59:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:56.894 05:59:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:43:56.894 ************************************ 00:43:56.894 END TEST nvmf_abort_qd_sizes 00:43:56.894 ************************************ 00:43:56.894 05:59:56 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:43:56.894 05:59:56 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:43:56.894 05:59:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:56.894 05:59:56 -- common/autotest_common.sh@10 -- # set +x 00:43:56.894 ************************************ 00:43:56.894 START TEST keyring_file 00:43:56.894 ************************************ 00:43:56.894 05:59:56 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:43:56.894 * Looking for test storage... 
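Before keyring_file gets going, note the teardown that closed nvmf_abort_qd_sizes above: clean_kernel_target disables the namespace, removes the configfs tree in reverse creation order, and unloads the nvmet modules, after which setup.sh rebinds the devices. A sketch of that unwind; the enable path is assumed (hidden by xtrace), while the rm/rmdir targets are verbatim from the trace:

  echo 0 > /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/enable
  rm -f  /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
  rmdir  /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
  rmdir  /sys/kernel/config/nvmet/ports/1
  rmdir  /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  modprobe -r nvmet_tcp nvmet
  modprobe -v -r nvme-tcp   # host side; also drops nvme_fabrics/nvme_keyring, per the rmmod lines above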
00:43:56.894 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:43:56.895 05:59:56 keyring_file -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:43:56.895 05:59:56 keyring_file -- common/autotest_common.sh@1711 -- # lcov --version 00:43:56.895 05:59:56 keyring_file -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:43:56.895 05:59:56 keyring_file -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:43:56.895 05:59:56 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:43:56.895 05:59:56 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:43:56.895 05:59:56 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:43:56.895 05:59:56 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:43:56.895 05:59:56 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:43:56.895 05:59:56 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:43:56.895 05:59:56 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:43:56.895 05:59:56 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:43:56.895 05:59:56 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:43:56.895 05:59:56 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:43:56.895 05:59:56 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:43:56.895 05:59:56 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:43:56.895 05:59:56 keyring_file -- scripts/common.sh@345 -- # : 1 00:43:56.895 05:59:56 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:43:56.895 05:59:56 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:43:56.895 05:59:56 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:43:56.895 05:59:56 keyring_file -- scripts/common.sh@353 -- # local d=1 00:43:56.895 05:59:56 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:43:56.895 05:59:56 keyring_file -- scripts/common.sh@355 -- # echo 1 00:43:56.895 05:59:56 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:43:56.895 05:59:56 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:43:56.895 05:59:56 keyring_file -- scripts/common.sh@353 -- # local d=2 00:43:56.895 05:59:56 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:43:56.895 05:59:56 keyring_file -- scripts/common.sh@355 -- # echo 2 00:43:56.895 05:59:56 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:43:56.895 05:59:56 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:43:56.895 05:59:56 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:43:56.895 05:59:56 keyring_file -- scripts/common.sh@368 -- # return 0 00:43:56.895 05:59:56 keyring_file -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:56.895 05:59:56 keyring_file -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:43:56.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:56.895 --rc genhtml_branch_coverage=1 00:43:56.895 --rc genhtml_function_coverage=1 00:43:56.895 --rc genhtml_legend=1 00:43:56.895 --rc geninfo_all_blocks=1 00:43:56.895 --rc geninfo_unexecuted_blocks=1 00:43:56.895 00:43:56.895 ' 00:43:56.895 05:59:56 keyring_file -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:43:56.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:56.895 --rc genhtml_branch_coverage=1 00:43:56.895 --rc genhtml_function_coverage=1 00:43:56.895 --rc genhtml_legend=1 00:43:56.895 --rc geninfo_all_blocks=1 
00:43:56.895 --rc geninfo_unexecuted_blocks=1 00:43:56.895 00:43:56.895 ' 00:43:56.895 05:59:56 keyring_file -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:43:56.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:56.895 --rc genhtml_branch_coverage=1 00:43:56.895 --rc genhtml_function_coverage=1 00:43:56.895 --rc genhtml_legend=1 00:43:56.895 --rc geninfo_all_blocks=1 00:43:56.895 --rc geninfo_unexecuted_blocks=1 00:43:56.895 00:43:56.895 ' 00:43:56.895 05:59:56 keyring_file -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:43:56.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:56.895 --rc genhtml_branch_coverage=1 00:43:56.895 --rc genhtml_function_coverage=1 00:43:56.895 --rc genhtml_legend=1 00:43:56.895 --rc geninfo_all_blocks=1 00:43:56.895 --rc geninfo_unexecuted_blocks=1 00:43:56.895 00:43:56.895 ' 00:43:56.895 05:59:56 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:43:56.895 05:59:56 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:43:56.895 05:59:56 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:43:56.895 05:59:56 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:56.895 05:59:56 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:56.895 05:59:56 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:56.895 05:59:56 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:56.895 05:59:56 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:43:56.895 05:59:56 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:56.895 05:59:56 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:56.895 05:59:56 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:56.895 05:59:56 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:56.895 05:59:56 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:56.895 05:59:56 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:43:56.895 05:59:56 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:43:56.895 05:59:56 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:56.895 05:59:56 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:56.895 05:59:56 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:43:56.895 05:59:56 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:56.895 05:59:56 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:56.895 05:59:56 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:43:56.895 05:59:56 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:56.895 05:59:56 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:56.895 05:59:56 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:56.895 05:59:56 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:56.895 05:59:56 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:56.895 05:59:56 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:56.895 05:59:56 keyring_file -- paths/export.sh@5 -- # export PATH 00:43:56.895 05:59:56 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:56.895 05:59:56 keyring_file -- nvmf/common.sh@51 -- # : 0 00:43:56.895 05:59:56 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:43:56.895 05:59:56 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:43:56.895 05:59:56 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:56.895 05:59:56 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:56.895 05:59:56 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:56.895 05:59:56 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:43:56.895 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:43:56.895 05:59:56 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:43:56.895 05:59:56 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:43:56.895 05:59:56 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:43:56.895 05:59:56 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:43:56.895 05:59:56 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:43:56.895 05:59:56 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:43:56.895 05:59:56 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:43:56.895 05:59:56 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:43:56.895 05:59:56 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:43:56.895 05:59:56 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:43:56.895 05:59:56 keyring_file -- keyring/common.sh@15 -- # local name key digest path 
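prep_key, which the trace enters here, writes each hex key to a mktemp file in the NVMe TLS PSK interchange format and locks it down to 0600. xtrace collapses the generator to 'python -', so the reconstruction below (base64 of the key bytes plus their little-endian CRC32, digest field 00) is an assumption based on the interchange format rather than a verbatim copy:

  key=00112233445566778899aabbccddeeff
  path=$(mktemp)    # e.g. /tmp/tmp.OHLfWoflCa in this run
  python3 -c 'import base64, sys, zlib; raw = bytes.fromhex(sys.argv[1]); crc = zlib.crc32(raw).to_bytes(4, "little"); print("NVMeTLSkey-1:00:%s:" % base64.b64encode(raw + crc).decode(), end="")' "$key" > "$path"
  chmod 0600 "$path"    # key files are expected to be private to the owner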
00:43:56.895 05:59:56 keyring_file -- keyring/common.sh@17 -- # name=key0 00:43:56.895 05:59:56 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:43:56.895 05:59:56 keyring_file -- keyring/common.sh@17 -- # digest=0 00:43:56.895 05:59:56 keyring_file -- keyring/common.sh@18 -- # mktemp 00:43:56.895 05:59:56 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.OHLfWoflCa 00:43:56.895 05:59:56 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:43:56.895 05:59:56 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:43:56.895 05:59:56 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:43:56.895 05:59:56 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:43:56.895 05:59:56 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:43:56.895 05:59:56 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:43:56.895 05:59:56 keyring_file -- nvmf/common.sh@733 -- # python - 00:43:56.895 05:59:56 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.OHLfWoflCa 00:43:56.895 05:59:56 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.OHLfWoflCa 00:43:56.895 05:59:56 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.OHLfWoflCa 00:43:56.895 05:59:56 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:43:56.895 05:59:56 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:43:56.895 05:59:56 keyring_file -- keyring/common.sh@17 -- # name=key1 00:43:56.895 05:59:56 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:43:56.895 05:59:56 keyring_file -- keyring/common.sh@17 -- # digest=0 00:43:56.895 05:59:56 keyring_file -- keyring/common.sh@18 -- # mktemp 00:43:56.895 05:59:56 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.WcJwKUmC02 00:43:56.895 05:59:56 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:43:56.896 05:59:56 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:43:56.896 05:59:56 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:43:56.896 05:59:56 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:43:56.896 05:59:56 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:43:56.896 05:59:56 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:43:56.896 05:59:56 keyring_file -- nvmf/common.sh@733 -- # python - 00:43:56.896 05:59:56 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.WcJwKUmC02 00:43:56.896 05:59:56 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.WcJwKUmC02 00:43:56.896 05:59:56 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.WcJwKUmC02 00:43:56.896 05:59:56 keyring_file -- keyring/file.sh@30 -- # tgtpid=658887 00:43:56.896 05:59:56 keyring_file -- keyring/file.sh@32 -- # waitforlisten 658887 00:43:56.896 05:59:56 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:43:56.896 05:59:56 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 658887 ']' 00:43:56.896 05:59:56 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:56.896 05:59:56 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:43:56.896 05:59:56 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:56.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:56.896 05:59:56 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:43:56.896 05:59:56 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:43:56.896 [2024-12-13 05:59:56.775484] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:43:56.896 [2024-12-13 05:59:56.775528] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid658887 ] 00:43:56.896 [2024-12-13 05:59:56.848294] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:56.896 [2024-12-13 05:59:56.870222] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:43:57.157 05:59:57 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:43:57.157 05:59:57 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:43:57.157 05:59:57 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:43:57.157 05:59:57 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:57.157 05:59:57 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:43:57.157 [2024-12-13 05:59:57.077117] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:57.157 null0 00:43:57.157 [2024-12-13 05:59:57.109166] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:43:57.157 [2024-12-13 05:59:57.109457] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:43:57.157 05:59:57 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:57.157 05:59:57 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:43:57.157 05:59:57 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:43:57.157 05:59:57 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:43:57.157 05:59:57 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:43:57.157 05:59:57 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:43:57.157 05:59:57 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:43:57.157 05:59:57 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:43:57.157 05:59:57 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:43:57.157 05:59:57 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:57.157 05:59:57 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:43:57.157 [2024-12-13 05:59:57.141241] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:43:57.157 request: 00:43:57.157 { 00:43:57.157 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:43:57.157 "secure_channel": false, 00:43:57.157 "listen_address": { 00:43:57.157 "trtype": "tcp", 00:43:57.157 "traddr": "127.0.0.1", 00:43:57.157 "trsvcid": "4420" 00:43:57.157 }, 00:43:57.157 "method": "nvmf_subsystem_add_listener", 00:43:57.157 "req_id": 1 00:43:57.157 } 00:43:57.157 Got JSON-RPC error response 00:43:57.157 response: 00:43:57.157 { 00:43:57.157 "code": 
-32602, 00:43:57.157 "message": "Invalid parameters" 00:43:57.157 } 00:43:57.157 05:59:57 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:43:57.157 05:59:57 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:43:57.157 05:59:57 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:43:57.157 05:59:57 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:43:57.157 05:59:57 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:43:57.157 05:59:57 keyring_file -- keyring/file.sh@47 -- # bperfpid=658893 00:43:57.157 05:59:57 keyring_file -- keyring/file.sh@49 -- # waitforlisten 658893 /var/tmp/bperf.sock 00:43:57.157 05:59:57 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:43:57.157 05:59:57 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 658893 ']' 00:43:57.157 05:59:57 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:43:57.157 05:59:57 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:43:57.157 05:59:57 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:43:57.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:43:57.157 05:59:57 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:43:57.157 05:59:57 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:43:57.454 [2024-12-13 05:59:57.196878] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:43:57.454 [2024-12-13 05:59:57.196921] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid658893 ] 00:43:57.454 [2024-12-13 05:59:57.271747] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:57.454 [2024-12-13 05:59:57.294207] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:43:57.454 05:59:57 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:43:57.454 05:59:57 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:43:57.454 05:59:57 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.OHLfWoflCa 00:43:57.454 05:59:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.OHLfWoflCa 00:43:57.737 05:59:57 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.WcJwKUmC02 00:43:57.737 05:59:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.WcJwKUmC02 00:43:58.015 05:59:57 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:43:58.015 05:59:57 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:43:58.015 05:59:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:58.015 05:59:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:43:58.015 05:59:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:58.015 
05:59:57 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.OHLfWoflCa == \/\t\m\p\/\t\m\p\.\O\H\L\f\W\o\f\l\C\a ]] 00:43:58.015 05:59:57 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:43:58.015 05:59:57 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:43:58.015 05:59:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:58.015 05:59:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:43:58.015 05:59:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:58.293 05:59:58 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.WcJwKUmC02 == \/\t\m\p\/\t\m\p\.\W\c\J\w\K\U\m\C\0\2 ]] 00:43:58.293 05:59:58 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:43:58.293 05:59:58 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:43:58.293 05:59:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:58.293 05:59:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:58.293 05:59:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:43:58.293 05:59:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:58.572 05:59:58 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:43:58.572 05:59:58 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:43:58.572 05:59:58 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:43:58.572 05:59:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:58.572 05:59:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:58.572 05:59:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:43:58.572 05:59:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:58.572 05:59:58 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:43:58.572 05:59:58 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:43:58.572 05:59:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:43:58.839 [2024-12-13 05:59:58.711505] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:43:58.839 nvme0n1 00:43:58.839 05:59:58 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:43:58.839 05:59:58 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:43:58.839 05:59:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:58.839 05:59:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:58.839 05:59:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:43:58.839 05:59:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:59.097 05:59:58 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:43:59.097 05:59:58 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:43:59.097 05:59:58 keyring_file -- 
keyring/common.sh@12 -- # get_key key1 00:43:59.097 05:59:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:59.097 05:59:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:59.097 05:59:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:43:59.097 05:59:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:59.355 05:59:59 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:43:59.355 05:59:59 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:43:59.355 Running I/O for 1 seconds... 00:44:00.289 19111.00 IOPS, 74.65 MiB/s 00:44:00.289 Latency(us) 00:44:00.289 [2024-12-13T05:00:00.304Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:00.289 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:44:00.289 nvme0n1 : 1.00 19158.17 74.84 0.00 0.00 6669.18 4181.82 14168.26 00:44:00.289 [2024-12-13T05:00:00.304Z] =================================================================================================================== 00:44:00.289 [2024-12-13T05:00:00.304Z] Total : 19158.17 74.84 0.00 0.00 6669.18 4181.82 14168.26 00:44:00.289 { 00:44:00.289 "results": [ 00:44:00.289 { 00:44:00.289 "job": "nvme0n1", 00:44:00.289 "core_mask": "0x2", 00:44:00.289 "workload": "randrw", 00:44:00.289 "percentage": 50, 00:44:00.289 "status": "finished", 00:44:00.289 "queue_depth": 128, 00:44:00.289 "io_size": 4096, 00:44:00.289 "runtime": 1.004219, 00:44:00.289 "iops": 19158.171673708624, 00:44:00.289 "mibps": 74.83660810042431, 00:44:00.289 "io_failed": 0, 00:44:00.289 "io_timeout": 0, 00:44:00.289 "avg_latency_us": 6669.17800677691, 00:44:00.289 "min_latency_us": 4181.820952380953, 00:44:00.289 "max_latency_us": 14168.259047619047 00:44:00.289 } 00:44:00.289 ], 00:44:00.289 "core_count": 1 00:44:00.289 } 00:44:00.547 06:00:00 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:44:00.547 06:00:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:44:00.547 06:00:00 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:44:00.547 06:00:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:00.547 06:00:00 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:44:00.547 06:00:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:00.547 06:00:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:00.547 06:00:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:00.805 06:00:00 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:44:00.805 06:00:00 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:44:00.805 06:00:00 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:44:00.805 06:00:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:00.805 06:00:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:00.805 06:00:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:44:00.805 06:00:00 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:01.063 06:00:00 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:44:01.063 06:00:00 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:44:01.063 06:00:00 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:44:01.063 06:00:00 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:44:01.063 06:00:00 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:44:01.063 06:00:00 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:01.063 06:00:00 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:44:01.063 06:00:00 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:01.063 06:00:00 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:44:01.063 06:00:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:44:01.321 [2024-12-13 06:00:01.081242] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:44:01.321 [2024-12-13 06:00:01.081825] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5c5950 (107): Transport endpoint is not connected 00:44:01.321 [2024-12-13 06:00:01.082818] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5c5950 (9): Bad file descriptor 00:44:01.321 [2024-12-13 06:00:01.083819] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:44:01.321 [2024-12-13 06:00:01.083829] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:44:01.321 [2024-12-13 06:00:01.083837] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:44:01.321 [2024-12-13 06:00:01.083850] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
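The NOT wrapper used at keyring/file.sh@70 inverts a command's exit status, so an expected failure passes the test; attaching with key1, whose contents are not the PSK the target was set up with, must fail, as the request/response dump below confirms. A minimal sketch of the same negative test, reusing the bperf_cmd shorthand from the earlier annotation:

# Expect failure: key1 does not hold the PSK cnode0 was configured with,
# so the TLS handshake breaks and the attach RPC returns -5 (I/O error).
if bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 \
        --psk key1; then
    echo "FAIL: attach with the wrong PSK unexpectedly succeeded" >&2
    exit 1
fi
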
00:44:01.321 request: 00:44:01.321 { 00:44:01.321 "name": "nvme0", 00:44:01.321 "trtype": "tcp", 00:44:01.321 "traddr": "127.0.0.1", 00:44:01.321 "adrfam": "ipv4", 00:44:01.321 "trsvcid": "4420", 00:44:01.321 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:01.321 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:44:01.321 "prchk_reftag": false, 00:44:01.321 "prchk_guard": false, 00:44:01.321 "hdgst": false, 00:44:01.321 "ddgst": false, 00:44:01.321 "psk": "key1", 00:44:01.321 "allow_unrecognized_csi": false, 00:44:01.321 "method": "bdev_nvme_attach_controller", 00:44:01.321 "req_id": 1 00:44:01.321 } 00:44:01.321 Got JSON-RPC error response 00:44:01.321 response: 00:44:01.321 { 00:44:01.321 "code": -5, 00:44:01.321 "message": "Input/output error" 00:44:01.321 } 00:44:01.321 06:00:01 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:44:01.321 06:00:01 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:44:01.321 06:00:01 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:44:01.321 06:00:01 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:44:01.321 06:00:01 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:44:01.321 06:00:01 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:44:01.321 06:00:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:01.321 06:00:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:01.321 06:00:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:01.321 06:00:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:01.321 06:00:01 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:44:01.321 06:00:01 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:44:01.321 06:00:01 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:44:01.321 06:00:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:01.321 06:00:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:44:01.321 06:00:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:01.321 06:00:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:01.579 06:00:01 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:44:01.579 06:00:01 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:44:01.579 06:00:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:44:01.836 06:00:01 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:44:01.836 06:00:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:44:02.094 06:00:01 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:44:02.094 06:00:01 keyring_file -- keyring/file.sh@78 -- # jq length 00:44:02.094 06:00:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:02.094 06:00:02 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:44:02.094 06:00:02 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.OHLfWoflCa 00:44:02.094 06:00:02 keyring_file -- keyring/file.sh@82 -- # 
NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.OHLfWoflCa 00:44:02.094 06:00:02 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:44:02.094 06:00:02 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.OHLfWoflCa 00:44:02.094 06:00:02 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:44:02.094 06:00:02 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:02.094 06:00:02 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:44:02.094 06:00:02 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:02.094 06:00:02 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.OHLfWoflCa 00:44:02.094 06:00:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.OHLfWoflCa 00:44:02.352 [2024-12-13 06:00:02.221875] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.OHLfWoflCa': 0100660 00:44:02.352 [2024-12-13 06:00:02.221900] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:44:02.352 request: 00:44:02.352 { 00:44:02.352 "name": "key0", 00:44:02.352 "path": "/tmp/tmp.OHLfWoflCa", 00:44:02.352 "method": "keyring_file_add_key", 00:44:02.352 "req_id": 1 00:44:02.352 } 00:44:02.352 Got JSON-RPC error response 00:44:02.352 response: 00:44:02.352 { 00:44:02.352 "code": -1, 00:44:02.352 "message": "Operation not permitted" 00:44:02.352 } 00:44:02.352 06:00:02 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:44:02.352 06:00:02 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:44:02.352 06:00:02 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:44:02.352 06:00:02 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:44:02.352 06:00:02 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.OHLfWoflCa 00:44:02.352 06:00:02 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.OHLfWoflCa 00:44:02.352 06:00:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.OHLfWoflCa 00:44:02.610 06:00:02 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.OHLfWoflCa 00:44:02.610 06:00:02 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:44:02.610 06:00:02 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:44:02.610 06:00:02 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:02.610 06:00:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:02.610 06:00:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:02.610 06:00:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:02.868 06:00:02 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:44:02.868 06:00:02 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:02.868 06:00:02 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:44:02.868 06:00:02 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:02.868 06:00:02 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:44:02.868 06:00:02 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:02.868 06:00:02 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:44:02.868 06:00:02 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:02.868 06:00:02 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:02.868 06:00:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:02.868 [2024-12-13 06:00:02.831484] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.OHLfWoflCa': No such file or directory 00:44:02.868 [2024-12-13 06:00:02.831508] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:44:02.868 [2024-12-13 06:00:02.831523] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:44:02.868 [2024-12-13 06:00:02.831530] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:44:02.868 [2024-12-13 06:00:02.831568] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:44:02.868 [2024-12-13 06:00:02.831585] bdev_nvme.c:6801:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:44:02.868 request: 00:44:02.868 { 00:44:02.868 "name": "nvme0", 00:44:02.868 "trtype": "tcp", 00:44:02.868 "traddr": "127.0.0.1", 00:44:02.868 "adrfam": "ipv4", 00:44:02.868 "trsvcid": "4420", 00:44:02.868 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:02.868 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:44:02.868 "prchk_reftag": false, 00:44:02.868 "prchk_guard": false, 00:44:02.868 "hdgst": false, 00:44:02.868 "ddgst": false, 00:44:02.868 "psk": "key0", 00:44:02.868 "allow_unrecognized_csi": false, 00:44:02.868 "method": "bdev_nvme_attach_controller", 00:44:02.868 "req_id": 1 00:44:02.868 } 00:44:02.868 Got JSON-RPC error response 00:44:02.868 response: 00:44:02.868 { 00:44:02.868 "code": -19, 00:44:02.868 "message": "No such device" 00:44:02.868 } 00:44:02.868 06:00:02 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:44:02.868 06:00:02 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:44:02.868 06:00:02 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:44:02.868 06:00:02 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:44:02.868 06:00:02 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:44:02.868 06:00:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:44:03.126 06:00:03 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:44:03.126 06:00:03 keyring_file -- keyring/common.sh@15 -- # local name 
key digest path 00:44:03.126 06:00:03 keyring_file -- keyring/common.sh@17 -- # name=key0 00:44:03.126 06:00:03 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:44:03.126 06:00:03 keyring_file -- keyring/common.sh@17 -- # digest=0 00:44:03.126 06:00:03 keyring_file -- keyring/common.sh@18 -- # mktemp 00:44:03.126 06:00:03 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.5FyHJEHZm6 00:44:03.126 06:00:03 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:44:03.126 06:00:03 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:44:03.126 06:00:03 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:44:03.126 06:00:03 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:44:03.126 06:00:03 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:44:03.126 06:00:03 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:44:03.126 06:00:03 keyring_file -- nvmf/common.sh@733 -- # python - 00:44:03.126 06:00:03 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.5FyHJEHZm6 00:44:03.126 06:00:03 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.5FyHJEHZm6 00:44:03.126 06:00:03 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.5FyHJEHZm6 00:44:03.126 06:00:03 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.5FyHJEHZm6 00:44:03.126 06:00:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.5FyHJEHZm6 00:44:03.384 06:00:03 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:03.384 06:00:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:03.641 nvme0n1 00:44:03.641 06:00:03 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:44:03.641 06:00:03 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:44:03.641 06:00:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:03.641 06:00:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:03.641 06:00:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:03.641 06:00:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:03.898 06:00:03 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:44:03.898 06:00:03 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:44:03.898 06:00:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:44:04.155 06:00:03 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:44:04.155 06:00:03 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:44:04.155 06:00:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:04.155 06:00:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:04.155 06:00:03 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:04.155 06:00:04 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:44:04.155 06:00:04 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:44:04.155 06:00:04 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:44:04.155 06:00:04 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:04.155 06:00:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:04.155 06:00:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:04.155 06:00:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:04.413 06:00:04 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:44:04.413 06:00:04 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:44:04.413 06:00:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:44:04.670 06:00:04 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:44:04.670 06:00:04 keyring_file -- keyring/file.sh@105 -- # jq length 00:44:04.670 06:00:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:04.928 06:00:04 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:44:04.928 06:00:04 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.5FyHJEHZm6 00:44:04.928 06:00:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.5FyHJEHZm6 00:44:05.185 06:00:04 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.WcJwKUmC02 00:44:05.185 06:00:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.WcJwKUmC02 00:44:05.185 06:00:05 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:05.185 06:00:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:05.442 nvme0n1 00:44:05.442 06:00:05 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:44:05.442 06:00:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:44:05.700 06:00:05 keyring_file -- keyring/file.sh@113 -- # config='{ 00:44:05.700 "subsystems": [ 00:44:05.700 { 00:44:05.700 "subsystem": "keyring", 00:44:05.700 "config": [ 00:44:05.700 { 00:44:05.700 "method": "keyring_file_add_key", 00:44:05.700 "params": { 00:44:05.700 "name": "key0", 00:44:05.700 "path": "/tmp/tmp.5FyHJEHZm6" 00:44:05.700 } 00:44:05.700 }, 00:44:05.700 { 00:44:05.700 "method": "keyring_file_add_key", 00:44:05.700 "params": { 00:44:05.700 "name": "key1", 00:44:05.700 "path": "/tmp/tmp.WcJwKUmC02" 00:44:05.700 } 00:44:05.700 } 00:44:05.700 ] 00:44:05.700 
}, 00:44:05.700 { 00:44:05.700 "subsystem": "iobuf", 00:44:05.700 "config": [ 00:44:05.700 { 00:44:05.700 "method": "iobuf_set_options", 00:44:05.700 "params": { 00:44:05.700 "small_pool_count": 8192, 00:44:05.700 "large_pool_count": 1024, 00:44:05.700 "small_bufsize": 8192, 00:44:05.700 "large_bufsize": 135168, 00:44:05.700 "enable_numa": false 00:44:05.700 } 00:44:05.700 } 00:44:05.700 ] 00:44:05.700 }, 00:44:05.700 { 00:44:05.700 "subsystem": "sock", 00:44:05.700 "config": [ 00:44:05.700 { 00:44:05.700 "method": "sock_set_default_impl", 00:44:05.700 "params": { 00:44:05.700 "impl_name": "posix" 00:44:05.700 } 00:44:05.700 }, 00:44:05.700 { 00:44:05.700 "method": "sock_impl_set_options", 00:44:05.700 "params": { 00:44:05.700 "impl_name": "ssl", 00:44:05.700 "recv_buf_size": 4096, 00:44:05.700 "send_buf_size": 4096, 00:44:05.700 "enable_recv_pipe": true, 00:44:05.700 "enable_quickack": false, 00:44:05.700 "enable_placement_id": 0, 00:44:05.700 "enable_zerocopy_send_server": true, 00:44:05.700 "enable_zerocopy_send_client": false, 00:44:05.700 "zerocopy_threshold": 0, 00:44:05.700 "tls_version": 0, 00:44:05.700 "enable_ktls": false 00:44:05.700 } 00:44:05.700 }, 00:44:05.700 { 00:44:05.700 "method": "sock_impl_set_options", 00:44:05.700 "params": { 00:44:05.700 "impl_name": "posix", 00:44:05.700 "recv_buf_size": 2097152, 00:44:05.700 "send_buf_size": 2097152, 00:44:05.700 "enable_recv_pipe": true, 00:44:05.700 "enable_quickack": false, 00:44:05.700 "enable_placement_id": 0, 00:44:05.700 "enable_zerocopy_send_server": true, 00:44:05.700 "enable_zerocopy_send_client": false, 00:44:05.700 "zerocopy_threshold": 0, 00:44:05.700 "tls_version": 0, 00:44:05.700 "enable_ktls": false 00:44:05.700 } 00:44:05.700 } 00:44:05.700 ] 00:44:05.700 }, 00:44:05.700 { 00:44:05.700 "subsystem": "vmd", 00:44:05.700 "config": [] 00:44:05.700 }, 00:44:05.700 { 00:44:05.700 "subsystem": "accel", 00:44:05.700 "config": [ 00:44:05.700 { 00:44:05.700 "method": "accel_set_options", 00:44:05.700 "params": { 00:44:05.700 "small_cache_size": 128, 00:44:05.700 "large_cache_size": 16, 00:44:05.700 "task_count": 2048, 00:44:05.700 "sequence_count": 2048, 00:44:05.700 "buf_count": 2048 00:44:05.700 } 00:44:05.700 } 00:44:05.700 ] 00:44:05.700 }, 00:44:05.700 { 00:44:05.700 "subsystem": "bdev", 00:44:05.700 "config": [ 00:44:05.700 { 00:44:05.700 "method": "bdev_set_options", 00:44:05.700 "params": { 00:44:05.700 "bdev_io_pool_size": 65535, 00:44:05.700 "bdev_io_cache_size": 256, 00:44:05.700 "bdev_auto_examine": true, 00:44:05.700 "iobuf_small_cache_size": 128, 00:44:05.700 "iobuf_large_cache_size": 16 00:44:05.700 } 00:44:05.700 }, 00:44:05.700 { 00:44:05.700 "method": "bdev_raid_set_options", 00:44:05.700 "params": { 00:44:05.700 "process_window_size_kb": 1024, 00:44:05.700 "process_max_bandwidth_mb_sec": 0 00:44:05.700 } 00:44:05.700 }, 00:44:05.700 { 00:44:05.700 "method": "bdev_iscsi_set_options", 00:44:05.700 "params": { 00:44:05.700 "timeout_sec": 30 00:44:05.700 } 00:44:05.700 }, 00:44:05.700 { 00:44:05.700 "method": "bdev_nvme_set_options", 00:44:05.700 "params": { 00:44:05.700 "action_on_timeout": "none", 00:44:05.700 "timeout_us": 0, 00:44:05.700 "timeout_admin_us": 0, 00:44:05.700 "keep_alive_timeout_ms": 10000, 00:44:05.700 "arbitration_burst": 0, 00:44:05.700 "low_priority_weight": 0, 00:44:05.700 "medium_priority_weight": 0, 00:44:05.700 "high_priority_weight": 0, 00:44:05.700 "nvme_adminq_poll_period_us": 10000, 00:44:05.700 "nvme_ioq_poll_period_us": 0, 00:44:05.700 "io_queue_requests": 512, 00:44:05.700 
"delay_cmd_submit": true, 00:44:05.700 "transport_retry_count": 4, 00:44:05.700 "bdev_retry_count": 3, 00:44:05.700 "transport_ack_timeout": 0, 00:44:05.700 "ctrlr_loss_timeout_sec": 0, 00:44:05.700 "reconnect_delay_sec": 0, 00:44:05.700 "fast_io_fail_timeout_sec": 0, 00:44:05.700 "disable_auto_failback": false, 00:44:05.700 "generate_uuids": false, 00:44:05.700 "transport_tos": 0, 00:44:05.700 "nvme_error_stat": false, 00:44:05.700 "rdma_srq_size": 0, 00:44:05.700 "io_path_stat": false, 00:44:05.700 "allow_accel_sequence": false, 00:44:05.700 "rdma_max_cq_size": 0, 00:44:05.700 "rdma_cm_event_timeout_ms": 0, 00:44:05.701 "dhchap_digests": [ 00:44:05.701 "sha256", 00:44:05.701 "sha384", 00:44:05.701 "sha512" 00:44:05.701 ], 00:44:05.701 "dhchap_dhgroups": [ 00:44:05.701 "null", 00:44:05.701 "ffdhe2048", 00:44:05.701 "ffdhe3072", 00:44:05.701 "ffdhe4096", 00:44:05.701 "ffdhe6144", 00:44:05.701 "ffdhe8192" 00:44:05.701 ], 00:44:05.701 "rdma_umr_per_io": false 00:44:05.701 } 00:44:05.701 }, 00:44:05.701 { 00:44:05.701 "method": "bdev_nvme_attach_controller", 00:44:05.701 "params": { 00:44:05.701 "name": "nvme0", 00:44:05.701 "trtype": "TCP", 00:44:05.701 "adrfam": "IPv4", 00:44:05.701 "traddr": "127.0.0.1", 00:44:05.701 "trsvcid": "4420", 00:44:05.701 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:05.701 "prchk_reftag": false, 00:44:05.701 "prchk_guard": false, 00:44:05.701 "ctrlr_loss_timeout_sec": 0, 00:44:05.701 "reconnect_delay_sec": 0, 00:44:05.701 "fast_io_fail_timeout_sec": 0, 00:44:05.701 "psk": "key0", 00:44:05.701 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:44:05.701 "hdgst": false, 00:44:05.701 "ddgst": false, 00:44:05.701 "multipath": "multipath" 00:44:05.701 } 00:44:05.701 }, 00:44:05.701 { 00:44:05.701 "method": "bdev_nvme_set_hotplug", 00:44:05.701 "params": { 00:44:05.701 "period_us": 100000, 00:44:05.701 "enable": false 00:44:05.701 } 00:44:05.701 }, 00:44:05.701 { 00:44:05.701 "method": "bdev_wait_for_examine" 00:44:05.701 } 00:44:05.701 ] 00:44:05.701 }, 00:44:05.701 { 00:44:05.701 "subsystem": "nbd", 00:44:05.701 "config": [] 00:44:05.701 } 00:44:05.701 ] 00:44:05.701 }' 00:44:05.701 06:00:05 keyring_file -- keyring/file.sh@115 -- # killprocess 658893 00:44:05.701 06:00:05 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 658893 ']' 00:44:05.701 06:00:05 keyring_file -- common/autotest_common.sh@958 -- # kill -0 658893 00:44:05.701 06:00:05 keyring_file -- common/autotest_common.sh@959 -- # uname 00:44:05.701 06:00:05 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:05.701 06:00:05 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 658893 00:44:05.701 06:00:05 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:44:05.701 06:00:05 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:44:05.701 06:00:05 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 658893' 00:44:05.701 killing process with pid 658893 00:44:05.701 06:00:05 keyring_file -- common/autotest_common.sh@973 -- # kill 658893 00:44:05.701 Received shutdown signal, test time was about 1.000000 seconds 00:44:05.701 00:44:05.701 Latency(us) 00:44:05.701 [2024-12-13T05:00:05.716Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:05.701 [2024-12-13T05:00:05.716Z] =================================================================================================================== 00:44:05.701 [2024-12-13T05:00:05.716Z] Total : 0.00 0.00 0.00 0.00 0.00 
0.00 0.00 00:44:05.701 06:00:05 keyring_file -- common/autotest_common.sh@978 -- # wait 658893 00:44:05.959 06:00:05 keyring_file -- keyring/file.sh@118 -- # bperfpid=660507 00:44:05.959 06:00:05 keyring_file -- keyring/file.sh@120 -- # waitforlisten 660507 /var/tmp/bperf.sock 00:44:05.959 06:00:05 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 660507 ']' 00:44:05.959 06:00:05 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:44:05.959 06:00:05 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:44:05.959 06:00:05 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:44:05.959 06:00:05 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:44:05.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:44:05.959 06:00:05 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:44:05.959 06:00:05 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:44:05.959 "subsystems": [ 00:44:05.959 { 00:44:05.959 "subsystem": "keyring", 00:44:05.959 "config": [ 00:44:05.959 { 00:44:05.959 "method": "keyring_file_add_key", 00:44:05.959 "params": { 00:44:05.959 "name": "key0", 00:44:05.959 "path": "/tmp/tmp.5FyHJEHZm6" 00:44:05.959 } 00:44:05.959 }, 00:44:05.959 { 00:44:05.959 "method": "keyring_file_add_key", 00:44:05.959 "params": { 00:44:05.959 "name": "key1", 00:44:05.959 "path": "/tmp/tmp.WcJwKUmC02" 00:44:05.959 } 00:44:05.959 } 00:44:05.959 ] 00:44:05.959 }, 00:44:05.959 { 00:44:05.959 "subsystem": "iobuf", 00:44:05.959 "config": [ 00:44:05.959 { 00:44:05.959 "method": "iobuf_set_options", 00:44:05.959 "params": { 00:44:05.959 "small_pool_count": 8192, 00:44:05.959 "large_pool_count": 1024, 00:44:05.959 "small_bufsize": 8192, 00:44:05.959 "large_bufsize": 135168, 00:44:05.959 "enable_numa": false 00:44:05.959 } 00:44:05.959 } 00:44:05.959 ] 00:44:05.959 }, 00:44:05.959 { 00:44:05.959 "subsystem": "sock", 00:44:05.959 "config": [ 00:44:05.959 { 00:44:05.959 "method": "sock_set_default_impl", 00:44:05.959 "params": { 00:44:05.959 "impl_name": "posix" 00:44:05.959 } 00:44:05.959 }, 00:44:05.959 { 00:44:05.959 "method": "sock_impl_set_options", 00:44:05.959 "params": { 00:44:05.959 "impl_name": "ssl", 00:44:05.959 "recv_buf_size": 4096, 00:44:05.959 "send_buf_size": 4096, 00:44:05.959 "enable_recv_pipe": true, 00:44:05.959 "enable_quickack": false, 00:44:05.959 "enable_placement_id": 0, 00:44:05.959 "enable_zerocopy_send_server": true, 00:44:05.959 "enable_zerocopy_send_client": false, 00:44:05.959 "zerocopy_threshold": 0, 00:44:05.959 "tls_version": 0, 00:44:05.959 "enable_ktls": false 00:44:05.959 } 00:44:05.959 }, 00:44:05.959 { 00:44:05.959 "method": "sock_impl_set_options", 00:44:05.959 "params": { 00:44:05.959 "impl_name": "posix", 00:44:05.959 "recv_buf_size": 2097152, 00:44:05.959 "send_buf_size": 2097152, 00:44:05.959 "enable_recv_pipe": true, 00:44:05.959 "enable_quickack": false, 00:44:05.959 "enable_placement_id": 0, 00:44:05.959 "enable_zerocopy_send_server": true, 00:44:05.959 "enable_zerocopy_send_client": false, 00:44:05.959 "zerocopy_threshold": 0, 00:44:05.959 "tls_version": 0, 00:44:05.959 "enable_ktls": false 00:44:05.959 } 00:44:05.959 } 00:44:05.959 ] 00:44:05.959 }, 00:44:05.959 { 00:44:05.959 "subsystem": "vmd", 00:44:05.959 
"config": [] 00:44:05.959 }, 00:44:05.959 { 00:44:05.959 "subsystem": "accel", 00:44:05.959 "config": [ 00:44:05.959 { 00:44:05.959 "method": "accel_set_options", 00:44:05.959 "params": { 00:44:05.959 "small_cache_size": 128, 00:44:05.959 "large_cache_size": 16, 00:44:05.959 "task_count": 2048, 00:44:05.959 "sequence_count": 2048, 00:44:05.959 "buf_count": 2048 00:44:05.959 } 00:44:05.959 } 00:44:05.959 ] 00:44:05.959 }, 00:44:05.959 { 00:44:05.959 "subsystem": "bdev", 00:44:05.959 "config": [ 00:44:05.959 { 00:44:05.959 "method": "bdev_set_options", 00:44:05.959 "params": { 00:44:05.959 "bdev_io_pool_size": 65535, 00:44:05.959 "bdev_io_cache_size": 256, 00:44:05.959 "bdev_auto_examine": true, 00:44:05.959 "iobuf_small_cache_size": 128, 00:44:05.959 "iobuf_large_cache_size": 16 00:44:05.959 } 00:44:05.959 }, 00:44:05.959 { 00:44:05.959 "method": "bdev_raid_set_options", 00:44:05.959 "params": { 00:44:05.959 "process_window_size_kb": 1024, 00:44:05.959 "process_max_bandwidth_mb_sec": 0 00:44:05.959 } 00:44:05.959 }, 00:44:05.959 { 00:44:05.959 "method": "bdev_iscsi_set_options", 00:44:05.959 "params": { 00:44:05.959 "timeout_sec": 30 00:44:05.959 } 00:44:05.959 }, 00:44:05.959 { 00:44:05.959 "method": "bdev_nvme_set_options", 00:44:05.959 "params": { 00:44:05.959 "action_on_timeout": "none", 00:44:05.959 "timeout_us": 0, 00:44:05.959 "timeout_admin_us": 0, 00:44:05.959 "keep_alive_timeout_ms": 10000, 00:44:05.960 "arbitration_burst": 0, 00:44:05.960 "low_priority_weight": 0, 00:44:05.960 "medium_priority_weight": 0, 00:44:05.960 "high_priority_weight": 0, 00:44:05.960 "nvme_adminq_poll_period_us": 10000, 00:44:05.960 "nvme_ioq_poll_period_us": 0, 00:44:05.960 "io_queue_requests": 512, 00:44:05.960 "delay_cmd_submit": true, 00:44:05.960 "transport_retry_count": 4, 00:44:05.960 "bdev_retry_count": 3, 00:44:05.960 "transport_ack_timeout": 0, 00:44:05.960 "ctrlr_loss_timeout_sec": 0, 00:44:05.960 "reconnect_delay_sec": 0, 00:44:05.960 "fast_io_fail_timeout_sec": 0, 00:44:05.960 "disable_auto_failback": false, 00:44:05.960 "generate_uuids": false, 00:44:05.960 "transport_tos": 0, 00:44:05.960 "nvme_error_stat": false, 00:44:05.960 "rdma_srq_size": 0, 00:44:05.960 "io_path_stat": false, 00:44:05.960 "allow_accel_sequence": false, 00:44:05.960 "rdma_max_cq_size": 0, 00:44:05.960 "rdma_cm_event_timeout_ms": 0, 00:44:05.960 "dhchap_digests": [ 00:44:05.960 "sha256", 00:44:05.960 "sha384", 00:44:05.960 "sha512" 00:44:05.960 ], 00:44:05.960 "dhchap_dhgroups": [ 00:44:05.960 "null", 00:44:05.960 "ffdhe2048", 00:44:05.960 "ffdhe3072", 00:44:05.960 "ffdhe4096", 00:44:05.960 "ffdhe6144", 00:44:05.960 "ffdhe8192" 00:44:05.960 ], 00:44:05.960 "rdma_umr_per_io": false 00:44:05.960 } 00:44:05.960 }, 00:44:05.960 { 00:44:05.960 "method": "bdev_nvme_attach_controller", 00:44:05.960 "params": { 00:44:05.960 "name": "nvme0", 00:44:05.960 "trtype": "TCP", 00:44:05.960 "adrfam": "IPv4", 00:44:05.960 "traddr": "127.0.0.1", 00:44:05.960 "trsvcid": "4420", 00:44:05.960 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:05.960 "prchk_reftag": false, 00:44:05.960 "prchk_guard": false, 00:44:05.960 "ctrlr_loss_timeout_sec": 0, 00:44:05.960 "reconnect_delay_sec": 0, 00:44:05.960 "fast_io_fail_timeout_sec": 0, 00:44:05.960 "psk": "key0", 00:44:05.960 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:44:05.960 "hdgst": false, 00:44:05.960 "ddgst": false, 00:44:05.960 "multipath": "multipath" 00:44:05.960 } 00:44:05.960 }, 00:44:05.960 { 00:44:05.960 "method": "bdev_nvme_set_hotplug", 00:44:05.960 "params": { 00:44:05.960 "period_us": 
100000, 00:44:05.960 "enable": false 00:44:05.960 } 00:44:05.960 }, 00:44:05.960 { 00:44:05.960 "method": "bdev_wait_for_examine" 00:44:05.960 } 00:44:05.960 ] 00:44:05.960 }, 00:44:05.960 { 00:44:05.960 "subsystem": "nbd", 00:44:05.960 "config": [] 00:44:05.960 } 00:44:05.960 ] 00:44:05.960 }' 00:44:05.960 06:00:05 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:44:05.960 [2024-12-13 06:00:05.859096] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:44:05.960 [2024-12-13 06:00:05.859142] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid660507 ] 00:44:05.960 [2024-12-13 06:00:05.933167] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:05.960 [2024-12-13 06:00:05.955337] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:44:06.217 [2024-12-13 06:00:06.110717] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:44:06.782 06:00:06 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:44:06.782 06:00:06 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:44:06.782 06:00:06 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:44:06.782 06:00:06 keyring_file -- keyring/file.sh@121 -- # jq length 00:44:06.782 06:00:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:07.039 06:00:06 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:44:07.039 06:00:06 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:44:07.039 06:00:06 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:44:07.039 06:00:06 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:07.039 06:00:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:07.039 06:00:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:07.039 06:00:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:07.296 06:00:07 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:44:07.296 06:00:07 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:44:07.296 06:00:07 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:44:07.296 06:00:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:07.296 06:00:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:07.296 06:00:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:07.296 06:00:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:44:07.553 06:00:07 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:44:07.553 06:00:07 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:44:07.553 06:00:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:44:07.553 06:00:07 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:44:07.553 06:00:07 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:44:07.553 06:00:07 keyring_file -- keyring/file.sh@1 -- 
# cleanup 00:44:07.553 06:00:07 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.5FyHJEHZm6 /tmp/tmp.WcJwKUmC02 00:44:07.553 06:00:07 keyring_file -- keyring/file.sh@20 -- # killprocess 660507 00:44:07.553 06:00:07 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 660507 ']' 00:44:07.553 06:00:07 keyring_file -- common/autotest_common.sh@958 -- # kill -0 660507 00:44:07.553 06:00:07 keyring_file -- common/autotest_common.sh@959 -- # uname 00:44:07.553 06:00:07 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:07.553 06:00:07 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 660507 00:44:07.812 06:00:07 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:44:07.812 06:00:07 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:44:07.812 06:00:07 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 660507' 00:44:07.812 killing process with pid 660507 00:44:07.812 06:00:07 keyring_file -- common/autotest_common.sh@973 -- # kill 660507 00:44:07.812 Received shutdown signal, test time was about 1.000000 seconds 00:44:07.812 00:44:07.812 Latency(us) 00:44:07.812 [2024-12-13T05:00:07.827Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:07.812 [2024-12-13T05:00:07.827Z] =================================================================================================================== 00:44:07.812 [2024-12-13T05:00:07.827Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:44:07.812 06:00:07 keyring_file -- common/autotest_common.sh@978 -- # wait 660507 00:44:07.812 06:00:07 keyring_file -- keyring/file.sh@21 -- # killprocess 658887 00:44:07.812 06:00:07 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 658887 ']' 00:44:07.812 06:00:07 keyring_file -- common/autotest_common.sh@958 -- # kill -0 658887 00:44:07.812 06:00:07 keyring_file -- common/autotest_common.sh@959 -- # uname 00:44:07.812 06:00:07 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:07.812 06:00:07 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 658887 00:44:07.812 06:00:07 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:44:07.812 06:00:07 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:44:07.812 06:00:07 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 658887' 00:44:07.812 killing process with pid 658887 00:44:07.812 06:00:07 keyring_file -- common/autotest_common.sh@973 -- # kill 658887 00:44:07.812 06:00:07 keyring_file -- common/autotest_common.sh@978 -- # wait 658887 00:44:08.070 00:44:08.070 real 0m11.664s 00:44:08.070 user 0m29.093s 00:44:08.070 sys 0m2.606s 00:44:08.070 06:00:08 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:08.070 06:00:08 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:44:08.070 ************************************ 00:44:08.070 END TEST keyring_file 00:44:08.070 ************************************ 00:44:08.329 06:00:08 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:44:08.329 06:00:08 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:44:08.329 06:00:08 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:44:08.329 06:00:08 -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:44:08.329 06:00:08 -- common/autotest_common.sh@10 -- # set +x 00:44:08.329 ************************************ 00:44:08.329 START TEST keyring_linux 00:44:08.329 ************************************ 00:44:08.329 06:00:08 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:44:08.329 Joined session keyring: 111189092 00:44:08.329 * Looking for test storage... 00:44:08.329 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:44:08.329 06:00:08 keyring_linux -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:44:08.329 06:00:08 keyring_linux -- common/autotest_common.sh@1711 -- # lcov --version 00:44:08.329 06:00:08 keyring_linux -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:44:08.329 06:00:08 keyring_linux -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:44:08.329 06:00:08 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:44:08.329 06:00:08 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:44:08.329 06:00:08 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:44:08.329 06:00:08 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:44:08.329 06:00:08 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:44:08.329 06:00:08 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:44:08.329 06:00:08 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:44:08.329 06:00:08 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:44:08.329 06:00:08 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:44:08.329 06:00:08 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:44:08.329 06:00:08 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:44:08.329 06:00:08 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:44:08.329 06:00:08 keyring_linux -- scripts/common.sh@345 -- # : 1 00:44:08.329 06:00:08 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:44:08.329 06:00:08 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:44:08.329 06:00:08 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:44:08.329 06:00:08 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:44:08.329 06:00:08 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:44:08.329 06:00:08 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:44:08.329 06:00:08 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:44:08.329 06:00:08 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:44:08.329 06:00:08 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:44:08.329 06:00:08 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:44:08.329 06:00:08 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:44:08.329 06:00:08 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:44:08.329 06:00:08 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:44:08.329 06:00:08 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:44:08.329 06:00:08 keyring_linux -- scripts/common.sh@368 -- # return 0 00:44:08.329 06:00:08 keyring_linux -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:44:08.329 06:00:08 keyring_linux -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:44:08.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:08.329 --rc genhtml_branch_coverage=1 00:44:08.329 --rc genhtml_function_coverage=1 00:44:08.329 --rc genhtml_legend=1 00:44:08.329 --rc geninfo_all_blocks=1 00:44:08.329 --rc geninfo_unexecuted_blocks=1 00:44:08.329 00:44:08.329 ' 00:44:08.329 06:00:08 keyring_linux -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:44:08.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:08.329 --rc genhtml_branch_coverage=1 00:44:08.329 --rc genhtml_function_coverage=1 00:44:08.329 --rc genhtml_legend=1 00:44:08.329 --rc geninfo_all_blocks=1 00:44:08.329 --rc geninfo_unexecuted_blocks=1 00:44:08.329 00:44:08.329 ' 00:44:08.329 06:00:08 keyring_linux -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:44:08.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:08.329 --rc genhtml_branch_coverage=1 00:44:08.329 --rc genhtml_function_coverage=1 00:44:08.329 --rc genhtml_legend=1 00:44:08.329 --rc geninfo_all_blocks=1 00:44:08.329 --rc geninfo_unexecuted_blocks=1 00:44:08.329 00:44:08.329 ' 00:44:08.329 06:00:08 keyring_linux -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:44:08.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:08.329 --rc genhtml_branch_coverage=1 00:44:08.329 --rc genhtml_function_coverage=1 00:44:08.329 --rc genhtml_legend=1 00:44:08.329 --rc geninfo_all_blocks=1 00:44:08.329 --rc geninfo_unexecuted_blocks=1 00:44:08.329 00:44:08.329 ' 00:44:08.329 06:00:08 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:44:08.329 06:00:08 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:44:08.329 06:00:08 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:44:08.329 06:00:08 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:44:08.329 06:00:08 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:44:08.329 06:00:08 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:44:08.329 06:00:08 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:44:08.329 06:00:08 keyring_linux -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:44:08.329 06:00:08 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:44:08.329 06:00:08 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:44:08.329 06:00:08 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:44:08.329 06:00:08 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:44:08.329 06:00:08 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:44:08.588 06:00:08 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:44:08.588 06:00:08 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:44:08.588 06:00:08 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:44:08.588 06:00:08 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:44:08.588 06:00:08 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:44:08.588 06:00:08 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:44:08.588 06:00:08 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:44:08.588 06:00:08 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:44:08.588 06:00:08 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:44:08.588 06:00:08 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:08.588 06:00:08 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:08.588 06:00:08 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:08.588 06:00:08 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:08.588 06:00:08 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:08.588 06:00:08 keyring_linux -- paths/export.sh@5 -- # export PATH 00:44:08.588 06:00:08 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:44:08.588 06:00:08 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:44:08.588 06:00:08 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:44:08.588 06:00:08 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:44:08.588 06:00:08 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:44:08.588 06:00:08 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:44:08.588 06:00:08 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:44:08.588 06:00:08 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:44:08.588 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:44:08.588 06:00:08 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:44:08.588 06:00:08 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:44:08.588 06:00:08 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:44:08.588 06:00:08 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:44:08.588 06:00:08 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:44:08.588 06:00:08 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:44:08.588 06:00:08 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:44:08.588 06:00:08 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:44:08.588 06:00:08 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:44:08.588 06:00:08 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:44:08.588 06:00:08 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:44:08.588 06:00:08 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:44:08.588 06:00:08 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:44:08.588 06:00:08 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:44:08.588 06:00:08 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:44:08.588 06:00:08 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:44:08.588 06:00:08 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:44:08.588 06:00:08 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:44:08.588 06:00:08 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:44:08.588 06:00:08 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:44:08.588 06:00:08 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:44:08.588 06:00:08 keyring_linux -- nvmf/common.sh@733 -- # python - 00:44:08.588 06:00:08 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:44:08.588 06:00:08 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:44:08.588 /tmp/:spdk-test:key0 00:44:08.588 06:00:08 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:44:08.588 06:00:08 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:44:08.588 06:00:08 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:44:08.588 06:00:08 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:44:08.588 06:00:08 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:44:08.588 06:00:08 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:44:08.588 
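The prep_key trace above builds an NVMe TLS PSK in interchange format: the prefix NVMeTLSkey-1, a two-hex-digit hash indicator (00 here, digest 0 = no hash), and base64 of the configured key bytes with a little-endian CRC32 appended, then chmods the file to 0600, since keyring_file_check_path rejected the 0660 file earlier in the run. A sketch of that transformation, reconstructed from the format_interchange_psk/format_key trace (the exact heredoc lives in test/nvmf/common.sh):

format_interchange_psk() {
    local key=$1 digest=$2
    # Mirror the "python -" idiom from the trace: append CRC32 of the key
    # bytes (little-endian), base64 the result, and wrap it in the
    # NVMeTLSkey-1 interchange framing. end="" keeps the file newline-free.
    python - <<EOF
import base64, zlib
key = b"$key"
crc = zlib.crc32(key).to_bytes(4, byteorder="little")
print("NVMeTLSkey-1:{:02x}:{}:".format($digest, base64.b64encode(key + crc).decode()), end="")
EOF
}

path=$(mktemp)
format_interchange_psk 00112233445566778899aabbccddeeff 0 > "$path"
chmod 0600 "$path"  # anything more permissive is refused with -1 Operation not permitted
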
06:00:08 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:44:08.588 06:00:08 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:44:08.588 06:00:08 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:44:08.588 06:00:08 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:44:08.588 06:00:08 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:44:08.588 06:00:08 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:44:08.588 06:00:08 keyring_linux -- nvmf/common.sh@733 -- # python - 00:44:08.588 06:00:08 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:44:08.588 06:00:08 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:44:08.588 /tmp/:spdk-test:key1 00:44:08.588 06:00:08 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=661045 00:44:08.588 06:00:08 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 661045 00:44:08.588 06:00:08 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:44:08.588 06:00:08 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 661045 ']' 00:44:08.588 06:00:08 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:08.589 06:00:08 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:44:08.589 06:00:08 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:08.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:08.589 06:00:08 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:44:08.589 06:00:08 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:44:08.589 [2024-12-13 06:00:08.501744] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
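
The prep_key records above push each hex key through an inline `python -` step and land the results in /tmp/:spdk-test:key0 and /tmp/:spdk-test:key1 with mode 0600. A sketch of the NVMe/TCP PSK interchange encoding that step appears to perform; the NVMeTLSkey-1 prefix and the 00 (no-HMAC) digest field match the log, while the little-endian CRC32 placement is an assumption, not confirmed by the log:

format_interchange_psk() {
  local key=$1 digest=$2
  python3 - "$key" "$digest" <<'PY'
import base64, sys, zlib
key = sys.argv[1].encode()                    # the PSK travels as its ASCII hex text
crc = zlib.crc32(key).to_bytes(4, "little")   # 4-byte CRC32 appended; byte order assumed
print("NVMeTLSkey-1:%02x:%s:" % (int(sys.argv[2]),
      base64.b64encode(key + crc).decode()))
PY
}
format_interchange_psk 00112233445566778899aabbccddeeff 0

Decoding the MDAx…ZmZ prefix of the value the log later feeds to keyctl gives back the ASCII string 00112233445566778899aabbccddeeff, which is consistent with this reading.
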
00:44:08.589 [2024-12-13 06:00:08.501793] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid661045 ] 00:44:08.589 [2024-12-13 06:00:08.575067] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:08.589 [2024-12-13 06:00:08.597822] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:44:08.847 06:00:08 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:44:08.847 06:00:08 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:44:08.847 06:00:08 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:44:08.847 06:00:08 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:08.847 06:00:08 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:44:08.847 [2024-12-13 06:00:08.804280] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:44:08.847 null0 00:44:08.847 [2024-12-13 06:00:08.836326] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:44:08.847 [2024-12-13 06:00:08.836629] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:44:08.847 06:00:08 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:08.847 06:00:08 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:44:08.847 713152928 00:44:08.847 06:00:08 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:44:08.847 406274907 00:44:09.105 06:00:08 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=661193 00:44:09.105 06:00:08 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 661193 /var/tmp/bperf.sock 00:44:09.105 06:00:08 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:44:09.105 06:00:08 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 661193 ']' 00:44:09.105 06:00:08 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:44:09.105 06:00:08 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:44:09.105 06:00:08 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:44:09.105 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:44:09.105 06:00:08 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:44:09.105 06:00:08 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:44:09.105 [2024-12-13 06:00:08.908682] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
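
At this point both formatted keys are linked into the session keyring (serials 713152928 and 406274907) and bdevperf is parked on --wait-for-rpc. The next records drive it over /var/tmp/bperf.sock; condensed into a sketch, with $SPDK standing in for the long workspace path (an abbreviation, not a variable the log defines):

rpc="$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock"
$rpc keyring_linux_set_options --enable        # allow key lookups in the kernel session keyring
$rpc framework_start_init                      # release the --wait-for-rpc hold
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0
$rpc keyring_get_keys | jq length              # check_keys expects exactly one entry here
$SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
$rpc bdev_nvme_detach_controller nvme0         # afterwards check_keys expects zero entries
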
00:44:09.105 [2024-12-13 06:00:08.908726] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid661193 ] 00:44:09.105 [2024-12-13 06:00:08.983309] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:09.105 [2024-12-13 06:00:09.005583] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:44:09.105 06:00:09 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:44:09.105 06:00:09 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:44:09.105 06:00:09 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:44:09.105 06:00:09 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:44:09.362 06:00:09 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:44:09.362 06:00:09 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:44:09.624 06:00:09 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:44:09.624 06:00:09 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:44:09.624 [2024-12-13 06:00:09.638094] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:44:09.886 nvme0n1 00:44:09.886 06:00:09 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:44:09.886 06:00:09 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:44:09.886 06:00:09 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:44:09.886 06:00:09 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:44:09.886 06:00:09 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:44:09.886 06:00:09 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:10.143 06:00:09 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:44:10.143 06:00:09 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:44:10.143 06:00:09 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:44:10.143 06:00:09 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:44:10.143 06:00:09 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:10.143 06:00:09 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:10.143 06:00:09 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:44:10.143 06:00:10 keyring_linux -- keyring/linux.sh@25 -- # sn=713152928 00:44:10.143 06:00:10 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:44:10.143 06:00:10 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:44:10.143 06:00:10 keyring_linux -- 
keyring/linux.sh@26 -- # [[ 713152928 == \7\1\3\1\5\2\9\2\8 ]] 00:44:10.143 06:00:10 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 713152928 00:44:10.143 06:00:10 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:44:10.143 06:00:10 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:44:10.401 Running I/O for 1 seconds... 00:44:11.334 21170.00 IOPS, 82.70 MiB/s 00:44:11.334 Latency(us) 00:44:11.334 [2024-12-13T05:00:11.349Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:11.334 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:44:11.334 nvme0n1 : 1.01 21173.59 82.71 0.00 0.00 6025.56 4618.73 10423.34 00:44:11.334 [2024-12-13T05:00:11.349Z] =================================================================================================================== 00:44:11.334 [2024-12-13T05:00:11.349Z] Total : 21173.59 82.71 0.00 0.00 6025.56 4618.73 10423.34 00:44:11.334 { 00:44:11.334 "results": [ 00:44:11.334 { 00:44:11.334 "job": "nvme0n1", 00:44:11.334 "core_mask": "0x2", 00:44:11.334 "workload": "randread", 00:44:11.334 "status": "finished", 00:44:11.334 "queue_depth": 128, 00:44:11.334 "io_size": 4096, 00:44:11.334 "runtime": 1.005923, 00:44:11.334 "iops": 21173.588833340127, 00:44:11.334 "mibps": 82.70933138023487, 00:44:11.334 "io_failed": 0, 00:44:11.334 "io_timeout": 0, 00:44:11.334 "avg_latency_us": 6025.5584116401615, 00:44:11.334 "min_latency_us": 4618.727619047619, 00:44:11.334 "max_latency_us": 10423.344761904762 00:44:11.334 } 00:44:11.334 ], 00:44:11.334 "core_count": 1 00:44:11.334 } 00:44:11.334 06:00:11 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:44:11.334 06:00:11 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:44:11.592 06:00:11 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:44:11.592 06:00:11 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:44:11.592 06:00:11 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:44:11.592 06:00:11 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:44:11.592 06:00:11 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:44:11.592 06:00:11 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:11.850 06:00:11 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:44:11.850 06:00:11 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:44:11.850 06:00:11 keyring_linux -- keyring/linux.sh@23 -- # return 00:44:11.850 06:00:11 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:44:11.850 06:00:11 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:44:11.850 06:00:11 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk 
:spdk-test:key1 00:44:11.850 06:00:11 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:44:11.850 06:00:11 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:11.850 06:00:11 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:44:11.850 06:00:11 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:11.850 06:00:11 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:44:11.850 06:00:11 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:44:11.850 [2024-12-13 06:00:11.825038] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:44:11.850 [2024-12-13 06:00:11.826006] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1117700 (107): Transport endpoint is not connected 00:44:11.850 [2024-12-13 06:00:11.827000] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1117700 (9): Bad file descriptor 00:44:11.850 [2024-12-13 06:00:11.828002] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:44:11.850 [2024-12-13 06:00:11.828011] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:44:11.850 [2024-12-13 06:00:11.828018] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:44:11.850 [2024-12-13 06:00:11.828025] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:44:11.850 request: 00:44:11.850 { 00:44:11.850 "name": "nvme0", 00:44:11.850 "trtype": "tcp", 00:44:11.850 "traddr": "127.0.0.1", 00:44:11.850 "adrfam": "ipv4", 00:44:11.850 "trsvcid": "4420", 00:44:11.850 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:11.850 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:44:11.850 "prchk_reftag": false, 00:44:11.850 "prchk_guard": false, 00:44:11.850 "hdgst": false, 00:44:11.850 "ddgst": false, 00:44:11.850 "psk": ":spdk-test:key1", 00:44:11.850 "allow_unrecognized_csi": false, 00:44:11.850 "method": "bdev_nvme_attach_controller", 00:44:11.850 "req_id": 1 00:44:11.850 } 00:44:11.850 Got JSON-RPC error response 00:44:11.850 response: 00:44:11.850 { 00:44:11.850 "code": -5, 00:44:11.850 "message": "Input/output error" 00:44:11.850 } 00:44:11.850 06:00:11 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:44:11.850 06:00:11 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:44:11.850 06:00:11 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:44:11.850 06:00:11 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:44:11.850 06:00:11 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:44:11.850 06:00:11 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:44:11.850 06:00:11 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:44:11.850 06:00:11 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:44:11.850 06:00:11 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:44:11.850 06:00:11 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:44:11.850 06:00:11 keyring_linux -- keyring/linux.sh@33 -- # sn=713152928 00:44:11.850 06:00:11 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 713152928 00:44:11.850 1 links removed 00:44:11.850 06:00:11 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:44:11.850 06:00:11 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:44:11.850 06:00:11 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:44:11.850 06:00:11 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:44:11.850 06:00:11 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:44:11.850 06:00:11 keyring_linux -- keyring/linux.sh@33 -- # sn=406274907 00:44:11.850 06:00:11 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 406274907 00:44:11.850 1 links removed 00:44:11.850 06:00:11 keyring_linux -- keyring/linux.sh@41 -- # killprocess 661193 00:44:11.850 06:00:11 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 661193 ']' 00:44:11.850 06:00:11 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 661193 00:44:12.108 06:00:11 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:44:12.109 06:00:11 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:12.109 06:00:11 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 661193 00:44:12.109 06:00:11 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:44:12.109 06:00:11 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:44:12.109 06:00:11 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 661193' 00:44:12.109 killing process with pid 661193 00:44:12.109 06:00:11 keyring_linux -- common/autotest_common.sh@973 -- # kill 661193 00:44:12.109 Received shutdown signal, test time was about 1.000000 seconds 00:44:12.109 00:44:12.109 
Latency(us) 00:44:12.109 [2024-12-13T05:00:12.124Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:12.109 [2024-12-13T05:00:12.124Z] =================================================================================================================== 00:44:12.109 [2024-12-13T05:00:12.124Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:44:12.109 06:00:11 keyring_linux -- common/autotest_common.sh@978 -- # wait 661193 00:44:12.109 06:00:12 keyring_linux -- keyring/linux.sh@42 -- # killprocess 661045 00:44:12.109 06:00:12 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 661045 ']' 00:44:12.109 06:00:12 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 661045 00:44:12.109 06:00:12 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:44:12.109 06:00:12 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:12.109 06:00:12 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 661045 00:44:12.109 06:00:12 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:44:12.109 06:00:12 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:44:12.109 06:00:12 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 661045' 00:44:12.109 killing process with pid 661045 00:44:12.109 06:00:12 keyring_linux -- common/autotest_common.sh@973 -- # kill 661045 00:44:12.109 06:00:12 keyring_linux -- common/autotest_common.sh@978 -- # wait 661045 00:44:12.676 00:44:12.676 real 0m4.272s 00:44:12.676 user 0m8.044s 00:44:12.676 sys 0m1.438s 00:44:12.676 06:00:12 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:12.676 06:00:12 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:44:12.676 ************************************ 00:44:12.676 END TEST keyring_linux 00:44:12.676 ************************************ 00:44:12.676 06:00:12 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:44:12.676 06:00:12 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:44:12.676 06:00:12 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:44:12.676 06:00:12 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:44:12.676 06:00:12 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:44:12.676 06:00:12 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:44:12.676 06:00:12 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:44:12.676 06:00:12 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:44:12.676 06:00:12 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:44:12.676 06:00:12 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:44:12.676 06:00:12 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:44:12.676 06:00:12 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:44:12.676 06:00:12 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:44:12.676 06:00:12 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:44:12.676 06:00:12 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:44:12.676 06:00:12 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:44:12.676 06:00:12 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:44:12.676 06:00:12 -- common/autotest_common.sh@726 -- # xtrace_disable 00:44:12.676 06:00:12 -- common/autotest_common.sh@10 -- # set +x 00:44:12.676 06:00:12 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:44:12.676 06:00:12 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:44:12.676 06:00:12 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:44:12.676 06:00:12 -- common/autotest_common.sh@10 -- # set +x 00:44:17.954 INFO: APP EXITING 00:44:17.954 INFO: 
killing all VMs 00:44:17.954 INFO: killing vhost app 00:44:17.954 INFO: EXIT DONE 00:44:21.241 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:44:21.241 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:44:21.241 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:44:21.241 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:44:21.241 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:44:21.241 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:44:21.241 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:44:21.241 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:44:21.241 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:44:21.241 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:44:21.241 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:44:21.241 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:44:21.241 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:44:21.241 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:44:21.241 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:44:21.241 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:44:21.241 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:44:23.849 Cleaning 00:44:23.849 Removing: /var/run/dpdk/spdk0/config 00:44:23.849 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:44:23.849 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:44:23.849 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:44:23.849 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:44:23.849 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:44:23.849 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:44:23.849 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:44:23.849 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:44:23.849 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:44:23.849 Removing: /var/run/dpdk/spdk0/hugepage_info 00:44:23.849 Removing: /var/run/dpdk/spdk1/config 00:44:23.849 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:44:23.849 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:44:23.849 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:44:23.849 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:44:23.849 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:44:23.849 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:44:23.849 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:44:23.849 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:44:23.849 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:44:23.849 Removing: /var/run/dpdk/spdk1/hugepage_info 00:44:23.849 Removing: /var/run/dpdk/spdk2/config 00:44:23.849 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:44:23.849 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:44:23.849 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:44:23.849 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:44:23.849 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:44:23.849 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:44:23.849 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:44:23.849 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:44:23.849 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:44:23.849 Removing: /var/run/dpdk/spdk2/hugepage_info 00:44:23.849 Removing: /var/run/dpdk/spdk3/config 00:44:23.849 Removing: 
/var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:44:23.850 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:44:23.850 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:44:23.850 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:44:23.850 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:44:23.850 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:44:23.850 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:44:23.850 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:44:23.850 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:44:23.850 Removing: /var/run/dpdk/spdk3/hugepage_info 00:44:24.109 Removing: /var/run/dpdk/spdk4/config 00:44:24.109 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:44:24.109 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:44:24.109 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:44:24.109 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:44:24.109 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:44:24.109 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:44:24.109 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:44:24.109 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:44:24.109 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:44:24.109 Removing: /var/run/dpdk/spdk4/hugepage_info 00:44:24.109 Removing: /dev/shm/bdev_svc_trace.1 00:44:24.109 Removing: /dev/shm/nvmf_trace.0 00:44:24.109 Removing: /dev/shm/spdk_tgt_trace.pid104262 00:44:24.109 Removing: /var/run/dpdk/spdk0 00:44:24.109 Removing: /var/run/dpdk/spdk1 00:44:24.109 Removing: /var/run/dpdk/spdk2 00:44:24.109 Removing: /var/run/dpdk/spdk3 00:44:24.109 Removing: /var/run/dpdk/spdk4 00:44:24.109 Removing: /var/run/dpdk/spdk_pid102185 00:44:24.109 Removing: /var/run/dpdk/spdk_pid103207 00:44:24.109 Removing: /var/run/dpdk/spdk_pid104262 00:44:24.109 Removing: /var/run/dpdk/spdk_pid104885 00:44:24.109 Removing: /var/run/dpdk/spdk_pid105813 00:44:24.109 Removing: /var/run/dpdk/spdk_pid105921 00:44:24.109 Removing: /var/run/dpdk/spdk_pid106960 00:44:24.109 Removing: /var/run/dpdk/spdk_pid106994 00:44:24.109 Removing: /var/run/dpdk/spdk_pid107342 00:44:24.109 Removing: /var/run/dpdk/spdk_pid108829 00:44:24.109 Removing: /var/run/dpdk/spdk_pid110192 00:44:24.109 Removing: /var/run/dpdk/spdk_pid110478 00:44:24.109 Removing: /var/run/dpdk/spdk_pid110761 00:44:24.109 Removing: /var/run/dpdk/spdk_pid111057 00:44:24.109 Removing: /var/run/dpdk/spdk_pid111271 00:44:24.109 Removing: /var/run/dpdk/spdk_pid111685 00:44:24.109 Removing: /var/run/dpdk/spdk_pid112009 00:44:24.109 Removing: /var/run/dpdk/spdk_pid112307 00:44:24.109 Removing: /var/run/dpdk/spdk_pid113051 00:44:24.109 Removing: /var/run/dpdk/spdk_pid116150 00:44:24.109 Removing: /var/run/dpdk/spdk_pid116358 00:44:24.109 Removing: /var/run/dpdk/spdk_pid116539 00:44:24.109 Removing: /var/run/dpdk/spdk_pid116662 00:44:24.109 Removing: /var/run/dpdk/spdk_pid116956 00:44:24.109 Removing: /var/run/dpdk/spdk_pid117141 00:44:24.109 Removing: /var/run/dpdk/spdk_pid117495 00:44:24.109 Removing: /var/run/dpdk/spdk_pid117629 00:44:24.109 Removing: /var/run/dpdk/spdk_pid117886 00:44:24.109 Removing: /var/run/dpdk/spdk_pid117900 00:44:24.109 Removing: /var/run/dpdk/spdk_pid118150 00:44:24.109 Removing: /var/run/dpdk/spdk_pid118155 00:44:24.109 Removing: /var/run/dpdk/spdk_pid118702 00:44:24.109 Removing: /var/run/dpdk/spdk_pid118951 00:44:24.109 Removing: /var/run/dpdk/spdk_pid119246 00:44:24.109 Removing: /var/run/dpdk/spdk_pid122890 00:44:24.109 
Removing: /var/run/dpdk/spdk_pid127286 00:44:24.109 Removing: /var/run/dpdk/spdk_pid137188 00:44:24.109 Removing: /var/run/dpdk/spdk_pid137832 00:44:24.109 Removing: /var/run/dpdk/spdk_pid142028 00:44:24.109 Removing: /var/run/dpdk/spdk_pid142464 00:44:24.109 Removing: /var/run/dpdk/spdk_pid146666 00:44:24.109 Removing: /var/run/dpdk/spdk_pid152437 00:44:24.109 Removing: /var/run/dpdk/spdk_pid155141 00:44:24.109 Removing: /var/run/dpdk/spdk_pid165663 00:44:24.109 Removing: /var/run/dpdk/spdk_pid174456 00:44:24.109 Removing: /var/run/dpdk/spdk_pid176237 00:44:24.109 Removing: /var/run/dpdk/spdk_pid177142 00:44:24.368 Removing: /var/run/dpdk/spdk_pid193820 00:44:24.368 Removing: /var/run/dpdk/spdk_pid197878 00:44:24.368 Removing: /var/run/dpdk/spdk_pid279082 00:44:24.368 Removing: /var/run/dpdk/spdk_pid284382 00:44:24.368 Removing: /var/run/dpdk/spdk_pid290375 00:44:24.368 Removing: /var/run/dpdk/spdk_pid296770 00:44:24.368 Removing: /var/run/dpdk/spdk_pid296865 00:44:24.368 Removing: /var/run/dpdk/spdk_pid297589 00:44:24.368 Removing: /var/run/dpdk/spdk_pid298446 00:44:24.368 Removing: /var/run/dpdk/spdk_pid299348 00:44:24.368 Removing: /var/run/dpdk/spdk_pid299884 00:44:24.368 Removing: /var/run/dpdk/spdk_pid300018 00:44:24.368 Removing: /var/run/dpdk/spdk_pid300255 00:44:24.368 Removing: /var/run/dpdk/spdk_pid300276 00:44:24.368 Removing: /var/run/dpdk/spdk_pid300411 00:44:24.368 Removing: /var/run/dpdk/spdk_pid301193 00:44:24.368 Removing: /var/run/dpdk/spdk_pid302076 00:44:24.368 Removing: /var/run/dpdk/spdk_pid302979 00:44:24.368 Removing: /var/run/dpdk/spdk_pid303528 00:44:24.368 Removing: /var/run/dpdk/spdk_pid303657 00:44:24.368 Removing: /var/run/dpdk/spdk_pid303888 00:44:24.368 Removing: /var/run/dpdk/spdk_pid304897 00:44:24.368 Removing: /var/run/dpdk/spdk_pid305854 00:44:24.368 Removing: /var/run/dpdk/spdk_pid313960 00:44:24.368 Removing: /var/run/dpdk/spdk_pid342812 00:44:24.368 Removing: /var/run/dpdk/spdk_pid347228 00:44:24.368 Removing: /var/run/dpdk/spdk_pid348848 00:44:24.368 Removing: /var/run/dpdk/spdk_pid350608 00:44:24.368 Removing: /var/run/dpdk/spdk_pid350837 00:44:24.368 Removing: /var/run/dpdk/spdk_pid350979 00:44:24.368 Removing: /var/run/dpdk/spdk_pid351084 00:44:24.368 Removing: /var/run/dpdk/spdk_pid351570 00:44:24.368 Removing: /var/run/dpdk/spdk_pid353353 00:44:24.368 Removing: /var/run/dpdk/spdk_pid354097 00:44:24.368 Removing: /var/run/dpdk/spdk_pid354583 00:44:24.368 Removing: /var/run/dpdk/spdk_pid357188 00:44:24.368 Removing: /var/run/dpdk/spdk_pid357617 00:44:24.368 Removing: /var/run/dpdk/spdk_pid358319 00:44:24.368 Removing: /var/run/dpdk/spdk_pid362294 00:44:24.368 Removing: /var/run/dpdk/spdk_pid367575 00:44:24.368 Removing: /var/run/dpdk/spdk_pid367577 00:44:24.368 Removing: /var/run/dpdk/spdk_pid367578 00:44:24.368 Removing: /var/run/dpdk/spdk_pid371424 00:44:24.368 Removing: /var/run/dpdk/spdk_pid375166 00:44:24.368 Removing: /var/run/dpdk/spdk_pid380039 00:44:24.368 Removing: /var/run/dpdk/spdk_pid415745 00:44:24.368 Removing: /var/run/dpdk/spdk_pid419811 00:44:24.368 Removing: /var/run/dpdk/spdk_pid425680 00:44:24.368 Removing: /var/run/dpdk/spdk_pid426954 00:44:24.368 Removing: /var/run/dpdk/spdk_pid428237 00:44:24.368 Removing: /var/run/dpdk/spdk_pid429525 00:44:24.368 Removing: /var/run/dpdk/spdk_pid434046 00:44:24.368 Removing: /var/run/dpdk/spdk_pid438367 00:44:24.368 Removing: /var/run/dpdk/spdk_pid442632 00:44:24.368 Removing: /var/run/dpdk/spdk_pid449971 00:44:24.368 Removing: /var/run/dpdk/spdk_pid450089 00:44:24.368 Removing: 
/var/run/dpdk/spdk_pid454506 00:44:24.368 Removing: /var/run/dpdk/spdk_pid454728 00:44:24.368 Removing: /var/run/dpdk/spdk_pid454948 00:44:24.368 Removing: /var/run/dpdk/spdk_pid455398 00:44:24.368 Removing: /var/run/dpdk/spdk_pid455404 00:44:24.368 Removing: /var/run/dpdk/spdk_pid456766 00:44:24.368 Removing: /var/run/dpdk/spdk_pid458477 00:44:24.368 Removing: /var/run/dpdk/spdk_pid460082 00:44:24.368 Removing: /var/run/dpdk/spdk_pid461639 00:44:24.368 Removing: /var/run/dpdk/spdk_pid463221 00:44:24.368 Removing: /var/run/dpdk/spdk_pid464959 00:44:24.368 Removing: /var/run/dpdk/spdk_pid470705 00:44:24.627 Removing: /var/run/dpdk/spdk_pid471271 00:44:24.627 Removing: /var/run/dpdk/spdk_pid472970 00:44:24.627 Removing: /var/run/dpdk/spdk_pid473989 00:44:24.627 Removing: /var/run/dpdk/spdk_pid479628 00:44:24.627 Removing: /var/run/dpdk/spdk_pid482764 00:44:24.627 Removing: /var/run/dpdk/spdk_pid488043 00:44:24.627 Removing: /var/run/dpdk/spdk_pid493289 00:44:24.627 Removing: /var/run/dpdk/spdk_pid501648 00:44:24.627 Removing: /var/run/dpdk/spdk_pid508502 00:44:24.627 Removing: /var/run/dpdk/spdk_pid508517 00:44:24.627 Removing: /var/run/dpdk/spdk_pid527496 00:44:24.627 Removing: /var/run/dpdk/spdk_pid528322 00:44:24.627 Removing: /var/run/dpdk/spdk_pid528939 00:44:24.627 Removing: /var/run/dpdk/spdk_pid529471 00:44:24.627 Removing: /var/run/dpdk/spdk_pid530164 00:44:24.627 Removing: /var/run/dpdk/spdk_pid530655 00:44:24.627 Removing: /var/run/dpdk/spdk_pid531113 00:44:24.627 Removing: /var/run/dpdk/spdk_pid531685 00:44:24.627 Removing: /var/run/dpdk/spdk_pid535753 00:44:24.627 Removing: /var/run/dpdk/spdk_pid535974 00:44:24.627 Removing: /var/run/dpdk/spdk_pid541943 00:44:24.627 Removing: /var/run/dpdk/spdk_pid541994 00:44:24.627 Removing: /var/run/dpdk/spdk_pid547371 00:44:24.627 Removing: /var/run/dpdk/spdk_pid551528 00:44:24.627 Removing: /var/run/dpdk/spdk_pid560912 00:44:24.627 Removing: /var/run/dpdk/spdk_pid561482 00:44:24.627 Removing: /var/run/dpdk/spdk_pid565656 00:44:24.627 Removing: /var/run/dpdk/spdk_pid565909 00:44:24.627 Removing: /var/run/dpdk/spdk_pid570018 00:44:24.627 Removing: /var/run/dpdk/spdk_pid576104 00:44:24.627 Removing: /var/run/dpdk/spdk_pid578486 00:44:24.627 Removing: /var/run/dpdk/spdk_pid588139 00:44:24.627 Removing: /var/run/dpdk/spdk_pid596704 00:44:24.627 Removing: /var/run/dpdk/spdk_pid598415 00:44:24.627 Removing: /var/run/dpdk/spdk_pid599288 00:44:24.627 Removing: /var/run/dpdk/spdk_pid614926 00:44:24.627 Removing: /var/run/dpdk/spdk_pid618671 00:44:24.627 Removing: /var/run/dpdk/spdk_pid621858 00:44:24.627 Removing: /var/run/dpdk/spdk_pid629510 00:44:24.627 Removing: /var/run/dpdk/spdk_pid629597 00:44:24.627 Removing: /var/run/dpdk/spdk_pid634547 00:44:24.627 Removing: /var/run/dpdk/spdk_pid636406 00:44:24.627 Removing: /var/run/dpdk/spdk_pid638169 00:44:24.627 Removing: /var/run/dpdk/spdk_pid639394 00:44:24.627 Removing: /var/run/dpdk/spdk_pid641313 00:44:24.627 Removing: /var/run/dpdk/spdk_pid642353 00:44:24.627 Removing: /var/run/dpdk/spdk_pid650909 00:44:24.627 Removing: /var/run/dpdk/spdk_pid651357 00:44:24.627 Removing: /var/run/dpdk/spdk_pid651809 00:44:24.627 Removing: /var/run/dpdk/spdk_pid654056 00:44:24.627 Removing: /var/run/dpdk/spdk_pid654597 00:44:24.627 Removing: /var/run/dpdk/spdk_pid655142 00:44:24.627 Removing: /var/run/dpdk/spdk_pid658887 00:44:24.627 Removing: /var/run/dpdk/spdk_pid658893 00:44:24.627 Removing: /var/run/dpdk/spdk_pid660507 00:44:24.627 Removing: /var/run/dpdk/spdk_pid661045 00:44:24.627 Removing: 
/var/run/dpdk/spdk_pid661193 00:44:24.627 Clean 00:44:24.885 06:00:24 -- common/autotest_common.sh@1453 -- # return 0 00:44:24.885 06:00:24 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:44:24.885 06:00:24 -- common/autotest_common.sh@732 -- # xtrace_disable 00:44:24.885 06:00:24 -- common/autotest_common.sh@10 -- # set +x 00:44:24.885 06:00:24 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:44:24.885 06:00:24 -- common/autotest_common.sh@732 -- # xtrace_disable 00:44:24.885 06:00:24 -- common/autotest_common.sh@10 -- # set +x 00:44:24.885 06:00:24 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:44:24.885 06:00:24 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:44:24.885 06:00:24 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:44:24.885 06:00:24 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:44:24.885 06:00:24 -- spdk/autotest.sh@398 -- # hostname 00:44:24.886 06:00:24 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-04 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:44:25.144 geninfo: WARNING: invalid characters removed from testname! 00:44:47.071 06:00:45 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:44:48.447 06:00:48 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:44:50.347 06:00:50 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:44:52.248 06:00:52 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:44:54.150 06:00:53 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 
--rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:44:56.051 06:00:55 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:44:57.955 06:00:57 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:44:57.955 06:00:57 -- spdk/autorun.sh@1 -- $ timing_finish 00:44:57.955 06:00:57 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]] 00:44:57.955 06:00:57 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:44:57.955 06:00:57 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:44:57.955 06:00:57 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:44:57.955 + [[ -n 7539 ]] 00:44:57.955 + sudo kill 7539 00:44:57.965 [Pipeline] } 00:44:57.979 [Pipeline] // stage 00:44:57.984 [Pipeline] } 00:44:57.998 [Pipeline] // timeout 00:44:58.004 [Pipeline] } 00:44:58.018 [Pipeline] // catchError 00:44:58.023 [Pipeline] } 00:44:58.037 [Pipeline] // wrap 00:44:58.043 [Pipeline] } 00:44:58.055 [Pipeline] // catchError 00:44:58.063 [Pipeline] stage 00:44:58.065 [Pipeline] { (Epilogue) 00:44:58.074 [Pipeline] catchError 00:44:58.076 [Pipeline] { 00:44:58.086 [Pipeline] echo 00:44:58.087 Cleanup processes 00:44:58.091 [Pipeline] sh 00:44:58.373 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:44:58.373 673131 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:44:58.388 [Pipeline] sh 00:44:58.674 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:44:58.674 ++ grep -v 'sudo pgrep' 00:44:58.674 ++ awk '{print $1}' 00:44:58.674 + sudo kill -9 00:44:58.674 + true 00:44:58.685 [Pipeline] sh 00:44:58.970 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:45:11.185 [Pipeline] sh 00:45:11.469 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:45:11.469 Artifacts sizes are good 00:45:11.484 [Pipeline] archiveArtifacts 00:45:11.491 Archiving artifacts 00:45:11.918 [Pipeline] sh 00:45:12.330 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:45:12.345 [Pipeline] cleanWs 00:45:12.355 [WS-CLEANUP] Deleting project workspace... 00:45:12.355 [WS-CLEANUP] Deferred wipeout is used... 00:45:12.362 [WS-CLEANUP] done 00:45:12.364 [Pipeline] } 00:45:12.381 [Pipeline] // catchError 00:45:12.392 [Pipeline] sh 00:45:12.675 + logger -p user.info -t JENKINS-CI 00:45:12.684 [Pipeline] } 00:45:12.697 [Pipeline] // stage 00:45:12.702 [Pipeline] } 00:45:12.715 [Pipeline] // node 00:45:12.720 [Pipeline] End of Pipeline 00:45:12.778 Finished: SUCCESS
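
For reference, the coverage post-processing near the end of the run reduces to the lcov sequence below; the --rc instrumentation flags repeated on every call in the log are elided, and $SPDK/$OUT abbreviate the workspace and output paths (shorthand, not variables the log defines):

lcov -q -c --no-external -d "$SPDK" -t spdk-wfp-04 -o "$OUT/cov_test.info"        # capture test coverage
lcov -q -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"  # merge with baseline
lcov -q -r "$OUT/cov_total.info" '*/dpdk/*' -o "$OUT/cov_total.info"              # drop DPDK sources
lcov -q -r "$OUT/cov_total.info" --ignore-errors unused,unused '/usr/*' -o "$OUT/cov_total.info"
lcov -q -r "$OUT/cov_total.info" '*/examples/vmd/*' -o "$OUT/cov_total.info"
lcov -q -r "$OUT/cov_total.info" '*/app/spdk_lspci/*' -o "$OUT/cov_total.info"
lcov -q -r "$OUT/cov_total.info" '*/app/spdk_top/*' -o "$OUT/cov_total.info"
rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR                           # as in autotest.sh@408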